# Releases: MyPrototypeWhat/context-chef
## @context-chef/tanstack-ai@0.2.1

### Patch Changes

- 2e13c66 Thanks @MyPrototypeWhat! - Bump the `@context-chef/core` peer dependency to pick up the new media-aware compression strategy (attachments are now stripped to `[image]`/`[document]` text placeholders before reaching the compression model). No source changes in this package — multimodal compression behavior is driven entirely by core.
- Updated dependencies [2e13c66]:
  - @context-chef/core@3.2.0
## @context-chef/core@3.2.0

### Minor Changes

- 2e13c66 Thanks @MyPrototypeWhat!

  ### Compression now strips media attachments to text placeholders

  `Janitor.executeCompression()` no longer ships binary attachment data through the compression call. Each attachment in the messages being compressed is replaced inline with a `[image]` / `[image: photo.png]` / `[document]` / `[document: report.pdf]` text marker before the `compressionModel` is invoked. The summarizer sees that media existed at this point in the conversation without being asked to process raw base64.

  - Modeled on Claude Code's `stripImagesFromMessages` strategy
  - Avoids prompt-too-long failures on the compression call itself when histories contain many images
  - An empty `mediaType` produces `[attachment]` instead of a misleading `[document]`
  - `toKeep` (the recent messages preserved verbatim) is untouched — its attachments still reach the main model through the target adapter

  ### Removed `Prompts.MEDIA_DESCRIPTION_INSTRUCTION`

  The constant is gone from the exported `Prompts` object. It was previously appended to the compression prompt when attachments were detected, asking the compression model to "describe the visual content." In practice this never worked — `compressionModel` is a `(Message[]) => Promise<string>` function with no adapter pipeline, so the binary data on `Message.attachments` was never actually forwarded to the LLM. The new placeholder-based strategy supersedes it.

  If you imported `Prompts.MEDIA_DESCRIPTION_INSTRUCTION` directly, remove the reference — the behavior it described was already a no-op.
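The placeholder strategy above can be sketched roughly as follows. The `Attachment`/`Message` shapes and the helper names here are illustrative stand-ins, not the library's actual exported types:

```typescript
// Sketch of the placeholder strategy: replace binary attachments with text
// markers before the compression call. Shapes are hypothetical.
interface Attachment {
  mediaType?: string; // e.g. "image/png", "application/pdf"
  filename?: string;  // e.g. "photo.png"
}
interface Message {
  role: "user" | "assistant" | "system";
  content: string;
  attachments?: Attachment[];
}

// One attachment becomes [image], [image: photo.png], [document: report.pdf],
// or [attachment] when mediaType is empty.
function attachmentPlaceholder(a: Attachment): string {
  const kind = !a.mediaType
    ? "attachment"
    : a.mediaType.startsWith("image/")
      ? "image"
      : "document";
  return a.filename ? `[${kind}: ${a.filename}]` : `[${kind}]`;
}

// Replace every attachment with its marker, leaving the originals untouched.
function stripAttachments(messages: Message[]): Message[] {
  return messages.map((m) => {
    if (!m.attachments?.length) return m;
    const markers = m.attachments.map(attachmentPlaceholder).join(" ");
    return { role: m.role, content: `${m.content} ${markers}`.trim() };
  });
}
```

The summarizer then receives only text, so histories heavy with images cannot overflow the compression call's own prompt window.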
## @context-chef/ai-sdk-middleware@1.1.3

### Patch Changes

- 2e13c66 Thanks @MyPrototypeWhat! - `fromAISDK()` now maps AI SDK `FilePart` (type `'file'`) on user and assistant messages to IR attachments, so multimodal turns participate in the new core compression placeholder logic (`[image]`/`[document]` markers in the compression payload).

  `Attachment.data` in the middleware path is a presence/metadata signal only — Janitor reads `m.attachments?.length` for placeholder injection but never the binary itself. The actual `Uint8Array`/URL/string payload round-trips losslessly through `_userContent`/`_assistantContent`, which `toAISDK()` hands back to the underlying AI SDK provider verbatim. No re-encoding, no data loss.
- Updated dependencies [2e13c66]:
  - @context-chef/core@3.2.0
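A minimal sketch of the file-part mapping described above. `FilePart` here is a local stand-in mirroring the AI SDK's shape rather than an import, and `IRAttachment` is hypothetical:

```typescript
// Map an AI SDK-style file part to an IR attachment. The compression path
// only checks attachments?.length; the payload itself is never inspected.
interface FilePart {
  type: "file";
  mediaType: string;
  filename?: string;
  data: Uint8Array | string; // raw bytes, base64, or a URL string
}
interface IRAttachment {
  mediaType: string;
  filename?: string;
  data: Uint8Array | string; // carried as a presence/metadata signal only
}

function filePartToAttachment(part: FilePart): IRAttachment {
  return {
    mediaType: part.mediaType,
    filename: part.filename,
    data: part.data, // passed through untouched; no re-encoding
  };
}
```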
## @context-chef/tanstack-ai@0.2.0

### Minor Changes

- 246175c Thanks @MyPrototypeWhat!

  ### New Package: @context-chef/tanstack-ai

  TanStack AI `ChatMiddleware` powered by context-chef. Drop in as a single middleware to get transparent history compression, tool result truncation, and token budget management.

  Features:

  - History Compression — automatically compresses older messages when the conversation exceeds the token budget, with optional LLM-based summarization via a cheap adapter
  - Tool Result Truncation — large tool outputs are truncated while preserving head and tail, with optional VFS storage for full content retrieval
  - Token Budget Tracking — extracts `promptTokens` from `onUsage` callbacks and feeds it back to the compression engine automatically
  - Compact (Mechanical Pruning) — zero-LLM-cost removal of tool call/result pairs and empty messages, with configurable retention modes
  - Dynamic State Injection — injects runtime state as XML into the last user message or system prompt on every call
  - Transform Context Hook — custom post-processing for RAG injection or prompt manipulation

  Adapter:

  - `fromTanStackAI()` / `toTanStackAI()` — lossless round-trip converters between TanStack AI `ModelMessage[]` and context-chef `Message[]` IR
  - Preserves multimodal `ContentPart[]` content and `providerMetadata` on tool calls through the round trip via `_originalContent`/`_originalToolCalls` pass-through fields
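The pass-through mechanism behind the lossless round trip can be illustrated like this. The field and type names below are simplified stand-ins for the real adapters, which handle full `ModelMessage` shapes:

```typescript
// Pass-through pattern: stash the original rich content on a private field
// going into the IR, restore it verbatim going back out. Names hypothetical.
interface ProviderMessage {
  role: string;
  content: unknown; // rich ContentPart[]-style payload
}
interface IRMessage {
  role: string;
  content: string;
  _originalContent?: unknown; // carried through untouched
}

function fromProviderSketch(msg: ProviderMessage): IRMessage {
  return {
    role: msg.role,
    content: typeof msg.content === "string" ? msg.content : "",
    _originalContent: msg.content, // preserve for the lossless round trip
  };
}

function toProviderSketch(msg: IRMessage): ProviderMessage {
  return {
    role: msg.role,
    // Prefer the stashed original so multimodal parts survive verbatim.
    content: msg._originalContent ?? msg.content,
  };
}
```

The design keeps the IR simple (plain strings) while guaranteeing the provider sees exactly what it originally produced.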
### Patch Changes

- Updated dependencies [246175c]:
  - @context-chef/core@3.1.1
## @context-chef/core@3.1.1

### Patch Changes

- 246175c Thanks @MyPrototypeWhat! - Change license from ISC to MIT
## @context-chef/core@3.1.0

### Minor Changes

- d6169e4 Thanks @MyPrototypeWhat!

  ### Multimodal Attachment Support

  - Added `Attachment` interface and `Message.attachments` field to the IR for provider-neutral media representation
  - Janitor detects `attachments` during compression and augments the prompt with `MEDIA_DESCRIPTION_INSTRUCTION` to guide the compression model toward describing image/media content in summaries
  - Output adapters (`compile()`) now convert `attachments` to provider-specific formats:
    - OpenAI: `image_url`/`file` content parts
    - Anthropic: `image`/`document` content blocks
    - Gemini: `inlineData`/`fileData` parts

  ### Input Adapters (Provider → IR)

  - Added `fromOpenAI()`, `fromAnthropic()`, `fromGemini()` to convert provider-native messages to ContextChef IR
  - Each returns `{ system, history }` — system messages are automatically separated from conversation history
  - Multimodal content (images, files, documents) is automatically mapped to IR `attachments`
  - New types: `HistoryMessage`, `ParsedMessages`
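The `{ system, history }` separation the input adapters perform can be sketched as below. The shapes are illustrative; the real adapters accept provider-native message formats:

```typescript
// Sketch of the system/history split: pull system messages out of the
// stream, everything else becomes conversation history. Names hypothetical.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}
interface ParsedResult {
  system?: string;
  history: ChatMessage[];
}

function parseMessagesSketch(messages: ChatMessage[]): ParsedResult {
  const systemParts = messages
    .filter((m) => m.role === "system")
    .map((m) => m.content);
  return {
    // Multiple system messages are joined; none at all yields undefined.
    system: systemParts.length ? systemParts.join("\n") : undefined,
    history: messages.filter((m) => m.role !== "system"),
  };
}
```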
## @context-chef/ai-sdk-middleware@1.1.2

### Patch Changes

- 246175c Thanks @MyPrototypeWhat! - Change license from ISC to MIT
- Updated dependencies [246175c]:
  - @context-chef/core@3.1.1
## @context-chef/ai-sdk-middleware@1.1.1

### Patch Changes

- Updated dependencies [d6169e4]:
  - @context-chef/core@3.1.0
## @context-chef/ai-sdk-middleware@1.1.0

### Minor Changes

- dceea52 Thanks @MyPrototypeWhat! - Replace the compact implementation with AI SDK's `pruneMessages`

  Breaking change to `CompactConfig`.

  Before:

  ```ts
  compact: { clear: ["thinking", { target: "tool-result", keepRecent: 5 }] }
  ```

  After:

  ```ts
  compact: { reasoning: 'all', toolCalls: 'before-last-message' }
  ```

  - `CompactConfig.clear` replaced with `reasoning`, `toolCalls`, and `emptyMessages` fields, matching the `pruneMessages` parameters
  - Compact now runs before IR conversion (on raw AI SDK messages) instead of after
  - Removed `TOOL_RESULT_CLEARED_INSTRUCTION` system prompt injection — `pruneMessages` removes chunks entirely rather than replacing them with placeholders
  - Per-tool pruning support via the `toolCalls` array form: `[{ type: 'before-last-message', tools: ['search'] }]`
## @context-chef/core@3.0.3

### Patch Changes

- 25b2b98 Thanks @MyPrototypeWhat! - Janitor compression pipeline improvements and internal type cleanup.

  **New `Prompts.formatCompactSummary(raw)` utility** — strips `<analysis>` scratchpad blocks and extracts content from `<summary>` tags, falling back to cleaned raw text when no tags are present. `executeCompression()` now pipes `compressionModel` output through this cleaner before wrapping with `getCompactSummaryWrapper`, preventing XML scaffolding from leaking into the next context window. Before this change, the default prompt asked the model to wrap its output in `<summary></summary>` but nothing stripped the tags — they silently leaked into the continuation context.

  **Upgraded `CONTEXT_COMPACTION_INSTRUCTION`** — now uses a two-phase `<analysis>` + `<summary>` + `<example>` pattern (inspired by Claude Code's compact prompt) for measurably better summary quality. The 5 output sections remain domain-agnostic (Task Overview / Current State / Important Discoveries / Next Steps / Context to Preserve), so the prompt works for support, research, shopping, coding, or any other conversational agent — no coding-specific language introduced.

  **New `JanitorConfig.customCompressionInstructions?: string`** — additional focused instructions appended to the default prompt as an "Additional Instructions:" section. Additive (not a replacement), so the default scaffolding that enforces the `<analysis>`/`<summary>` parsing contract is always preserved. Users who need radically different compression behavior can still provide their own `compressionModel` entirely.

  ```ts
  new ContextChef({
    janitor: {
      compressionModel,
      customCompressionInstructions:
        "Focus on customer sentiment, unresolved issues, and preserve ticket IDs verbatim.",
    },
  });
  ```
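The tag-stripping behavior of `formatCompactSummary` can be sketched like this. This is a hypothetical reconstruction from the notes above, not the library's source:

```typescript
// Drop <analysis> scratchpad blocks, prefer <summary> content, and fall
// back to the cleaned raw text when no tags are present.
function formatCompactSummarySketch(raw: string): string {
  const withoutAnalysis = raw.replace(/<analysis>[\s\S]*?<\/analysis>/g, "");
  const match = withoutAnalysis.match(/<summary>([\s\S]*?)<\/summary>/);
  return (match ? match[1] : withoutAnalysis).trim();
}
```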
  **Compression circuit breaker** — after three consecutive `compressionModel` failures, subsequent `compress()` calls return history unchanged instead of retrying. This prevents sessions from hammering a broken compression endpoint on every turn (e.g. expired API key, rate limit lockout). The counter resets on successful compression, explicit `janitor.reset()`, or `chef.clearHistory()`. The `consecutiveFailures` field is part of `JanitorSnapshot` and preserved by `chef.snapshot()`/`chef.restore()`; `restoreState()` uses `?? 0` for defensive backward compatibility with snapshots serialized by older versions.

  **Removed `Prompts.DEEP_CONVERSATION_SUMMARIZATION`** — this export was unreferenced internal dead code with an inconsistent `<history_summary>` contract that diverged from the default prompt's `<summary>` contract. External code that imported it (unlikely, as it was never documented) should migrate to `CONTEXT_COMPACTION_INSTRUCTION`, which now covers the same detailed-summary use case via the upgraded scaffolding.

  **Internal type cleanup** — replaced scattered `as` type assertions with generics, type guards, and typed helpers across core source and test files. From 40+ cast sites, only two unavoidable assertions remain, both documented:

  - `Assembler.orderKeysDeterministically<T>` — a single boundary assertion to express the "same shape, reordered keys" transformation, which TypeScript cannot model at the type level. The function is now generic, so call sites no longer need their own casts.
  - `TypedEventEmitter.emit` — a necessary widening to call a stored `EventHandler<never>` with a concrete payload. Storage now uses contravariance (`Set<EventHandler<never>>`), so `on()` and `off()` no longer need casts; only the call site in `emit()` retains one, guarded by a runtime invariant established in `on()`.
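The compression circuit breaker described in this entry can be sketched as follows. The threshold and `consecutiveFailures` name follow the release notes; the surrounding class is a simplified stand-in for the real Janitor:

```typescript
// Minimal circuit-breaker sketch: stop calling a failing compression
// endpoint after three consecutive failures, failing open with the
// unmodified history so the conversation stays usable.
const MAX_CONSECUTIVE_FAILURES = 3;

type Msg = { role: string; content: string };
type CompressionModel = (messages: Msg[]) => Promise<string>;

class JanitorSketch {
  consecutiveFailures = 0;

  constructor(private compressionModel: CompressionModel) {}

  async compress(history: Msg[]): Promise<Msg[]> {
    // Circuit open: return history unchanged instead of retrying.
    if (this.consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) return history;
    try {
      const summary = await this.compressionModel(history);
      this.consecutiveFailures = 0; // success resets the counter
      return [{ role: "user", content: summary }];
    } catch {
      this.consecutiveFailures += 1;
      return history; // fail open on this turn
    }
  }

  reset(): void {
    this.consecutiveFailures = 0;
  }
}
```

Failing open (rather than throwing) is what keeps a session with an expired key or rate-limit lockout functional, at the cost of an uncompressed context.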
  **Additional source cleanups** — `Pruner` uses a new `isRecord()` type guard; `VFSMemoryStore` uses typed variable coercion instead of `JSON.parse(...) as T`; adapter implementations (`anthropicAdapter`, `openAIAdapter`, `geminiAdapter`) declare SDK types explicitly instead of trailing `as SDKType` on object literals.
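For illustration, a common shape for an `isRecord()` guard like the one mentioned above — a hypothetical reconstruction, not the library's actual source:

```typescript
// Narrow unknown to a plain object record, excluding null and arrays,
// so callers can index into it without an `as` cast.
function isRecord(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}
```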