Problem
When using `structuredChatResource` with a complex schema containing a large `anyOf` discriminated union (e.g., 10+ branches), OpenAI's `response_format: { type: "json_schema", strict: true }` mode causes severe output quality degradation.
What happens
I'm building a generative dashboard where the LLM outputs a grid of UI components. The schema has 10 component types (`kpi-card`, `chart`, `data-table`, `text-block`, `metric-card`, `progress-list`, `metrics-bar`, `ranked-list`, `comparison-card`, `stat-highlight`) in a discriminated union via `s.anyOf([...])`.
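For reference, the union has roughly this shape (a hedged sketch: branch fields like `label`, `value`, and `series` are illustrative, and it is shown as plain JSON Schema rather than the actual skillet builders):

```ts
// Illustrative JSON Schema for the component union (field names are assumed;
// only 2 of the 10 branches shown). Each branch is discriminated by `type`.
const componentSchema = {
  anyOf: [
    {
      type: "object",
      properties: {
        type: { const: "kpi-card" },
        label: { type: "string" },
        value: { type: "number" },
      },
      required: ["type", "label", "value"],
    },
    {
      type: "object",
      properties: {
        type: { const: "chart" },
        title: { type: "string" },
        series: { type: "array", items: { type: "number" } },
      },
      required: ["type", "title", "series"],
    },
    // ...8 more branches: data-table, text-block, metric-card, progress-list,
    // metrics-bar, ranked-list, comparison-card, stat-highlight
  ],
};
```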
With this schema, GPT-4.1:
- Generates only 3 simple cards instead of the 6-10 requested
- Ignores explicit user requests ("add a chart + table") and defaults to the simplest branch (kpi-card)
- Fills optional fields unnecessarily (progress bars with no meaningful value)
- Produces round/fake-looking numbers despite real data being in the prompt
Why it happens
OpenAI's strict structured output uses constrained decoding — a grammar-level filter on every token. With a 10-branch anyOf, the decoder must track 10 parallel parse states for every component cell. This makes the model extremely conservative, defaulting to the simplest union branch.
This is not specific to OpenAI — any provider using grammar-constrained structured output (Gemini, vLLM, llama.cpp) will hit similar degradation with large unions.
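To make the parse-state explosion concrete, here is a toy model of the discriminator step (my own simplification, not OpenAI's actual grammar engine): the decoder must keep every branch alive whose literal still matches the emitted prefix.

```ts
// Toy constrained decoder over the union's `type` discriminator literals.
// Real grammar engines track full parse states per branch; this sketch only
// tracks which discriminator strings remain consistent with the output so far.
const discriminators = [
  "kpi-card", "chart", "data-table", "text-block", "metric-card",
  "progress-list", "metrics-bar", "ranked-list", "comparison-card",
  "stat-highlight",
];

function liveBranches(emitted: string): string[] {
  return discriminators.filter((d) => d.startsWith(emitted));
}

function allowedNextChars(emitted: string): Set<string> {
  const next = new Set<string>();
  for (const d of liveBranches(emitted)) {
    if (d.length > emitted.length) next.add(d[emitted.length]);
  }
  return next;
}

// Before any character is emitted, all 10 branches are live in parallel.
console.log(liveBranches("").length); // 10
// Emitting "c" collapses the state to the chart/comparison-card branches.
console.log(liveBranches("c")); // ["chart", "comparison-card"]
```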
Before hashbrown (worked well)
Before adopting hashbrown, I used `response_format: { type: "json_object" }` with the schema described in the system prompt. The model generated diverse, complete dashboards with all 10 component types because there was no grammar enforcement — just prompt-guided JSON generation.
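That pre-hashbrown setup looked roughly like this (a sketch of the Chat Completions request body; the prompt text is illustrative):

```ts
// json_object mode: the provider only guarantees syntactically valid JSON.
// The expected shape lives entirely in the system prompt, so no grammar
// constrains the decoder and the model keeps its full sampling freedom.
const systemPrompt = [
  'Respond with JSON of the form { "components": [...] }.',
  "Each component is one of: kpi-card, chart, data-table, text-block,",
  "metric-card, progress-list, metrics-bar, ranked-list, comparison-card,",
  "stat-highlight. (Full field descriptions omitted here.)",
].join("\n");

const requestBody = {
  model: "gpt-4.1",
  response_format: { type: "json_object" }, // no schema attached to the request
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: "Build a revenue dashboard with a chart and a table." },
  ],
};
```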
Proposed solution
Add a third response format mode alongside `strict` (default) and `emulateStructuredOutput`:
```ts
provideHashbrown({
  baseUrl: '...',
  responseFormatMode: 'json_object', // NEW — uses { type: "json_object" }
});
```
Or at the resource level:
```ts
structuredChatResource({
  model: 'gpt-4.1',
  schema: MyComplexSkillet,
  responseFormatMode: 'json_object', // override per-resource
});
```
Behavior:
- Sends `response_format: { type: "json_object" }` to the provider (no schema in the request)
- Appends the JSON Schema description to the system prompt so the model knows the expected shape
- Validates the response against the skillet client-side after generation
- On validation failure: retries (respecting the existing `retries` option) or surfaces the error
This gives the best of both worlds for complex schemas: the model has full creative freedom while hashbrown ensures conformance.
Alternatives considered
| Approach | Issue |
| --- | --- |
| Simplify the schema (fewer types) | Limits what the LLM can generate — defeats the purpose |
| `emulateStructuredOutput: true` | Designed for providers without native JSON support, not for this use case |
| Flatten `anyOf` to single object | Workaround that loses type discrimination at the schema level |
Environment
- `@hashbrownai/core`: 0.4.1
- `@hashbrownai/angular`: 0.4.1
- `@hashbrownai/openai`: 0.4.1
- Model: `gpt-4.1`
- Schema: 10-branch discriminated union with nested objects (~300 lines of skillet definitions)