
backport v6 #5

Open
callycodes wants to merge 1703 commits into AllTheTables:main from vercel:main

Conversation

@callycodes

## Background

## Summary

## Manual Verification

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A patch changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
- [ ] Formatting issues have been fixed (run `pnpm prettier-fix` in the project root)
- [ ] I have reviewed this pull request (self-review)

## Future Work

## Related Issues

ctate and others added 30 commits February 11, 2026 16:21
…events (#12456)

When using xAI Grok models (e.g. `grok-4-1-fast-reasoning`) with
function tools via the Responses API, the model streams
`response.function_call_arguments.delta` and
`response.function_call_arguments.done` events — standard Responses API
events also used by OpenAI. The `@ai-sdk/xai` provider's Zod schema did
not include these event types, causing `AI_TypeValidationError` and
breaking the entire stream whenever a function tool call was attempted.

**Summary**

- Added `response.function_call_arguments.delta` and
`response.function_call_arguments.done` to `xaiResponsesChunkSchema` in
`xai-responses-api.ts`
- Updated the stream handler in `xai-responses-language-model.ts` to:
  - Track ongoing tool calls by `output_index`
  - Emit `tool-input-start` on `response.output_item.added` for `function_call` items
  - Emit `tool-input-delta` for each `response.function_call_arguments.delta` event
  - Emit `tool-input-end` and `tool-call` on `response.output_item.done` (once arguments are fully received)
  - Skip `response.custom_tool_call_input.delta/done` events for `function_call` items (these are handled by the `output_item` events)
- Added unit test for function call argument streaming
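
The handler logic above amounts to a small state machine keyed by `output_index`. A simplified sketch (event and stream-part shapes are illustrative, not the exact SDK types):

```typescript
type XaiEvent =
  | { type: 'response.output_item.added'; output_index: number; item: { type: string; id: string; name?: string } }
  | { type: 'response.function_call_arguments.delta'; output_index: number; delta: string }
  | { type: 'response.output_item.done'; output_index: number };

type StreamPart =
  | { type: 'tool-input-start'; id: string; toolName: string }
  | { type: 'tool-input-delta'; id: string; delta: string }
  | { type: 'tool-input-end'; id: string }
  | { type: 'tool-call'; id: string; toolName: string; input: string };

function handleXaiToolEvents(events: XaiEvent[]): StreamPart[] {
  // Ongoing function tool calls, tracked by output_index.
  const ongoing = new Map<number, { id: string; toolName: string; args: string }>();
  const parts: StreamPart[] = [];
  for (const event of events) {
    switch (event.type) {
      case 'response.output_item.added':
        if (event.item.type === 'function_call') {
          ongoing.set(event.output_index, { id: event.item.id, toolName: event.item.name ?? '', args: '' });
          parts.push({ type: 'tool-input-start', id: event.item.id, toolName: event.item.name ?? '' });
        }
        break;
      case 'response.function_call_arguments.delta': {
        const call = ongoing.get(event.output_index);
        if (call) {
          call.args += event.delta; // accumulate the JSON argument string
          parts.push({ type: 'tool-input-delta', id: call.id, delta: event.delta });
        }
        break;
      }
      case 'response.output_item.done': {
        const call = ongoing.get(event.output_index);
        if (call) {
          // Arguments are complete: close the input and emit the tool call.
          parts.push({ type: 'tool-input-end', id: call.id });
          parts.push({ type: 'tool-call', id: call.id, toolName: call.toolName, input: call.args });
          ongoing.delete(event.output_index);
        }
        break;
      }
    }
  }
  return parts;
}
```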

**Manual Verification**

Built the patched `@ai-sdk/xai` locally and ran an e2e test against the
real xAI API using `grok-4-1-fast-reasoning` with a function tool:

**Before fix** — `AI_TypeValidationError` on
`response.function_call_arguments.done`:
```
AI_TypeValidationError: Type validation failed: Value: {
  "sequence_number":4,
  "type":"response.function_call_arguments.done",
  "arguments":"{}",
  "item_id":"fc_14f28817-...",
  "output_index":0
}
```

**After fix** — full streaming pipeline works:
```
tool-input-start:  PASS
tool-input-delta:  PASS
tool-call:         PASS
no errors:         PASS
```

**Checklist**

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

**Related Issues**

Fixes function tool calls failing with `AI_TypeValidationError` for xAI
Grok models using the Responses API.
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/xai@3.0.54

### Patch Changes

- 902e93b: Add support for `response.function_call_arguments.delta` and
`response.function_call_arguments.done` streaming events in the xAI
Responses API provider. Previously, xAI Grok models using function tools
would fail with `AI_TypeValidationError` because these standard
Responses API events were missing from the Zod schema and stream
handler.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
#12462)

## Background

When generating videos from images using the Google Vertex AI provider,
the MIME type was not being passed along with the base64-encoded image
data. This resulted in Vertex failing with an error about a missing MIME
type for the media.

## Summary

This PR fixes the Google Vertex provider's image-to-video generation by
properly passing the MIME type with the base64-encoded image data.
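
Schematically, the fix keeps the MIME type alongside the encoded bytes in the request (field names follow Vertex's REST convention for illustration; this is not the provider's exact code):

```typescript
// Split a data URL into its MIME type and base64 payload.
function parseDataUrl(dataUrl: string): { mimeType: string; base64: string } {
  const match = /^data:([^;]+);base64,(.+)$/.exec(dataUrl);
  if (!match) throw new Error('expected a base64 data URL');
  return { mimeType: match[1], base64: match[2] };
}

// Build the image part for an image-to-video request. Previously only the
// bytes were sent; the MIME type must travel with them.
function toVertexImagePart(dataUrl: string) {
  const { mimeType, base64 } = parseDataUrl(dataUrl);
  return { bytesBase64Encoded: base64, mimeType };
}
```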

## Manual Verification

Tested the image-to-video generation functionality with a base64-encoded
image and confirmed that the video is generated correctly with the MIME
type properly passed to the API.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/google-vertex@4.0.54

### Patch Changes

- a8835e9: fix (provider/google-vertex): pass mime type with i2v video
generation

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…parsing (#12474)

## Background

The gateway video model was missing support for certain warning types in
the response parsing schema, specifically for 'unsupported' and
'compatibility' warnings. This limited the ability to properly handle
and display these warning types when they were returned from providers.

## Summary

Added support for additional warning types in the gateway video model
response parsing:
- Added 'unsupported' warning type with feature and optional details
fields
- Added 'compatibility' warning type with feature and optional details
fields
- Updated the schema to use a discriminated union for proper type
validation
- Added tests to verify the correct handling of these new warning types
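
In TypeScript terms, the schema change corresponds to a discriminated union over the `type` field. A simplified sketch (the actual schema is defined with Zod and may carry more variants; the `other` variant is assumed here):

```typescript
type GatewayVideoWarning =
  | { type: 'unsupported'; feature: string; details?: string }
  | { type: 'compatibility'; feature: string; details?: string }
  | { type: 'other'; message: string }; // assumed pre-existing variant

// The discriminant lets the compiler check each branch exhaustively.
function describeWarning(warning: GatewayVideoWarning): string {
  switch (warning.type) {
    case 'unsupported':
      return `unsupported: ${warning.feature}${warning.details ? ` (${warning.details})` : ''}`;
    case 'compatibility':
      return `compatibility: ${warning.feature}${warning.details ? ` (${warning.details})` : ''}`;
    case 'other':
      return warning.message;
  }
}
```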

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Fixes an issue where 'unsupported' and 'compatibility' warnings from
video providers weren't properly handled by the gateway.
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## ai@6.0.82

### Patch Changes

-   Updated dependencies [1819bc1]
    -   @ai-sdk/gateway@3.0.42

## @ai-sdk/angular@2.0.83

### Patch Changes

-   ai@6.0.82

## @ai-sdk/gateway@3.0.42

### Patch Changes

- 1819bc1: fix (provider/gateway): add missing warning types for video
response parsing

## @ai-sdk/langchain@2.0.87

### Patch Changes

-   ai@6.0.82

## @ai-sdk/llamaindex@2.0.82

### Patch Changes

-   ai@6.0.82

## @ai-sdk/react@3.0.84

### Patch Changes

-   ai@6.0.82

## @ai-sdk/rsc@2.0.82

### Patch Changes

-   ai@6.0.82

## @ai-sdk/svelte@4.0.82

### Patch Changes

-   ai@6.0.82

## @ai-sdk/vue@3.0.82

### Patch Changes

-   ai@6.0.82

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/amazon-bedrock@4.0.57

### Patch Changes

- 61d25a9: fix(provider/amazon-bedrock): extract response metadata from
api headers
- 08f54fc: feat(provider/amazon-bedrock): add performanceConfig,
serviceTier, and cacheDetails to provider metadata

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
… for existing schemas (#12457)

## Background

Follow-up to #12443: six providers still had internal `providerOptions`
schemas that weren't renamed to follow the normalized naming convention
or exported. This PR addresses the remaining gaps tracked in #12269.

## Summary

Renames and exports provider-specific model options types for 6
providers that already had Zod schemas internally but weren't following
the normalized naming convention:

- `@ai-sdk/amazon-bedrock`: `AmazonBedrockEmbeddingModelOptions`
- `@ai-sdk/lmnt`: `LMNTSpeechModelOptions`
- `@ai-sdk/hume`: `HumeSpeechModelOptions`
- `@ai-sdk/revai`: `RevaiTranscriptionModelOptions`
- `@ai-sdk/assemblyai`: `AssemblyAITranscriptionModelOptions`
- `@ai-sdk/gladia`: `GladiaTranscriptionModelOptions`

Also updates documentation code snippets and examples to use `satisfies`
with the newly exported types.
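
The `satisfies` pattern checks an options object against the exported type without widening its inferred type. A sketch with a stand-in type (the real types, such as `LMNTSpeechModelOptions`, live in the provider packages):

```typescript
// Stand-in for an exported provider options type.
type SpeechModelOptions = {
  speed?: number;
  language?: string;
};

// `satisfies` validates the shape at compile time while preserving the
// inferred literal type: options.speed stays number, not number | undefined.
const options = {
  speed: 1.25,
  language: 'en',
} satisfies SpeechModelOptions;

const speed: number = options.speed; // no narrowing or non-null assertion needed
```

Using a plain type annotation (`const options: SpeechModelOptions = …`) would instead widen `speed` to `number | undefined`, which is why the docs switched to `satisfies`.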

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

Other categories of providers from #12269 still need attention:
- Providers using raw passthrough without schemas (e.g. elevenlabs, fal,
sarvam)
- Providers extending OpenAI-compatible that inherit options
- Providers without embedding providerOptions support

These will require separate follow-up issues.

## Related Issues

Fixes #12269
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/amazon-bedrock@4.0.58

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

## @ai-sdk/assemblyai@2.0.19

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

## @ai-sdk/gladia@2.0.19

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

## @ai-sdk/hume@2.0.19

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

## @ai-sdk/lmnt@2.0.19

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

## @ai-sdk/revai@2.0.19

### Patch Changes

- 242696c: feat: normalize and export provider specific model options
type names for existing schemas

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

When using LangGraph with `toUIMessageStream`, there's no way to access
the final graph state after stream completion. The existing `onFinal`
callback only provides aggregated text, not the full state object
containing tool calls, custom fields, or other graph-specific data.

Currently, consumers must wrap the raw LangGraph stream with custom
logic before passing to `toUIMessageStream`:

```typescript
// Current workaround
const stream = withOnSuccess(rawStream, async finalState => {
  await sendAnalytics(finalState);
});
const uiStream = toUIMessageStream(stream);
```

## Summary

Extends `StreamCallbacks` with three new callbacks:

| Callback          | When Called           | Receives                                                |
| ----------------- | --------------------- | ------------------------------------------------------- |
| `onFinish(state)` | Successful completion | LangGraph state (or `undefined` for model/streamEvents) |
| `onError(error)`  | Stream error          | The `Error` object                                      |
| `onAbort()`       | Stream aborted        | Nothing                                                 |

```typescript
toUIMessageStream<MyGraphState>(graphStream, {
  onFinish: async finalState => {
    if (finalState) {
      await saveConversation(finalState.messages);
    }
  },
  onError: error => logger.error('Stream failed', error),
  onAbort: () => logger.warn('Client disconnected'),
});
```

**Callback lifecycle:**

| Outcome | `onFinal`   | `onFinish` | `onError` | `onAbort` |
| ------- | ----------- | ---------- | --------- | --------- |
| Success | ✓ (text)    | ✓ (state)  | -         | -         |
| Error   | ✓ (partial) | -          | ✓         | -         |
| Abort   | ✓ (partial) | -          | -         | ✓         |

**Other changes:**

- Added `parseLangGraphEvent` helper to `utils.ts` (extracts duplicated
logic)
- Updated README with callbacks documentation

**Design rationale:**

This aligns with the core SDK's `streamText` callback pattern:

| Core SDK   | This PR    | Notes                                       |
| ---------- | ---------- | ------------------------------------------- |
| `onFinish` | `onFinish` | Success only, receives result data          |
| `onError`  | `onError`  | Error path                                  |
| `onAbort`  | `onAbort`  | Abort path                                  |
| -          | `onFinal`  | LangChain-specific legacy (aggregated text) |

The naming overlap between `onFinal` and `onFinish` exists because
`onFinal` was added to the langchain package before the core SDK
standardized on `onFinish`. We preserve `onFinal` for backward
compatibility.

**Trade-offs:**

- `onFinish` receives `undefined` for non-LangGraph streams (consistent
lifecycle vs. conditional invocation)
- Abort detection only catches standard `AbortError`; edge cases go to
`onError`
- Type safety is opt-in via generic parameter
- `onFinish` currently only receives `state`, not a rich event object
like core SDK

## Manual Verification

Verified callback behavior using the `next-langchain` example with a
local LangGraph agent. Confirmed:

- `onFinish` receives complete state after successful stream
- `onError` fires on thrown errors
- `onAbort` fires when request is cancelled via AbortController

## Future Work

**Consolidate `onFinal` into `onFinish` (breaking change):**

The ideal API would deprecate `onFinal` and have `onFinish` receive a
unified result object:

```typescript
interface StreamCallbacks<TState = unknown> {
  onStart?: () => void;
  onChunk?: (text: string) => void;
  onFinish?: (result: StreamResult<TState>) => void;
}

type StreamResult<TState> =
  | { status: 'success'; text: string; state: TState | undefined }
  | { status: 'error'; text: string; error: Error }
  | { status: 'abort'; text: string };
```

This would:

- Eliminate the confusing `onFinal` vs `onFinish` naming
- Always call `onFinish` (success, error, or abort)
- Match the pattern users expect from the core SDK

**Enrich `onFinish` event data:**

The core SDK's `onFinish` receives rich metadata (`usage`,
`finishReason`, `steps`, etc.). We could expand ours:

```typescript
onFinish?: (event: {
  text: string;           // accumulated text
  state: TState | undefined;  // LangGraph state
  // Future additions:
  messages?: UIMessage[]; // extracted from state
  duration?: number;      // stream duration in ms
}) => void;
```

**Other improvements:**

- Add `onStepStart`/`onStepFinish` callbacks for LangGraph step
transitions
- Expand abort detection to handle more edge cases (e.g., `ECONNRESET`)
if users report issues
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/langchain@2.0.88

### Patch Changes

- 2b29f7a: Add `onFinish`, `onError`, and `onAbort` callbacks to
`StreamCallbacks` for `toUIMessageStream`.

    -   `onFinish(state)`: Called on successful completion with final LangGraph state (or `undefined` for other stream types)
    -   `onError(error)`: Called when stream encounters an error
    -   `onAbort()`: Called when stream is aborted

Also adds `parseLangGraphEvent` helper for parsing LangGraph event
tuples.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## ai@6.0.83

### Patch Changes

-   Updated dependencies [b424e50]
    -   @ai-sdk/gateway@3.0.43

## @ai-sdk/angular@2.0.84

### Patch Changes

-   ai@6.0.83

## @ai-sdk/gateway@3.0.43

### Patch Changes

- b424e50: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@2.0.89

### Patch Changes

-   ai@6.0.83

## @ai-sdk/llamaindex@2.0.83

### Patch Changes

-   ai@6.0.83

## @ai-sdk/react@3.0.85

### Patch Changes

-   ai@6.0.83

## @ai-sdk/rsc@2.0.83

### Patch Changes

-   ai@6.0.83

## @ai-sdk/svelte@4.0.83

### Patch Changes

-   ai@6.0.83

## @ai-sdk/vue@3.0.83

### Patch Changes

-   ai@6.0.83

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
#12494)

## Background

`useSearchGrounding` is an obsolete provider option for the Google (and
Vertex) providers that was deprecated long ago. The v5 migration guide
already documents how to migrate away from it, but several examples and
e2e tests still referenced it.

## Summary

Replaced all `useSearchGrounding` providerOption usages with the current
`googleSearch` tool API.

The only remaining `useSearchGrounding` reference is in the v5 migration
guide, which is correct since it documents historical behavior.

## Manual Verification

N/A — example-only changes, verified via `pnpm type-check:full` and
running the examples.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

Fixes #12438
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

The Gladia tests used an inline `prepareJsonResponse` with hand-constructed mock data. This migrates them to the fixture-based pattern used across other providers.

## Summary

- Add 3 fixture files for Gladia's 3-step API flow (upload, initiate, poll result)
- Rewrite tests to load fixtures from the `__fixtures__/` directory
- Add a full snapshot test for the `doGenerate` response
- Add standalone response headers and response metadata tests
- All 5 original test cases preserved + 1 new snapshot test

## Verification

<details>
<summary>test output</summary>

```
 ✓ src/gladia-transcription-model.test.ts  (6 tests) 35ms

 Test Files  2 passed (2)
      Tests  7 passed (7)
```

</details>
## Background

Multi-step tool calls fail because the `thoughtSignature` gets lost due
to a key mismatch in `providerOptions`: if the `thoughtSignature` is
passed under the `google` key, the Vertex provider silently drops it.
The API then rejects the request, and we see the errors observed in #12351.

## Summary

Adds a fallback check: if the `vertex` key doesn't contain a
`thoughtSignature`, we check whether it was passed under the `google` key.
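
A minimal sketch of that fallback (shapes simplified; in the provider these values come from the message part's `providerOptions`):

```typescript
type ProviderOptions = {
  vertex?: { thoughtSignature?: string };
  google?: { thoughtSignature?: string };
};

// Prefer the vertex key, but fall back to the google key so signatures
// recorded under either name survive multi-step tool calls.
function getThoughtSignature(options: ProviderOptions): string | undefined {
  return options.vertex?.thoughtSignature ?? options.google?.thoughtSignature;
}
```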

## Manual Verification

Verified via the repro code given in the original issue, which is
resolved after the fix.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Fixes #12351
## Background

Reported in #12326, but some extra changes were needed.

## Summary

Updated the dependency version and the pnpm lockfile.
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/google@3.0.27

### Patch Changes

-   051361b: fix(vertex): add fallback for providerOptions keyname

## @ai-sdk/google-vertex@4.0.55

### Patch Changes

-   Updated dependencies [051361b]
    -   @ai-sdk/google@3.0.27

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…n't using it yet (#12498)

## Background

The `registry/generate-image.ts` example was the only image-generating
example in `ai-functions` that manually wrote images with
`fs.writeFileSync` instead of using the shared `presentImages` helper.

## Summary

Replaced the manual `fs.writeFileSync` + `console.log` with a call to
`presentImages([image])`, matching other image generation examples.
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…enerateImage` (#12502)

## Background

Follow-up to #12252 (which added Gemini image generation to the `google`
provider). This wasn't yet wired up for the `google-vertex` provider.

## Summary

- Adds Gemini image model support to `google-vertex`'s
`generateImage()`. When a `gemini-*` model ID is detected, the image
model internally delegates to `GoogleGenerativeAILanguageModel` with
`responseModalities: ['IMAGE']`, matching the approach used in the
`google` provider. Existing Imagen model behavior is unchanged.
- Includes docs for this new `generateImage` capability for
`google-vertex`
- Also improves provider options typing in both `google` and
`google-vertex` image models to use `satisfies
GoogleLanguageModelOptions`
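
The routing can be pictured as a model-id check (illustrative only; the real provider constructs and delegates to the language model internally):

```typescript
// gemini-* image model ids delegate to GoogleGenerativeAILanguageModel;
// everything else keeps the existing Imagen path.
function vertexImageModelKind(modelId: string): 'gemini-delegate' | 'imagen' {
  return modelId.startsWith('gemini-') ? 'gemini-delegate' : 'imagen';
}

// Per the description above, the delegate is invoked with image output enabled.
const delegateSettings = { responseModalities: ['IMAGE'] as const };
```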

## Manual Verification

Ran `google-vertex-gemini-image.ts` and
`google-vertex-gemini-editing.ts` examples against Vertex AI with real
credentials. Both image generation and image editing produce correct
results.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

Fixes #12452
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/google@3.0.28

### Patch Changes

- 5a307f5: feat(provider/google-vertex): allow using Gemini image models
with `generateImage`

## @ai-sdk/google-vertex@4.0.56

### Patch Changes

- 5a307f5: feat(provider/google-vertex): allow using Gemini image models
with `generateImage`
-   Updated dependencies [5a307f5]
    -   @ai-sdk/google@3.0.28

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…erate tests (#12503)

## Background

The amazon-bedrock package was partially migrated to the fixture-based test pattern: `doStream` and some `doGenerate` tests already used `prepareJsonFixtureResponse` / `prepareChunksFixtureResponse`, but the `doGenerate` section still had an inline `prepareJsonResponse` helper used by ~29 tests.

## Summary

- Remove 2 duplicate tests (`should extract text response` and `should extract usage`) that were identical to existing fixture-based tests in the `describe('text')` block
- Replace `prepareJsonResponse({})` calls with `prepareJsonFixtureResponse('bedrock-text')` for tests that only check the request body/headers
- Replace `prepareJsonResponse({...specific fields...})` calls with direct `server.urls[generateUrl].response` assignment for tests that need specific response content
- Delete the `prepareJsonResponse` helper function and the unused `BedrockReasoningContentBlock` / `BedrockRedactedReasoningContentBlock` imports

No new fixtures were needed; all existing fixtures were already recorded from real API calls.

## Verification

- `grep -c prepareJsonResponse` returns 0
- Test count: 103 → 101 (2 duplicates removed)
- 290/290 tests pass
…ity (#12507)

## Background

The contributing docs were outdated, referring to the v2 provider spec.

## Summary

Updated to the v3 spec requirements. Also added a note to prioritize
clarity for developers and agents over brevity.
- Replace unbounded `arrayBuffer()`/`blob()` calls in `download()` and
`downloadBlob()` with streaming reads that enforce a **2 GiB default
size limit**
- Add `abortSignal` passthrough from callers (`transcribe`,
`generateVideo`) to `fetch()`
- Check `Content-Length` header for early rejection before reading body
- Track bytes incrementally via `ReadableStream.getReader()`, abort with
`DownloadError` when limit exceeded
- Expose configurable `download` parameter on `transcribe()` and
`experimental_generateVideo()` (instead of adding a new
`maxDownloadSize` argument) — keeps download config separate from API
function signatures
- Export `createDownload({ maxBytes })` factory from `ai` for custom
size limits
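
The core of the streaming limit can be sketched like this (an `AsyncIterable` stands in for `response.body.getReader()`, and a plain `Error` stands in for the SDK's `DownloadError`):

```typescript
// Read a byte stream incrementally, aborting once maxBytes is exceeded
// instead of buffering the whole body first.
async function readWithLimit(
  chunks: AsyncIterable<Uint8Array>,
  maxBytes: number,
): Promise<Uint8Array> {
  const received: Uint8Array[] = [];
  let total = 0;
  for await (const chunk of chunks) {
    total += chunk.length;
    if (total > maxBytes) {
      // Fail fast: never hold more than maxBytes in memory.
      throw new Error(`download exceeded ${maxBytes} bytes`);
    }
    received.push(chunk);
  }
  // Concatenate only after every chunk passed the size check.
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of received) {
    out.set(chunk, offset);
    offset += chunk.length;
  }
  return out;
}
```

An early `Content-Length` check can still reject oversized bodies before any bytes are read; the incremental count covers servers that omit or understate the header.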

Closes #9481 / addresses
#9481 (comment)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## ai@6.0.84

### Patch Changes

- 4024a3a: security: prevent unbounded memory growth in download
functions

The `download()` and `downloadBlob()` functions now enforce a default 2
GiB size limit when downloading from user-provided URLs. Downloads that
exceed this limit are aborted with a `DownloadError` instead of
consuming unbounded memory and crashing the process. The `abortSignal`
parameter is now passed through to `fetch()` in all download call sites.

Added `download` option to `transcribe()` and
`experimental_generateVideo()` for providing a custom download function.
Use the new `createDownload({ maxBytes })` factory to configure download
size limits.

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/gateway@3.0.44

## @ai-sdk/alibaba@1.0.4

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/amazon-bedrock@4.0.59

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/anthropic@3.0.43

## @ai-sdk/angular@2.0.85

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/anthropic@3.0.43

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/assemblyai@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/azure@3.0.29

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai@3.0.28

## @ai-sdk/baseten@1.0.33

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/black-forest-labs@1.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/cerebras@2.0.33

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/cohere@3.0.21

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/deepgram@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/deepinfra@2.0.34

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/deepseek@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/elevenlabs@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/fal@2.0.21

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/fireworks@2.0.34

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/gateway@3.0.44

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/gladia@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/google@3.0.29

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/google-vertex@4.0.57

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/anthropic@3.0.43
    -   @ai-sdk/google@3.0.29

## @ai-sdk/groq@3.0.24

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/huggingface@1.0.32

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/hume@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/klingai@3.0.3

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/langchain@2.0.90

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84

## @ai-sdk/llamaindex@2.0.84

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84

## @ai-sdk/lmnt@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/luma@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/mcp@1.0.21

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/mistral@3.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/moonshotai@2.0.5

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/open-responses@1.0.2

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/openai@3.0.28

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/openai-compatible@2.0.30

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/perplexity@3.0.19

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/prodia@1.0.16

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/provider-utils@4.0.15

### Patch Changes

- 4024a3a: security: prevent unbounded memory growth in download
functions

The `download()` and `downloadBlob()` functions now enforce a default 2
GiB size limit when downloading from user-provided URLs. Downloads that
exceed this limit are aborted with a `DownloadError` instead of
consuming unbounded memory and crashing the process. The `abortSignal`
parameter is now passed through to `fetch()` in all download call sites.

Added `download` option to `transcribe()` and
`experimental_generateVideo()` for providing a custom download function.
Use the new `createDownload({ maxBytes })` factory to configure download
size limits.
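The enforcement can be illustrated with a simplified sketch. The function name `readWithLimit` and the plain `Error` are illustrative only, not the SDK's internal API (the real code throws `DownloadError` and is wired into `download()` / `downloadBlob()`):

```typescript
// Simplified sketch of the size-limit enforcement described above.
// `readWithLimit` is an illustrative name, not the SDK's actual internals.
const DEFAULT_MAX_BYTES = 2 * 1024 * 1024 * 1024; // 2 GiB default limit

async function readWithLimit(
  stream: ReadableStream<Uint8Array>,
  maxBytes: number = DEFAULT_MAX_BYTES,
): Promise<Uint8Array> {
  const reader = stream.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    total += value.byteLength;
    if (total > maxBytes) {
      // Abort instead of buffering unbounded data in memory.
      await reader.cancel();
      throw new Error(`Download exceeded ${maxBytes} bytes`);
    }
    chunks.push(value);
  }

  // Concatenate the collected chunks into a single buffer.
  const result = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    result.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return result;
}
```

The key property is that the check runs per chunk while streaming, so memory use is bounded even when the server never sends a `Content-Length` header.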

## @ai-sdk/react@3.0.86

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/replicate@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/revai@2.0.20

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/rsc@2.0.84

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/svelte@4.0.84

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/togetherai@2.0.33

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/valibot@2.0.16

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/vercel@2.0.32

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

## @ai-sdk/vue@3.0.84

### Patch Changes

-   Updated dependencies [4024a3a]
    -   ai@6.0.84
    -   @ai-sdk/provider-utils@4.0.15

## @ai-sdk/xai@3.0.55

### Patch Changes

-   Updated dependencies [4024a3a]
    -   @ai-sdk/provider-utils@4.0.15
    -   @ai-sdk/openai-compatible@2.0.30

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
… tests (#12511)

## Background

Anthropic tests were partially migrated to the fixture-based pattern:
`doStream` and some `doGenerate` tests already used
`prepareJsonFixtureResponse` / `prepareChunksFixtureResponse`, but 3
`prepareJsonResponse` helper definitions (55 total calls) still remained.

## Summary

- record `anthropic-text.json` and `anthropic-text.chunks.txt` fixtures
from real API (`claude-sonnet-4-5-20250929`)
- replace 23 `prepareJsonResponse({})` calls (tests that only check
request body/headers) with
`prepareJsonFixtureResponse('anthropic-text')`
- inline `server.urls` assignments for 10 main doGenerate tests that
need specific response content (reasoning, usage, citations, tool calls,
etc.)
- inline `server.urls` assignments for 8 web search tests and delete the
scoped helper
- inline `server.urls` assignments for 4 code execution tests and delete
the scoped helper
- remove the main `prepareJsonResponse` function definition and unused
`Citation`/`JSONObject` imports
vercel-ai-sdk bot and others added 30 commits March 5, 2026 04:28
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.


# Releases
## @ai-sdk/google@3.0.43

### Patch Changes

- 7ba09a4: Fix gateway failover losing thoughtSignature when failing
over from Vertex to Google AI Studio. The Google provider now falls back
to checking the vertex namespace for thoughtSignature, matching the
existing Vertex-to-Google fallback behavior.
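The fallback can be sketched as a small lookup. The `providerOptions` shape below is inferred from the changelog text and simplified, not copied from the provider source:

```typescript
// Simplified sketch of the namespace fallback described above.
// The providerOptions shape is inferred from the changelog text.
type ProviderOptions = Record<string, Record<string, unknown> | undefined>;

function getThoughtSignature(
  providerOptions: ProviderOptions | undefined,
): string | undefined {
  // Prefer the google namespace, then fall back to vertex, so the
  // signature survives a Vertex -> Google AI Studio failover.
  return (
    (providerOptions?.google?.thoughtSignature as string | undefined) ??
    (providerOptions?.vertex?.thoughtSignature as string | undefined)
  );
}
```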

## @ai-sdk/google-vertex@4.0.80

### Patch Changes

-   Updated dependencies [7ba09a4]
    -   @ai-sdk/google@3.0.43

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Summary

- [x] Create `release-v6.0` maintenance branch from current `main` HEAD
for stable v6 patches
- Enter changeset pre-release mode (`beta`) so merges to `main` publish
as `ai@7.0.0-beta.X`
- Add major changeset for all 53 published packages
- Create `LanguageModelV4` spec skeleton in `@ai-sdk/provider` (copy of
V3 with renamed types)
- Create `LanguageModelV4Middleware` and `ProviderV4` types
- Add `MockLanguageModelV4` and `MockProviderV4` test utilities in
`ai/test`

## Context

Tracking issue: #12999

Once merged, the first beta release (e.g. `ai@7.0.0-beta.1`) will
publish automatically when the Version Packages PR is merged.

The V4 spec skeleton is an exact copy of V3 with type names updated to
V4 and `specificationVersion: 'v4'`. The shared types
(`SharedV3ProviderOptions`, `SharedV3ProviderMetadata`, etc.) are
intentionally kept as-is since they are version-agnostic and shared
across spec versions.

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/gateway@4.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/alibaba@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/amazon-bedrock@5.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/anthropic@4.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/angular@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/anthropic@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/assemblyai@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/azure@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai@4.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/baseten@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/black-forest-labs@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/bytedance@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/cerebras@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/codemod@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

## @ai-sdk/cohere@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/deepgram@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/deepinfra@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/deepseek@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/devtools@1.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0

## @ai-sdk/elevenlabs@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/fal@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/fireworks@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/gateway@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/gladia@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/google@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/google-vertex@5.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/anthropic@4.0.0-beta.0
    -   @ai-sdk/google@4.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/groq@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/huggingface@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/hume@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/klingai@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/langchain@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0

## @ai-sdk/llamaindex@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0

## @ai-sdk/lmnt@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/luma@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/mcp@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/mistral@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/moonshotai@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/open-responses@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/openai@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/openai-compatible@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/perplexity@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/prodia@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/provider@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

## @ai-sdk/provider-utils@5.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0

## @ai-sdk/react@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/replicate@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/revai@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/rsc@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/svelte@5.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/test-server@2.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

## @ai-sdk/togetherai@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/valibot@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/vercel@3.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/vue@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   ai@7.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

## @ai-sdk/xai@4.0.0-beta.0

### Major Changes

-   8359612: Start v7 pre-release

### Patch Changes

-   Updated dependencies [8359612]
    -   @ai-sdk/openai-compatible@3.0.0-beta.0
    -   @ai-sdk/provider@4.0.0-beta.0
    -   @ai-sdk/provider-utils@5.0.0-beta.0

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…rict:true (#12967)

## Summary

Fixes #12767

When `strict: true` is set on one or more function tools, the Google and
Google Vertex AI providers now set `functionCallingConfig.mode` to
`'VALIDATED'` instead of the default `AUTO`/`ANY`. `VALIDATED` mode
enables strict schema-conformant tool argument generation on the Gemini
side.

Ref:
https://ai.google.dev/gemini-api/docs/function-calling#function_calling_modes

## Behaviour

| `toolChoice` | Any `strict: true` tool | `mode` used |
|---|---|---|
| `auto` | ✅ | `VALIDATED` |
| `required` | ✅ | `VALIDATED` |
| `tool` | ✅ | `VALIDATED` (+ `allowedFunctionNames`) |
| `none` | any | `NONE` (strict irrelevant) |
| (none) | ✅ | `VALIDATED` |
| any | ❌ | existing AUTO/ANY/NONE behaviour |

The `@ai-sdk/google-vertex` provider inherits this fix automatically
because it reuses `GoogleGenerativeAILanguageModel` from
`@ai-sdk/google`.
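The table above can be expressed as a small pure function. This is a sketch of the decision logic only, not the provider's actual implementation:

```typescript
// Sketch of the mode-selection logic from the table above;
// not the provider's actual code.
type ToolChoice = 'auto' | 'required' | 'tool' | 'none' | undefined;
type FunctionCallingMode = 'AUTO' | 'ANY' | 'NONE' | 'VALIDATED';

function selectMode(
  toolChoice: ToolChoice,
  hasStrictTool: boolean,
): FunctionCallingMode {
  if (toolChoice === 'none') return 'NONE'; // strict is irrelevant here
  if (hasStrictTool) return 'VALIDATED'; // strict schema-conformant calls
  switch (toolChoice) {
    case 'required':
    case 'tool': // 'tool' additionally sets allowedFunctionNames
      return 'ANY';
    default:
      return 'AUTO'; // 'auto' or no toolChoice
  }
}
```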

## Testing

Added 4 tests covering the new `VALIDATED` mode behaviour.

---------

Co-authored-by: Dmitrii Troitskii <jsleitor@gmail.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/google@4.0.0-beta.1

### Patch Changes

- 19b95f9: fix(google): use VALIDATED function calling mode when any
tool has strict:true

## @ai-sdk/google-vertex@5.0.0-beta.1

### Patch Changes

-   Updated dependencies [19b95f9]
    -   @ai-sdk/google@4.0.0-beta.1

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…e` and `addToolOutput` (#11048)

## Background

<!-- Why was this change necessary? -->

Users should be able to customize the API request, similar to `sendMessage`
and `regenerate`.

Fixes: #11423

## Summary

<!-- What did you change? -->

Add an optional `ChatRequestOptions` parameter to `addToolApprovalResponse`.
Add-on work to #8541.

## Checklist

<!--
Do not edit this list. Leave items unchecked that don't apply. If you
need to track subtasks, create a new "## Tasks" section

Please check if the PR fulfills the following requirements:
-->

- ~Tests have been added / updated (for bug fixes / features)~ - n/a
- ~Documentation has been added / updated (for bug fixes / features)~ -
n/a
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

n/a

---------

Co-authored-by: Felix Arntz <felix.arntz@vercel.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.1

### Patch Changes

- 6a3793e: chore(ai): add optional ChatRequestOptions to
`addToolApprovalResponse` and `addToolOutput`

## @ai-sdk/angular@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/langchain@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/llamaindex@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/react@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/rsc@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/svelte@5.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

## @ai-sdk/vue@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [6a3793e]
    -   ai@7.0.0-beta.1

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

Adds support for OpenAI's new GPT-5.4 model family (`gpt-5.4`,
`gpt-5.4-pro`, etc.)

<img width="1177" height="674" alt="image"
src="https://github.com/user-attachments/assets/97d9c504-f4de-4a91-a7dc-11b8f6f6017f"
/>


## Summary

- [x] Add GPT-5.4 model IDs to `GatewayModelId` in `@ai-sdk/gateway`
- [x] Add GPT-5.4 model IDs to `OpenAIChatModelId` and
`OpenAIResponsesModelId` in `@ai-sdk/openai`
- [x] Register GPT-5.4 models in the `openaiResponsesReasoningModelIds`
array
- [x] Update unit tests to verify GPT-5.4 language model capabilities

## Manual Verification

- Ran `pnpm test` and verified that all tests pass 🚀 

## Checklist

<!--
Do not edit this list. Leave items unchecked that don't apply. If you
need to track subtasks, create a new "## Tasks" section

Please check if the PR fulfills the following requirements:
-->

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

<!--
Feel free to mention things not covered by this PR that can be done in
future PRs.
Remove the section if it's not needed.
 -->

## Related Issues

Fixes #13114

---------

Co-authored-by: Felix Arntz <flixos90@gmail.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.2

### Patch Changes

-   Updated dependencies [7afaece]
-   Updated dependencies [f16c103]
    -   @ai-sdk/gateway@4.0.0-beta.1

## @ai-sdk/angular@3.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/azure@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [7afaece]
    -   @ai-sdk/openai@4.0.0-beta.1

## @ai-sdk/gateway@4.0.0-beta.1

### Patch Changes

-   7afaece: feat(provider/openai): add GPT-5.4 model support
- f16c103: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@3.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/llamaindex@3.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/openai@4.0.0-beta.1

### Patch Changes

-   7afaece: feat(provider/openai): add GPT-5.4 model support

## @ai-sdk/react@4.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/rsc@3.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/svelte@5.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

## @ai-sdk/vue@4.0.0-beta.2

### Patch Changes

-   ai@7.0.0-beta.2

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/google@4.0.0-beta.2

### Patch Changes

- c9c4661: fix(provider/google): preserve groundingMetadata and
urlContextMetadata when they arrive in a stream chunk before the
finishReason chunk

## @ai-sdk/google-vertex@5.0.0-beta.2

### Patch Changes

-   Updated dependencies [c9c4661]
    -   @ai-sdk/google@4.0.0-beta.2

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…vent SSRF bypass (#13111)

## Background

The existing `validateDownloadUrl` (added in #13085) only validates the
initial URL before `fetch()`. Since `fetch()` follows HTTP redirects by
default, an attacker can bypass SSRF protections by providing a
safe-looking public URL that 302-redirects to internal endpoints (e.g.,
`http://169.254.169.254/latest/meta-data/`), enabling in-band response
body exfiltration through the AI model's response.

## Summary

Added `response.redirected` check with
`validateDownloadUrl(response.url)` in both `downloadBlob`
(`@ai-sdk/provider-utils`) and `download` (`ai`) functions to validate
the final URL after following redirects, before reading the response
body.
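The check can be sketched as follows. Here `validateDownloadUrl` is a stand-in with a deliberately reduced blocklist; the real validator covers more address ranges:

```typescript
// Simplified sketch of the redirect re-validation described above.
// This validateDownloadUrl is a stand-in; the SDK's validator is stricter.
function validateDownloadUrl(url: string): void {
  const { hostname, protocol } = new URL(url);
  if (protocol !== 'http:' && protocol !== 'https:') {
    throw new Error(`unsupported protocol: ${protocol}`);
  }
  // Block obvious internal targets (loopback, link-local metadata service).
  if (
    hostname === 'localhost' ||
    hostname.startsWith('127.') ||
    hostname.startsWith('169.254.')
  ) {
    throw new Error(`blocked internal address: ${hostname}`);
  }
}

async function safeDownload(url: string): Promise<ArrayBuffer> {
  validateDownloadUrl(url); // validate the initial URL
  const response = await fetch(url);
  if (response.redirected) {
    // Re-validate the final URL before reading the body, so an
    // open redirect cannot reach internal endpoints.
    validateDownloadUrl(response.url);
  }
  return response.arrayBuffer();
}
```

The essential point is the second validation after `fetch()` resolves: validating only the initial URL is insufficient because `fetch()` follows redirects transparently.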

## Manual Verification

- `pnpm vitest run packages/provider-utils/src/download-blob.test.ts` —
16 tests pass
- `pnpm vitest run
packages/provider-utils/src/validate-download-url.test.ts` — 29 tests
pass
- `pnpm vitest run packages/ai/src/util/download/download.test.ts -t
"SSRF"` — 5 tests pass
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.3

### Patch Changes

- 531251e: fix(security): validate redirect targets in download
functions to prevent SSRF bypass

Both `downloadBlob` and `download` now validate the final URL after
following HTTP redirects, preventing attackers from bypassing SSRF
protections via open redirects to internal/private addresses.

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/gateway@4.0.0-beta.2

## @ai-sdk/alibaba@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/amazon-bedrock@5.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/anthropic@4.0.0-beta.1

## @ai-sdk/angular@3.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   ai@7.0.0-beta.3

## @ai-sdk/anthropic@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/assemblyai@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/azure@4.0.0-beta.2

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai@4.0.0-beta.2

## @ai-sdk/baseten@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/black-forest-labs@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/bytedance@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/cerebras@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/cohere@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/deepgram@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/deepinfra@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/deepseek@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/elevenlabs@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/fal@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/fireworks@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/gateway@4.0.0-beta.2

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/gladia@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/google@4.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/google-vertex@5.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/anthropic@4.0.0-beta.1
    -   @ai-sdk/google@4.0.0-beta.3

## @ai-sdk/groq@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/huggingface@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/hume@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/klingai@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/langchain@3.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   ai@7.0.0-beta.3

## @ai-sdk/llamaindex@3.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   ai@7.0.0-beta.3

## @ai-sdk/lmnt@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/luma@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/mcp@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/mistral@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/moonshotai@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/open-responses@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/openai@4.0.0-beta.2

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/openai-compatible@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/perplexity@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/prodia@2.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/provider-utils@5.0.0-beta.1

### Patch Changes

- 531251e: fix(security): validate redirect targets in download
functions to prevent SSRF bypass

Both `downloadBlob` and `download` now validate the final URL after
following HTTP redirects, preventing attackers from bypassing SSRF
protections via open redirects to internal/private addresses.
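
A minimal sketch of this validation, assuming a fetch-based download; `isAllowedUrl` here is a simplified, illustrative check standing in for a real SSRF policy, not the SDK's actual implementation:

```typescript
// Illustrative only: a naive private-address blocklist standing in for a
// real SSRF policy.
function isAllowedUrl(raw: string): boolean {
  const url = new URL(raw);
  if (url.protocol !== 'https:' && url.protocol !== 'http:') return false;
  const host = url.hostname;
  return !(
    host === 'localhost' ||
    host.startsWith('127.') ||
    host.startsWith('10.') ||
    host.startsWith('192.168.') ||
    host === '169.254.169.254'
  );
}

async function downloadValidated(url: string): Promise<ArrayBuffer> {
  if (!isAllowedUrl(url)) throw new Error('blocked url');
  const response = await fetch(url, { redirect: 'follow' });
  // The fix: re-validate the FINAL url after redirects were followed, so an
  // open redirect cannot hop to an internal/private address.
  if (!isAllowedUrl(response.url)) throw new Error('blocked redirect target');
  return response.arrayBuffer();
}
```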

## @ai-sdk/react@4.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   ai@7.0.0-beta.3

## @ai-sdk/replicate@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/revai@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/rsc@3.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   ai@7.0.0-beta.3

## @ai-sdk/svelte@5.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   ai@7.0.0-beta.3

## @ai-sdk/togetherai@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/valibot@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1

## @ai-sdk/vercel@3.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

## @ai-sdk/vue@4.0.0-beta.3

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   ai@7.0.0-beta.3

## @ai-sdk/xai@4.0.0-beta.1

### Patch Changes

-   Updated dependencies [531251e]
    -   @ai-sdk/provider-utils@5.0.0-beta.1
    -   @ai-sdk/openai-compatible@3.0.0-beta.1

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

When a user applies `experimental_transform`, the doStream telemetry span
`ai.streamText.doStream` recorded untransformed values for
`ai.response.text` and `ai.response.providerMetadata`.

This change also gives the new telemetry approach feature parity with how
the new flow sets these spans. See
[comment](https://github.com/vercel/ai/pull/13051/changes/#r2885652481).

## Summary

Split the doStream span handling so that `ai.response.text`, reasoning, and
`ai.response.providerMetadata` are set by reading from the fully processed
`recordedSteps`.

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.4

### Patch Changes

-   5ceed7d: fix(ai): doStream should reflect transformed values

## @ai-sdk/angular@3.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/langchain@3.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/llamaindex@3.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/react@4.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/rsc@3.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/svelte@5.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

## @ai-sdk/vue@4.0.0-beta.4

### Patch Changes

-   Updated dependencies [5ceed7d]
    -   ai@7.0.0-beta.4

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

- Our telemetry span creation didn't account for certain usage metrics.
- Telemetry was captured inconsistently between `streamText` and
`generateText`.
- The top-level span in `generateText` didn't calculate total usage (it
only accounted for the step usage).

## Summary

- Added missing attributes such as:
  - `ai.usage.inputTokenDetails.noCacheTokens`
  - `ai.usage.inputTokenDetails.cacheReadTokens`
  - `ai.usage.inputTokenDetails.cacheWriteTokens`
  - `ai.usage.outputTokenDetails.textTokens`

- Removed the `completionTokens` and `promptTokens` attributes (a TODO
called for renaming them).
- Made the telemetry pattern consistent between `streamText` and
`generateText`.
- Fixed the top-level `generateText` span so it records total usage instead
of only the step usage.
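
The attribute mapping can be sketched as follows; the attribute names come from the list above, while the shape of the usage object is an assumption for illustration:

```typescript
type UsageDetails = {
  inputTokenDetails?: {
    noCacheTokens?: number;
    cacheReadTokens?: number;
    cacheWriteTokens?: number;
  };
  outputTokenDetails?: { textTokens?: number };
};

// Map optional token details onto span attributes, skipping absent values
// so no undefined attributes are recorded on the span.
function usageSpanAttributes(usage: UsageDetails): Record<string, number> {
  const attributes: Record<string, number> = {};
  const input = usage.inputTokenDetails ?? {};
  const output = usage.outputTokenDetails ?? {};
  if (input.noCacheTokens != null)
    attributes['ai.usage.inputTokenDetails.noCacheTokens'] = input.noCacheTokens;
  if (input.cacheReadTokens != null)
    attributes['ai.usage.inputTokenDetails.cacheReadTokens'] = input.cacheReadTokens;
  if (input.cacheWriteTokens != null)
    attributes['ai.usage.inputTokenDetails.cacheWriteTokens'] = input.cacheWriteTokens;
  if (output.textTokens != null)
    attributes['ai.usage.outputTokenDetails.textTokens'] = output.textTokens;
  return attributes;
}
```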

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
…le partial JSON (#13137)

## Background

In 5 OpenAI-compatible providers (`openai`, `openai-compatible`, `groq`,
`deepseek`, `alibaba`), streaming tool call arguments were finalized
using `isParsableJson()` as a heuristic for completion. If partial
accumulated JSON happened to be valid JSON before all chunks arrived,
the tool call would be executed with incomplete arguments, potentially
dropping safety-relevant parameters.

For example, a model streaming `{"query": "test", "limit": 10}` might
produce `{"query": "test"}` as an intermediate value — which is valid
JSON. The tool call would be finalized early with only `{"query":
"test"}`, and the `"limit": 10` parameter would be silently lost.

Providers not affected: `anthropic` (uses explicit `content_block_stop`
events), OpenAI Responses API (different protocol).

## Summary

- Removed all inline `isParsableJson()` checks from the streaming tool
call delta handlers across 5 providers
- Moved tool call finalization to the `flush()` handler, which only runs
after the stream is fully consumed, ensuring tool calls always receive
complete arguments
- Added a regression test demonstrating the vulnerability (first commit
fails against unfixed code, second commit fixes it)
- Updated existing test snapshots to reflect the new ordering (tool
calls finalize in `flush()` after text-end, rather than inline during
streaming)
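
The accumulate-then-finalize pattern can be sketched as follows; this is a minimal illustration with hypothetical names, not the SDK's internal API:

```typescript
type ToolCallDelta = { toolCallId: string; argsDelta: string };
type ToolCall = { toolCallId: string; input: string };

function createToolCallFinalizer() {
  const buffers = new Map<string, string>();

  return {
    // Called for every streamed chunk: only accumulate. No attempt is made
    // to parse the partial JSON, so valid-looking prefixes cannot trigger
    // early finalization.
    onDelta({ toolCallId, argsDelta }: ToolCallDelta) {
      buffers.set(toolCallId, (buffers.get(toolCallId) ?? '') + argsDelta);
    },
    // Called once the stream is fully consumed (the flush() step): only
    // here do buffered arguments become tool calls, so arguments are
    // guaranteed to be complete.
    flush(): ToolCall[] {
      const calls = [...buffers].map(([toolCallId, input]) => ({
        toolCallId,
        input,
      }));
      buffers.clear();
      return calls;
    },
  };
}
```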

## Manual Verification

The first commit (`573a744`) adds the regression test only — it fails
against the unfixed code, proving the vulnerability exists:
```
- Expected: input: '{"query": "test"}, "limit": 10}'
+ Received: input: '{"query": "test"}'
```

The second commit (`7ae73dd`) applies the fix, making all tests pass.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

VULN-6861

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…tents via symlinks (#13143)

## Background

The `verify-changesets` GitHub Action reads PR-changed `.changeset/*.md`
files via `readFile('../../../../' + path)`. If a changeset file is a
symlink, Node follows it and reads the target. On failure, the catch
block writes the full file contents (`error.content`) into
`$GITHUB_STEP_SUMMARY`, exposing arbitrary runner file contents to
anyone who can view the PR's workflow run.

An attacker can exploit this by opening a PR that adds a symlink under
`.changeset/` pointing to a sensitive file (e.g. `/etc/passwd`,
`/proc/self/environ`).

CWE-59 (Improper Link Resolution Before File Access) / CWE-200 (Exposure
of Sensitive Information)

## Summary

Two complementary fixes:

1. **Reject symlinks** — Use `lstat` to check if a changeset file is a
symbolic link before reading it. Symlinked files are rejected with an
error that does not include file contents.

2. **Stop leaking raw content (defense in depth)** — Error objects no
longer attach the full file content. Instead, only the parsed YAML
frontmatter (which is safe to display) is attached. The error handler
writes `error.frontmatter` instead of `error.content` to the step
summary. Even if an attacker bypasses the symlink check, the full file
content is never written to the step summary.
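
Both fixes can be sketched roughly as follows, assuming Node's `fs/promises` API; the function names and frontmatter handling are illustrative:

```typescript
import { lstat, readFile } from 'node:fs/promises';

// Fix 1: reject symlinks before reading. lstat does not follow links, so a
// symlinked .changeset/*.md is detected and refused without ever reading
// the target file.
async function readChangeset(path: string): Promise<string> {
  const stats = await lstat(path);
  if (stats.isSymbolicLink()) {
    throw new Error(`changeset file must not be a symlink: ${path}`);
  }
  return readFile(path, 'utf8');
}

// Fix 2 (defense in depth): on validation failure, attach only the parsed
// frontmatter to the error, never the raw file contents.
function extractFrontmatter(content: string): string {
  const match = content.match(/^---\n([\s\S]*?)\n---/);
  return match ? match[1] : '';
}
```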

## Manual Verification

- Ran all 12 tests (10 existing + 2 new) against the unfixed code: 4
failures (symlink not rejected, raw content leaked)
- Ran all 12 tests against the fixed code: all pass

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
…arch with multimodal output (#13171)

## Summary

This helps test and verify that thought signatures now work correctly
end-to-end for images as well as for web search and image search sources,
including in multi-turn content.
## Manual Verification

Did extensive testing with these two new examples; everything works as
expected.

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

Follow up to #12926

# Releases
## ai@7.0.0-beta.5

### Patch Changes

-   ebd4da2: feat(ai): add missing usage attributes

## @ai-sdk/alibaba@2.0.0-beta.2

### Patch Changes

- 45b3d76: fix(security): prevent streaming tool calls from finalizing
on parsable partial JSON

Streaming tool call arguments were finalized using `isParsableJson()` as
a heuristic for completion. If partial accumulated JSON happened to be
valid JSON before all chunks arrived, the tool call would be executed
with incomplete arguments. Tool call finalization now only occurs in
`flush()` after the stream is fully consumed.

- f7295cb: revert incorrect fix
<#13172>

-   Updated dependencies [45b3d76]

-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/angular@3.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/azure@4.0.0-beta.3

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai@4.0.0-beta.3

## @ai-sdk/baseten@2.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/cerebras@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/deepinfra@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/deepseek@3.0.0-beta.2

### Patch Changes

- 45b3d76: fix(security): prevent streaming tool calls from finalizing
on parsable partial JSON

Streaming tool call arguments were finalized using `isParsableJson()` as
a heuristic for completion. If partial accumulated JSON happened to be
valid JSON before all chunks arrived, the tool call would be executed
with incomplete arguments. Tool call finalization now only occurs in
`flush()` after the stream is fully consumed.

- f7295cb: revert incorrect fix
<#13172>

## @ai-sdk/fireworks@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/groq@4.0.0-beta.2

### Patch Changes

- 45b3d76: fix(security): prevent streaming tool calls from finalizing
on parsable partial JSON

Streaming tool call arguments were finalized using `isParsableJson()` as
a heuristic for completion. If partial accumulated JSON happened to be
valid JSON before all chunks arrived, the tool call would be executed
with incomplete arguments. Tool call finalization now only occurs in
`flush()` after the stream is fully consumed.

- f7295cb: revert incorrect fix
<#13172>

## @ai-sdk/huggingface@2.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/langchain@3.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/llamaindex@3.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/moonshotai@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/openai@4.0.0-beta.3

### Patch Changes

- 45b3d76: fix(security): prevent streaming tool calls from finalizing
on parsable partial JSON

Streaming tool call arguments were finalized using `isParsableJson()` as
a heuristic for completion. If partial accumulated JSON happened to be
valid JSON before all chunks arrived, the tool call would be executed
with incomplete arguments. Tool call finalization now only occurs in
`flush()` after the stream is fully consumed.

- f7295cb: revert incorrect fix
<#13172>

## @ai-sdk/openai-compatible@3.0.0-beta.2

### Patch Changes

- 45b3d76: fix(security): prevent streaming tool calls from finalizing
on parsable partial JSON

Streaming tool call arguments were finalized using `isParsableJson()` as
a heuristic for completion. If partial accumulated JSON happened to be
valid JSON before all chunks arrived, the tool call would be executed
with incomplete arguments. Tool call finalization now only occurs in
`flush()` after the stream is fully consumed.

- f7295cb: revert incorrect fix
<#13172>

## @ai-sdk/react@4.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/rsc@3.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/svelte@5.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/togetherai@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/vercel@3.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

## @ai-sdk/vue@4.0.0-beta.5

### Patch Changes

-   Updated dependencies [ebd4da2]
    -   ai@7.0.0-beta.5

## @ai-sdk/xai@4.0.0-beta.2

### Patch Changes

-   Updated dependencies [45b3d76]
-   Updated dependencies [f7295cb]
    -   @ai-sdk/openai-compatible@3.0.0-beta.2

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>

# Releases
## ai@7.0.0-beta.6

### Patch Changes

-   Updated dependencies [c949e25]
    -   @ai-sdk/gateway@4.0.0-beta.3

## @ai-sdk/angular@3.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/gateway@4.0.0-beta.3

### Patch Changes

- c949e25: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@3.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/llamaindex@3.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/react@4.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/rsc@3.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/svelte@5.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

## @ai-sdk/vue@4.0.0-beta.6

### Patch Changes

-   ai@7.0.0-beta.6

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

Faced this issue while implementing #13157: for multi-turn conversations,
the item IDs weren't being mapped properly within the AI SDK, causing a
mismatch between what we were sending to the OpenAI API and what it
expected.

We were attaching the same tool call ID from the `tool-call` object to the
`tool-result`, which caused duplicate-ID errors and incorrect IDs being
sent for certain parts.

## Summary

- Since the `item_id` for `tool_search_output` is propagated via
`providerMetadata`, we ensure `providerMetadata` is preserved through the
UI stream.
- A similar change was needed for `generateText` as well.
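
A hypothetical illustration of the change: when a tool result is mapped into a UI stream part, its `providerMetadata` (which can carry the server-side `item_id`) must be forwarded rather than dropped. The part shapes are assumptions, not the SDK's actual types:

```typescript
type ToolResult = {
  toolCallId: string;
  output: unknown;
  providerMetadata?: Record<string, unknown>;
};

function toUiToolResultPart(result: ToolResult) {
  return {
    type: 'tool-result' as const,
    toolCallId: result.toolCallId,
    output: result.output,
    // The fix: preserve providerMetadata through the UI stream instead of
    // omitting it, so provider-assigned item IDs survive multi-turn calls.
    providerMetadata: result.providerMetadata,
  };
}
```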

## Manual Verification

will be done in #13157 

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

# Releases
## ai@7.0.0-beta.7

### Patch Changes

-   210ed3d: feat(ai): pass result provider metadata across the stream

## @ai-sdk/angular@3.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/langchain@3.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/llamaindex@3.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/react@4.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/rsc@3.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/svelte@5.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

## @ai-sdk/vue@4.0.0-beta.7

### Patch Changes

-   Updated dependencies [210ed3d]
    -   ai@7.0.0-beta.7

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…when store: false (#13234)

## Background

When the OpenAI storage flag is disabled, the OpenAI Responses API signs
reasoning parts once they are complete. The signature arrives on the last
part of a reasoning sequence, because it requires the content from all
reasoning parts in that sequence.

When a request is aborted, this reasoning signature (encrypted content) is
not available, because it cannot be computed yet (the overall reasoning is
incomplete). However, the earlier reasoning parts of the sequence have
already been published, and including them breaks future OpenAI Responses
API requests, since they are reasoning parts without a valid signature.

## Summary

Drop reasoning parts without encrypted reasoning content when using
`store: false`.
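
The filtering rule can be sketched as follows; the part shapes and helper name are illustrative, not the provider's actual types:

```typescript
type Part =
  | { type: 'reasoning'; encryptedContent?: string }
  | { type: 'text'; text: string };

// With store: false, reasoning parts are only usable if they carry the
// encrypted (signed) content; unsigned parts left over from an aborted
// sequence are dropped so follow-up requests do not fail.
function dropUnsignedReasoning(parts: Part[], store: boolean): Part[] {
  if (store) return parts; // stored responses are referenced server-side
  return parts.filter(
    part => part.type !== 'reasoning' || part.encryptedContent != null,
  );
}
```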

## Related Issues

Fixes #8811
Fixes #10290

# Releases
## @ai-sdk/azure@4.0.0-beta.4

### Patch Changes

-   Updated dependencies [a71d345]
    -   @ai-sdk/openai@4.0.0-beta.4

## @ai-sdk/openai@4.0.0-beta.4

### Patch Changes

- a71d345: fix(provider/openai): drop reasoning parts without encrypted
content when store: false

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>