Environment details
- Programming language: Python
- OS: Windows 11 (also reproducible inside devcontainer)
- Container: python:3.11.8-slim
- Language runtime version: Python 3.11.8
- Package version: google-genai==1.64.0
pip show google-genai output:
Name: google-genai
Version: 1.64.0
Location: /usr/local/lib/python3.11/site-packages
Description
We are attempting to use Veo 3.1 via Vertex AI (vertexai=True) in the google-genai SDK.
Calling the Veo 3.1 Fast GA model using client.aio.models.generate_videos() with Vertex AI enabled returns:
400 INVALID_ARGUMENT
reason: RESOURCE_PROJECT_INVALID
method: google.cloud.aiplatform.v1beta1.PredictionService.PredictLongRunning
However, calling the same model via direct REST to the documented Vertex AI v1 endpoint succeeds.
This suggests that when vertexai=True is set, the SDK is routing Veo GA calls to the v1beta1 PredictionService endpoint instead of the required v1 endpoint.
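One quick way to see which API surface the SDK is actually hitting is to raise the log level of the underlying HTTP client. This is a minimal sketch, assuming httpx is the transport google-genai uses in your environment (it is in ours); the logged request URL shows whether /v1/ or /v1beta1/ is in the path:

```python
import logging

# httpx emits one log record per request at INFO level, including the full URL.
# The /v1/ vs. /v1beta1/ segment in that URL shows where the SDK routed the call.
logging.basicConfig(level=logging.INFO)
logging.getLogger("httpx").setLevel(logging.INFO)
```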
Steps to reproduce
1. Initialize client
from google import genai
from google.genai import types
client = genai.Client(
vertexai=True,
project="PROJECT_ID",
location="us-central1",
)
2. Call Veo 3.1 Fast GA
operation = await client.aio.models.generate_videos(
model="veo-3.1-fast-generate-001",
prompt="A cinematic drone shot over ocean cliffs at golden hour",
config=types.GenerateVideosConfig(
number_of_videos=1,
duration_seconds=4,
aspect_ratio="16:9",
),
)
3. Observe error
Full error (a combined, runnable version of steps 1 and 2 follows this output):
ClientError: 400 INVALID_ARGUMENT. {
'error': {
'code': 400,
'message': 'Invalid resource field value in the request.',
'status': 'INVALID_ARGUMENT',
'details': [{
'@type': 'type.googleapis.com/google.rpc.ErrorInfo',
'reason': 'RESOURCE_PROJECT_INVALID',
'domain': 'googleapis.com',
'metadata': {
'method': 'google.cloud.aiplatform.v1beta1.PredictionService.PredictLongRunning',
'service': 'aiplatform.googleapis.com'
}
}]
}
}
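For completeness, here is a self-contained version of steps 1 and 2, wrapped in asyncio so it can be run as a plain script (PROJECT_ID is a placeholder). In our runs the ClientError above is raised directly by the awaited generate_videos() call:

```python
import asyncio

from google import genai
from google.genai import types


async def main() -> None:
    client = genai.Client(
        vertexai=True,
        project="PROJECT_ID",
        location="us-central1",
    )
    # This call raises ClientError 400 INVALID_ARGUMENT (RESOURCE_PROJECT_INVALID) for us.
    operation = await client.aio.models.generate_videos(
        model="veo-3.1-fast-generate-001",
        prompt="A cinematic drone shot over ocean cliffs at golden hour",
        config=types.GenerateVideosConfig(
            number_of_videos=1,
            duration_seconds=4,
            aspect_ratio="16:9",
        ),
    )
    print(operation)  # Never reached in our environment.


if __name__ == "__main__":
    asyncio.run(main())
```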
Control test (works)
Calling the documented REST endpoint directly works:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/veo-3.1-fast-generate-001:predictLongRunning
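For reference, this is roughly the request we send for the control test, expressed with httpx. The access-token handling is an assumption for illustration, and the instances/parameters field names follow the Vertex AI Veo REST documentation as we understand it:

```python
import os

import httpx

# Assumption: ACCESS_TOKEN holds a valid OAuth2 access token
# (e.g. from `gcloud auth print-access-token`); PROJECT_ID is a placeholder.
token = os.environ["ACCESS_TOKEN"]

url = (
    "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID"
    "/locations/us-central1/publishers/google/models/"
    "veo-3.1-fast-generate-001:predictLongRunning"
)
body = {
    "instances": [{"prompt": "A cinematic drone shot over ocean cliffs at golden hour"}],
    "parameters": {"sampleCount": 1, "durationSeconds": 4, "aspectRatio": "16:9"},
}

resp = httpx.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())
```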
Returns:
200 OK
{
"name": "projects/.../operations/..."
}
This confirms:
- Project is enabled for Veo
- Billing is enabled
- Region is correct
- IAM permissions are correct
- Model access is valid
Expected behavior
The SDK should route Veo 3.1 GA model requests to the v1 PredictionService.PredictLongRunning method, matching the documented REST behavior.
Actual behavior
The SDK routes to google.cloud.aiplatform.v1beta1.PredictionService.PredictLongRunning, which results in RESOURCE_PROJECT_INVALID.
Additional Notes
- Gemini models work correctly via the same client configuration.
- Issue reproduces consistently in both Windows 11 and Docker (python:3.11.8-slim).
- Raw REST calls confirm product entitlement is correct.
- Appears specific to Veo GA long-running video generation when using google-genai with vertexai=True.
Hypothesis
It appears that the SDK may still be routing Veo long-running publisher model calls through the v1beta1 PredictionService when vertexai=True is enabled, while the GA Veo model requires the v1 endpoint.
If so, the endpoint mapping for long-running video generation models may need to be updated internally within the SDK.
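If the routing hypothesis is right, a possible workaround, which we have not verified, is to pin the API version on the client via the documented http_options override so that requests go to the v1 surface:

```python
from google import genai
from google.genai import types

# Assumption: HttpOptions(api_version="v1") overrides the default API version for
# Vertex AI requests, so the SDK should call .../v1/... instead of .../v1beta1/...
client = genai.Client(
    vertexai=True,
    project="PROJECT_ID",
    location="us-central1",
    http_options=types.HttpOptions(api_version="v1"),
)
```

If the same generate_videos() call succeeds with this client, that would further support the hypothesis that the default API-version routing, rather than anything project-side, is at fault.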