diff --git a/README.md b/README.md
index c4e6bd6c..076ba8e4 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@
 - [Overview](#-overview)
 - [Features](#-features)
 - [Quick Start](#-quick-start)
+- [API Access](#-api-access)
 - [Advanced Setup](#-advanced-setup)
 - [Development](#-development)
 - [Testing LLM Agents](#-testing-llm-agents)
@@ -47,11 +48,12 @@ You can watch the video **PentAGI overview**:
 - 📝 Detailed Reporting. Generation of thorough vulnerability reports with exploitation guides.
 - 📦 Smart Container Management. Automatic Docker image selection based on specific task requirements.
 - 📱 Modern Interface. Clean and intuitive web UI for system management and monitoring.
-- 🔌 API Integration. Support for REST and GraphQL APIs for seamless external system integration.
+- 🔌 Comprehensive APIs. Full-featured REST and GraphQL APIs with Bearer token authentication for automation and integration.
 - 💾 Persistent Storage. All commands and outputs are stored in PostgreSQL with [pgvector](https://hub.docker.com/r/vxcontrol/pgvector) extension.
 - 🎯 Scalable Architecture. Microservices-based design supporting horizontal scaling.
 - 🏠 Self-Hosted Solution. Complete control over your deployment and data.
 - 🔑 Flexible Authentication. Support for various LLM providers ([OpenAI](https://platform.openai.com/), [Anthropic](https://www.anthropic.com/), [Ollama](https://ollama.com/), [AWS Bedrock](https://aws.amazon.com/bedrock/), [Google AI/Gemini](https://ai.google.dev/), [Deep Infra](https://deepinfra.com/), [OpenRouter](https://openrouter.ai/), [DeepSeek](https://www.deepseek.com/en), [Moonshot](https://platform.moonshot.ai/)) and custom configurations.
+- 🔐 API Token Authentication. Secure Bearer token system for programmatic access to REST and GraphQL APIs.
+- ⚡ Quick Deployment. Easy setup through [Docker Compose](https://docs.docker.com/compose/) with comprehensive environment configuration.
## 🏗️ Architecture @@ -429,7 +431,7 @@ The architecture of PentAGI is designed to be modular, scalable, and secure. Her 1. **Core Services** - Frontend UI: React-based web interface with TypeScript for type safety - - Backend API: Go-based REST and GraphQL APIs for flexible integration + - Backend API: Go-based REST and GraphQL APIs with Bearer token authentication for programmatic access - Vector Store: PostgreSQL with pgvector for semantic search and memory storage - Task Queue: Async task processing system for reliable operation - AI Agent: Multi-agent system with specialized roles for efficient testing @@ -675,6 +677,307 @@ The `ASSISTANT_USE_AGENTS` setting affects the initial state of the "Use Agents" Note that users can always override this setting by toggling the "Use Agents" button in the UI when creating or editing an assistant. This environment variable only controls the initial default state. +## 🔌 API Access + +PentAGI provides comprehensive programmatic access through both REST and GraphQL APIs, allowing you to integrate penetration testing workflows into your automation pipelines, CI/CD processes, and custom applications. + +### Generating API Tokens + +API tokens are managed through the PentAGI web interface: + +1. Navigate to **Settings** → **API Tokens** in the web UI +2. Click **Create Token** to generate a new API token +3. Configure token properties: + - **Name** (optional): A descriptive name for the token + - **Expiration Date**: When the token will expire (minimum 1 minute, maximum 3 years) +4. Click **Create** and **copy the token immediately** - it will only be shown once for security reasons +5. Use the token as a Bearer token in your API requests + +Each token is associated with your user account and inherits your role's permissions. 
+ +### Using API Tokens + +Include the API token in the `Authorization` header of your HTTP requests: + +```bash +# GraphQL API example +curl -X POST https://your-pentagi-instance:8443/api/v1/graphql \ + -H "Authorization: Bearer YOUR_API_TOKEN" \ + -H "Content-Type: application/json" \ + -d '{"query": "{ flows { id title status } }"}' + +# REST API example +curl https://your-pentagi-instance:8443/api/v1/flows \ + -H "Authorization: Bearer YOUR_API_TOKEN" +``` + +### API Exploration and Testing + +PentAGI provides interactive documentation for exploring and testing API endpoints: + +#### GraphQL Playground + +Access the GraphQL Playground at `https://your-pentagi-instance:8443/api/v1/graphql/playground` + +1. Click the **HTTP Headers** tab at the bottom +2. Add your authorization header: + ```json + { + "Authorization": "Bearer YOUR_API_TOKEN" + } + ``` +3. Explore the schema, run queries, and test mutations interactively + +#### Swagger UI + +Access the REST API documentation at `https://your-pentagi-instance:8443/api/v1/swagger/index.html` + +1. Click the **Authorize** button +2. Enter your token in the format: `Bearer YOUR_API_TOKEN` +3. Click **Authorize** to apply +4. 
Test endpoints directly from the Swagger UI + +### Generating API Clients + +You can generate type-safe API clients for your preferred programming language using the schema files included with PentAGI: + +#### GraphQL Clients + +The GraphQL schema is available at: +- **Web UI**: Navigate to Settings to download `schema.graphqls` +- **Direct file**: `backend/pkg/graph/schema.graphqls` in the repository + +Generate clients using tools like: +- **GraphQL Code Generator** (JavaScript/TypeScript): [https://the-guild.dev/graphql/codegen](https://the-guild.dev/graphql/codegen) +- **genqlient** (Go): [https://github.com/Khan/genqlient](https://github.com/Khan/genqlient) +- **Apollo iOS** (Swift): [https://www.apollographql.com/docs/ios](https://www.apollographql.com/docs/ios) + +#### REST API Clients + +The OpenAPI specification is available at: +- **Swagger JSON**: `https://your-pentagi-instance:8443/api/v1/swagger/doc.json` +- **Swagger YAML**: Available in `backend/pkg/server/docs/swagger.yaml` + +Generate clients using: +- **OpenAPI Generator**: [https://openapi-generator.tech](https://openapi-generator.tech) + ```bash + openapi-generator-cli generate \ + -i https://your-pentagi-instance:8443/api/v1/swagger/doc.json \ + -g python \ + -o ./pentagi-client + ``` + +- **Swagger Codegen**: [https://github.com/swagger-api/swagger-codegen](https://github.com/swagger-api/swagger-codegen) + ```bash + swagger-codegen generate \ + -i https://your-pentagi-instance:8443/api/v1/swagger/doc.json \ + -l typescript-axios \ + -o ./pentagi-client + ``` + +- **swagger-typescript-api** (TypeScript): [https://github.com/acacode/swagger-typescript-api](https://github.com/acacode/swagger-typescript-api) + ```bash + npx swagger-typescript-api \ + -p https://your-pentagi-instance:8443/api/v1/swagger/doc.json \ + -o ./src/api \ + -n pentagi-api.ts + ``` + +### API Usage Examples + +
+<details>
+<summary>Creating a New Flow (GraphQL)</summary>
+
+```graphql
+mutation CreateFlow {
+  createFlow(
+    modelProvider: "openai"
+    input: "Test the security of https://example.com"
+  ) {
+    id
+    title
+    status
+    createdAt
+  }
+}
+```
+
+</details>
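When sending this mutation programmatically, it is usually cleaner to pass the arguments as GraphQL variables instead of inlining them. A minimal Python sketch that only builds the request body (the endpoint path matches the curl examples in this section; `build_create_flow_body` is an illustrative helper, and no network call is made):

```python
import json

# Hypothetical placeholder values -- substitute your own instance URL and token.
ENDPOINT = "https://your-pentagi-instance:8443/api/v1/graphql"

# The same CreateFlow mutation, parameterized with GraphQL variables.
CREATE_FLOW_MUTATION = """
mutation CreateFlow($provider: String!, $input: String!) {
  createFlow(modelProvider: $provider, input: $input) {
    id
    title
    status
    createdAt
  }
}
"""

def build_create_flow_body(provider: str, target: str) -> str:
    """Serialize the JSON request body expected by the GraphQL endpoint."""
    return json.dumps({
        "query": CREATE_FLOW_MUTATION,
        "variables": {"provider": provider, "input": target},
    })

body = build_create_flow_body("openai", "Test the security of https://example.com")
# Send with e.g.: requests.post(ENDPOINT, data=body,
#     headers={"Authorization": "Bearer YOUR_API_TOKEN", "Content-Type": "application/json"})
```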
+ +
+<details>
+<summary>Listing Flows (REST API)</summary>
+
+```bash
+curl https://your-pentagi-instance:8443/api/v1/flows \
+  -H "Authorization: Bearer YOUR_API_TOKEN" \
+  | jq '.flows[] | {id, title, status}'
+```
+
+</details>
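The same listing can be consumed from Python instead of the shell; the jq filter above corresponds to a simple comprehension. A sketch assuming the `{"flows": [...]}` response shape used elsewhere in these examples (the sample payload is illustrative, not real API output):

```python
def summarize_flows(response: dict) -> list:
    """Equivalent of jq '.flows[] | {id, title, status}'."""
    return [
        {"id": f["id"], "title": f["title"], "status": f["status"]}
        for f in response.get("flows", [])
    ]

# Illustrative sample payload, not real API output.
sample = {
    "flows": [
        {"id": 1, "title": "Scan example.com", "status": "finished", "createdAt": "2025-01-01T00:00:00Z"},
        {"id": 2, "title": "Audit API", "status": "running", "createdAt": "2025-01-02T00:00:00Z"},
    ]
}
print(summarize_flows(sample))
```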
+ +
+<details>
+<summary>Python Client Example</summary>
+
+```python
+import requests
+
+class PentAGIClient:
+    def __init__(self, base_url, api_token):
+        self.base_url = base_url
+        self.headers = {
+            "Authorization": f"Bearer {api_token}",
+            "Content-Type": "application/json"
+        }
+
+    def create_flow(self, provider, target):
+        query = """
+        mutation CreateFlow($provider: String!, $input: String!) {
+            createFlow(modelProvider: $provider, input: $input) {
+                id
+                title
+                status
+            }
+        }
+        """
+        response = requests.post(
+            f"{self.base_url}/api/v1/graphql",
+            json={
+                "query": query,
+                "variables": {
+                    "provider": provider,
+                    "input": target
+                }
+            },
+            headers=self.headers
+        )
+        return response.json()
+
+    def get_flows(self):
+        response = requests.get(
+            f"{self.base_url}/api/v1/flows",
+            headers=self.headers
+        )
+        return response.json()
+
+# Usage
+client = PentAGIClient(
+    "https://your-pentagi-instance:8443",
+    "your_api_token_here"
+)
+
+# Create a new flow
+flow = client.create_flow("openai", "Scan https://example.com for vulnerabilities")
+print(f"Created flow: {flow}")
+
+# List all flows
+flows = client.get_flows()
+print(f"Total flows: {len(flows['flows'])}")
+```
+
+</details>
+ +
+<details>
+<summary>TypeScript Client Example</summary>
+
+```typescript
+import axios, { AxiosInstance } from 'axios';
+
+interface Flow {
+  id: string;
+  title: string;
+  status: string;
+  createdAt: string;
+}
+
+class PentAGIClient {
+  private client: AxiosInstance;
+
+  constructor(baseURL: string, apiToken: string) {
+    this.client = axios.create({
+      baseURL: `${baseURL}/api/v1`,
+      headers: {
+        'Authorization': `Bearer ${apiToken}`,
+        'Content-Type': 'application/json',
+      },
+    });
+  }
+
+  async createFlow(provider: string, input: string): Promise<Flow> {
+    const query = `
+      mutation CreateFlow($provider: String!, $input: String!) {
+        createFlow(modelProvider: $provider, input: $input) {
+          id
+          title
+          status
+          createdAt
+        }
+      }
+    `;
+
+    const response = await this.client.post('/graphql', {
+      query,
+      variables: { provider, input },
+    });
+
+    return response.data.data.createFlow;
+  }
+
+  async getFlows(): Promise<Flow[]> {
+    const response = await this.client.get('/flows');
+    return response.data.flows;
+  }
+
+  async getFlow(flowId: string): Promise<Flow> {
+    const response = await this.client.get(`/flows/${flowId}`);
+    return response.data;
+  }
+}
+
+// Usage
+const client = new PentAGIClient(
+  'https://your-pentagi-instance:8443',
+  'your_api_token_here'
+);
+
+// Create a new flow
+const flow = await client.createFlow(
+  'openai',
+  'Perform penetration test on https://example.com'
+);
+console.log('Created flow:', flow);
+
+// List all flows
+const flows = await client.getFlows();
+console.log(`Total flows: ${flows.length}`);
+```
+
+</details>
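Both client examples take the token as a plain string; in practice it is safer to read it from the environment rather than hard-code it in source. A minimal sketch, assuming a `PENTAGI_API_TOKEN` environment variable (a hypothetical name of your choosing):

```python
import os

# Hypothetical variable name; export it in your shell or CI secret store,
# e.g.  export PENTAGI_API_TOKEN="..."
def api_headers() -> dict:
    """Build request headers from an environment-provided token."""
    token = os.environ.get("PENTAGI_API_TOKEN")
    if not token:
        raise RuntimeError("PENTAGI_API_TOKEN is not set")
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
```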
+ +### Security Best Practices + +When working with API tokens: + +- **Never commit tokens to version control** - use environment variables or secrets management +- **Rotate tokens regularly** - set appropriate expiration dates and create new tokens periodically +- **Use separate tokens for different applications** - makes it easier to revoke access if needed +- **Monitor token usage** - review API token activity in the Settings page +- **Revoke unused tokens** - disable or delete tokens that are no longer needed +- **Use HTTPS only** - never send API tokens over unencrypted connections + +### Token Management + +- **View tokens**: See all your active tokens in Settings → API Tokens +- **Edit tokens**: Update token names or revoke tokens +- **Delete tokens**: Permanently remove tokens (this action cannot be undone) +- **Token ID**: Each token has a unique ID that can be copied for reference + +The token list shows: +- Token name (if provided) +- Token ID (unique identifier) +- Status (active/revoked/expired) +- Creation date +- Expiration date + ### Custom LLM Provider Configuration When using custom LLM providers with the `LLM_SERVER_*` variables, you can fine-tune the reasoning format used in requests: diff --git a/backend/go.mod b/backend/go.mod index e1b258db..6d878616 100644 --- a/backend/go.mod +++ b/backend/go.mod @@ -163,6 +163,7 @@ require ( github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-isatty v0.0.20 // indirect github.com/mattn/go-localereader v0.0.1 // indirect + github.com/mattn/go-sqlite3 v1.14.24 // indirect github.com/mfridman/interpolate v0.0.2 // indirect github.com/microcosm-cc/bluemonday v1.0.27 // indirect github.com/moby/docker-image-spec v1.3.1 // indirect diff --git a/backend/migrations/sql/20260218_150000_api_tokens.sql b/backend/migrations/sql/20260218_150000_api_tokens.sql new file mode 100644 index 00000000..7f1a91d6 --- /dev/null +++ b/backend/migrations/sql/20260218_150000_api_tokens.sql @@ -0,0 +1,67 @@ +-- 
+goose Up +-- +goose StatementBegin +CREATE TYPE TOKEN_STATUS AS ENUM ('active', 'revoked'); + +CREATE TABLE api_tokens ( + id BIGINT PRIMARY KEY GENERATED ALWAYS AS IDENTITY, + token_id TEXT NOT NULL, + user_id BIGINT NOT NULL REFERENCES users(id) ON DELETE CASCADE, + role_id BIGINT NOT NULL REFERENCES roles(id), + name TEXT NULL, + ttl BIGINT NOT NULL, + status TOKEN_STATUS NOT NULL DEFAULT 'active', + created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP, + updated_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP, + deleted_at TIMESTAMPTZ NULL, + + CONSTRAINT api_tokens_token_id_unique UNIQUE (token_id) +); + +-- Partial unique index for name per user (only when name is not null and not deleted) +CREATE UNIQUE INDEX api_tokens_name_user_unique_idx ON api_tokens(name, user_id) + WHERE name IS NOT NULL AND deleted_at IS NULL; + +CREATE INDEX api_tokens_token_id_idx ON api_tokens(token_id); +CREATE INDEX api_tokens_user_id_idx ON api_tokens(user_id); +CREATE INDEX api_tokens_status_idx ON api_tokens(status); +CREATE INDEX api_tokens_deleted_at_idx ON api_tokens(deleted_at); + +CREATE TRIGGER update_api_tokens_modified + BEFORE UPDATE ON api_tokens + FOR EACH ROW EXECUTE PROCEDURE update_modified_column(); + +-- Add privileges for Admin role (role_id = 1) +INSERT INTO privileges (role_id, name) VALUES + (1, 'settings.tokens.admin'), + (1, 'settings.tokens.create'), + (1, 'settings.tokens.view'), + (1, 'settings.tokens.edit'), + (1, 'settings.tokens.delete'), + (1, 'settings.tokens.subscribe') + ON CONFLICT DO NOTHING; + +-- Add privileges for User role (role_id = 2) +INSERT INTO privileges (role_id, name) VALUES + (2, 'settings.tokens.create'), + (2, 'settings.tokens.view'), + (2, 'settings.tokens.edit'), + (2, 'settings.tokens.delete'), + (2, 'settings.tokens.subscribe') + ON CONFLICT DO NOTHING; +-- +goose StatementEnd + +-- +goose Down +-- +goose StatementBegin +DELETE FROM privileges WHERE name IN ( + 'settings.tokens.create', + 'settings.tokens.view', + 
'settings.tokens.edit', + 'settings.tokens.delete', + 'settings.tokens.admin', + 'settings.tokens.subscribe' +); + +DROP INDEX IF EXISTS api_tokens_name_user_unique_idx; +DROP TABLE IF EXISTS api_tokens; +DROP TYPE IF EXISTS TOKEN_STATUS; +-- +goose StatementEnd diff --git a/backend/pkg/database/api_token_with_secret.go b/backend/pkg/database/api_token_with_secret.go new file mode 100644 index 00000000..29ba3bd7 --- /dev/null +++ b/backend/pkg/database/api_token_with_secret.go @@ -0,0 +1,6 @@ +package database + +type APITokenWithSecret struct { + ApiToken + Token string `json:"token"` +} diff --git a/backend/pkg/database/api_tokens.sql.go b/backend/pkg/database/api_tokens.sql.go new file mode 100644 index 00000000..62bb366c --- /dev/null +++ b/backend/pkg/database/api_tokens.sql.go @@ -0,0 +1,409 @@ +// Code generated by sqlc. DO NOT EDIT. +// versions: +// sqlc v1.27.0 +// source: api_tokens.sql + +package database + +import ( + "context" + "database/sql" +) + +const createAPIToken = `-- name: CreateAPIToken :one +INSERT INTO api_tokens ( + token_id, + user_id, + role_id, + name, + ttl, + status +) VALUES ( + $1, $2, $3, $4, $5, $6 +) +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +type CreateAPITokenParams struct { + TokenID string `json:"token_id"` + UserID int64 `json:"user_id"` + RoleID int64 `json:"role_id"` + Name sql.NullString `json:"name"` + Ttl int64 `json:"ttl"` + Status TokenStatus `json:"status"` +} + +func (q *Queries) CreateAPIToken(ctx context.Context, arg CreateAPITokenParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, createAPIToken, + arg.TokenID, + arg.UserID, + arg.RoleID, + arg.Name, + arg.Ttl, + arg.Status, + ) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const deleteAPIToken = `-- name: DeleteAPIToken :one +UPDATE api_tokens 
+SET deleted_at = CURRENT_TIMESTAMP +WHERE id = $1 +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +func (q *Queries) DeleteAPIToken(ctx context.Context, id int64) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, deleteAPIToken, id) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const deleteUserAPIToken = `-- name: DeleteUserAPIToken :one +UPDATE api_tokens +SET deleted_at = CURRENT_TIMESTAMP +WHERE id = $1 AND user_id = $2 +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +type DeleteUserAPITokenParams struct { + ID int64 `json:"id"` + UserID int64 `json:"user_id"` +} + +func (q *Queries) DeleteUserAPIToken(ctx context.Context, arg DeleteUserAPITokenParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, deleteUserAPIToken, arg.ID, arg.UserID) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const deleteUserAPITokenByTokenID = `-- name: DeleteUserAPITokenByTokenID :one +UPDATE api_tokens +SET deleted_at = CURRENT_TIMESTAMP +WHERE token_id = $1 AND user_id = $2 +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +type DeleteUserAPITokenByTokenIDParams struct { + TokenID string `json:"token_id"` + UserID int64 `json:"user_id"` +} + +func (q *Queries) DeleteUserAPITokenByTokenID(ctx context.Context, arg DeleteUserAPITokenByTokenIDParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, deleteUserAPITokenByTokenID, arg.TokenID, arg.UserID) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + 
return i, err +} + +const getAPIToken = `-- name: GetAPIToken :one +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +WHERE t.id = $1 AND t.deleted_at IS NULL +` + +func (q *Queries) GetAPIToken(ctx context.Context, id int64) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, getAPIToken, id) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const getAPITokenByTokenID = `-- name: GetAPITokenByTokenID :one +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +WHERE t.token_id = $1 AND t.deleted_at IS NULL +` + +func (q *Queries) GetAPITokenByTokenID(ctx context.Context, tokenID string) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, getAPITokenByTokenID, tokenID) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const getAPITokens = `-- name: GetAPITokens :many +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +WHERE t.deleted_at IS NULL +ORDER BY t.created_at DESC +` + +func (q *Queries) GetAPITokens(ctx context.Context) ([]ApiToken, error) { + rows, err := q.db.QueryContext(ctx, getAPITokens) + if err != nil { + return nil, err + } + defer rows.Close() + var items []ApiToken + for rows.Next() { + var i ApiToken + if err := rows.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ); err != nil { + return nil, err + } + items = append(items, i) + } + if err := rows.Close(); err != nil { + return nil, err + } + if err := rows.Err(); err != nil 
{ + return nil, err + } + return items, nil +} + +const getUserAPIToken = `-- name: GetUserAPIToken :one +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.id = $1 AND t.user_id = $2 AND t.deleted_at IS NULL +` + +type GetUserAPITokenParams struct { + ID int64 `json:"id"` + UserID int64 `json:"user_id"` +} + +func (q *Queries) GetUserAPIToken(ctx context.Context, arg GetUserAPITokenParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, getUserAPIToken, arg.ID, arg.UserID) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const getUserAPITokenByTokenID = `-- name: GetUserAPITokenByTokenID :one +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.token_id = $1 AND t.user_id = $2 AND t.deleted_at IS NULL +` + +type GetUserAPITokenByTokenIDParams struct { + TokenID string `json:"token_id"` + UserID int64 `json:"user_id"` +} + +func (q *Queries) GetUserAPITokenByTokenID(ctx context.Context, arg GetUserAPITokenByTokenIDParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, getUserAPITokenByTokenID, arg.TokenID, arg.UserID) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const getUserAPITokens = `-- name: GetUserAPITokens :many +SELECT + t.id, t.token_id, t.user_id, t.role_id, t.name, t.ttl, t.status, t.created_at, t.updated_at, t.deleted_at +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.user_id = $1 AND t.deleted_at IS NULL +ORDER BY t.created_at DESC +` + +func (q *Queries) 
GetUserAPITokens(ctx context.Context, userID int64) ([]ApiToken, error) { + rows, err := q.db.QueryContext(ctx, getUserAPITokens, userID) + if err != nil { + return nil, err + } + defer rows.Close() + var items []ApiToken + for rows.Next() { + var i ApiToken + if err := rows.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ); err != nil { + return nil, err + } + items = append(items, i) + } + if err := rows.Close(); err != nil { + return nil, err + } + if err := rows.Err(); err != nil { + return nil, err + } + return items, nil +} + +const updateAPIToken = `-- name: UpdateAPIToken :one +UPDATE api_tokens +SET name = $2, status = $3 +WHERE id = $1 +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +type UpdateAPITokenParams struct { + ID int64 `json:"id"` + Name sql.NullString `json:"name"` + Status TokenStatus `json:"status"` +} + +func (q *Queries) UpdateAPIToken(ctx context.Context, arg UpdateAPITokenParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, updateAPIToken, arg.ID, arg.Name, arg.Status) + var i ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} + +const updateUserAPIToken = `-- name: UpdateUserAPIToken :one +UPDATE api_tokens +SET name = $3, status = $4 +WHERE id = $1 AND user_id = $2 +RETURNING id, token_id, user_id, role_id, name, ttl, status, created_at, updated_at, deleted_at +` + +type UpdateUserAPITokenParams struct { + ID int64 `json:"id"` + UserID int64 `json:"user_id"` + Name sql.NullString `json:"name"` + Status TokenStatus `json:"status"` +} + +func (q *Queries) UpdateUserAPIToken(ctx context.Context, arg UpdateUserAPITokenParams) (ApiToken, error) { + row := q.db.QueryRowContext(ctx, updateUserAPIToken, + arg.ID, + arg.UserID, + arg.Name, + arg.Status, + ) + var i 
ApiToken + err := row.Scan( + &i.ID, + &i.TokenID, + &i.UserID, + &i.RoleID, + &i.Name, + &i.Ttl, + &i.Status, + &i.CreatedAt, + &i.UpdatedAt, + &i.DeletedAt, + ) + return i, err +} diff --git a/backend/pkg/database/converter/converter.go b/backend/pkg/database/converter/converter.go index 82d459a8..601c1904 100644 --- a/backend/pkg/database/converter/converter.go +++ b/backend/pkg/database/converter/converter.go @@ -401,6 +401,72 @@ func ConvertPrompt(prompt database.Prompt) *model.UserPrompt { } } +func ConvertAPIToken(token database.ApiToken) *model.APIToken { + var name *string + if token.Name.Valid { + name = &token.Name.String + } + + return &model.APIToken{ + ID: token.ID, + TokenID: token.TokenID, + UserID: token.UserID, + RoleID: token.RoleID, + Name: name, + TTL: int(token.Ttl), + Status: model.TokenStatus(token.Status), + CreatedAt: token.CreatedAt.Time, + UpdatedAt: token.UpdatedAt.Time, + } +} + +func ConvertAPITokenRemoveSecret(token database.APITokenWithSecret) *model.APIToken { + var name *string + if token.Name.Valid { + name = &token.Name.String + } + + return &model.APIToken{ + ID: token.ID, + TokenID: token.TokenID, + UserID: token.UserID, + RoleID: token.RoleID, + Name: name, + TTL: int(token.Ttl), + Status: model.TokenStatus(token.Status), + CreatedAt: token.CreatedAt.Time, + UpdatedAt: token.UpdatedAt.Time, + } +} + +func ConvertAPITokenWithSecret(token database.APITokenWithSecret) *model.APITokenWithSecret { + var name *string + if token.Name.Valid { + name = &token.Name.String + } + + return &model.APITokenWithSecret{ + ID: token.ID, + TokenID: token.TokenID, + UserID: token.UserID, + RoleID: token.RoleID, + Name: name, + TTL: int(token.Ttl), + Status: model.TokenStatus(token.Status), + CreatedAt: token.CreatedAt.Time, + UpdatedAt: token.UpdatedAt.Time, + Token: token.Token, + } +} + +func ConvertAPITokens(tokens []database.ApiToken) []*model.APIToken { + result := make([]*model.APIToken, 0, len(tokens)) + for _, token := range tokens { + 
result = append(result, ConvertAPIToken(token)) + } + return result +} + func ConvertModels(models pconfig.ModelsConfig) []*model.ModelConfig { gmodels := make([]*model.ModelConfig, 0, len(models)) for _, m := range models { diff --git a/backend/pkg/database/models.go b/backend/pkg/database/models.go index 6bf5f931..ff0c5730 100644 --- a/backend/pkg/database/models.go +++ b/backend/pkg/database/models.go @@ -639,6 +639,48 @@ func (ns NullTermlogType) Value() (driver.Value, error) { return string(ns.TermlogType), nil } +type TokenStatus string + +const ( + TokenStatusActive TokenStatus = "active" + TokenStatusRevoked TokenStatus = "revoked" +) + +func (e *TokenStatus) Scan(src interface{}) error { + switch s := src.(type) { + case []byte: + *e = TokenStatus(s) + case string: + *e = TokenStatus(s) + default: + return fmt.Errorf("unsupported scan type for TokenStatus: %T", src) + } + return nil +} + +type NullTokenStatus struct { + TokenStatus TokenStatus `json:"token_status"` + Valid bool `json:"valid"` // Valid is true if TokenStatus is not NULL +} + +// Scan implements the Scanner interface. +func (ns *NullTokenStatus) Scan(value interface{}) error { + if value == nil { + ns.TokenStatus, ns.Valid = "", false + return nil + } + ns.Valid = true + return ns.TokenStatus.Scan(value) +} + +// Value implements the driver Valuer interface. 
+func (ns NullTokenStatus) Value() (driver.Value, error) { + if !ns.Valid { + return nil, nil + } + return string(ns.TokenStatus), nil +} + type ToolcallStatus string const ( @@ -822,6 +864,19 @@ type Agentlog struct { CreatedAt sql.NullTime `json:"created_at"` } +type ApiToken struct { + ID int64 `json:"id"` + TokenID string `json:"token_id"` + UserID int64 `json:"user_id"` + RoleID int64 `json:"role_id"` + Name sql.NullString `json:"name"` + Ttl int64 `json:"ttl"` + Status TokenStatus `json:"status"` + CreatedAt sql.NullTime `json:"created_at"` + UpdatedAt sql.NullTime `json:"updated_at"` + DeletedAt sql.NullTime `json:"deleted_at"` +} + type Assistant struct { ID int64 `json:"id"` Status AssistantStatus `json:"status"` diff --git a/backend/pkg/database/querier.go b/backend/pkg/database/querier.go index c6e41747..0d94b65f 100644 --- a/backend/pkg/database/querier.go +++ b/backend/pkg/database/querier.go @@ -10,6 +10,7 @@ import ( ) type Querier interface { + CreateAPIToken(ctx context.Context, arg CreateAPITokenParams) (ApiToken, error) CreateAgentLog(ctx context.Context, arg CreateAgentLogParams) (Agentlog, error) CreateAssistant(ctx context.Context, arg CreateAssistantParams) (Assistant, error) CreateAssistantLog(ctx context.Context, arg CreateAssistantLogParams) (Assistantlog, error) @@ -29,6 +30,7 @@ type Querier interface { CreateUser(ctx context.Context, arg CreateUserParams) (User, error) CreateUserPrompt(ctx context.Context, arg CreateUserPromptParams) (Prompt, error) CreateVectorStoreLog(ctx context.Context, arg CreateVectorStoreLogParams) (Vecstorelog, error) + DeleteAPIToken(ctx context.Context, id int64) (ApiToken, error) DeleteAssistant(ctx context.Context, id int64) (Assistant, error) DeleteFlow(ctx context.Context, id int64) (Flow, error) DeletePrompt(ctx context.Context, id int64) error @@ -36,8 +38,13 @@ type Querier interface { DeleteSubtask(ctx context.Context, id int64) error DeleteSubtasks(ctx context.Context, ids []int64) error 
DeleteUser(ctx context.Context, id int64) error + DeleteUserAPIToken(ctx context.Context, arg DeleteUserAPITokenParams) (ApiToken, error) + DeleteUserAPITokenByTokenID(ctx context.Context, arg DeleteUserAPITokenByTokenIDParams) (ApiToken, error) DeleteUserPrompt(ctx context.Context, arg DeleteUserPromptParams) error DeleteUserProvider(ctx context.Context, arg DeleteUserProviderParams) (Provider, error) + GetAPIToken(ctx context.Context, id int64) (ApiToken, error) + GetAPITokenByTokenID(ctx context.Context, tokenID string) (ApiToken, error) + GetAPITokens(ctx context.Context) ([]ApiToken, error) // Get toolcalls stats for all flows GetAllFlowsToolcallsStats(ctx context.Context) ([]GetAllFlowsToolcallsStatsRow, error) GetAllFlowsUsageStats(ctx context.Context) ([]GetAllFlowsUsageStatsRow, error) @@ -160,6 +167,9 @@ type Querier interface { GetUsageStatsByType(ctx context.Context, userID int64) ([]GetUsageStatsByTypeRow, error) GetUsageStatsByTypeForFlow(ctx context.Context, flowID int64) ([]GetUsageStatsByTypeForFlowRow, error) GetUser(ctx context.Context, id int64) (GetUserRow, error) + GetUserAPIToken(ctx context.Context, arg GetUserAPITokenParams) (ApiToken, error) + GetUserAPITokenByTokenID(ctx context.Context, arg GetUserAPITokenByTokenIDParams) (ApiToken, error) + GetUserAPITokens(ctx context.Context, userID int64) ([]ApiToken, error) GetUserByHash(ctx context.Context, hash string) (GetUserByHashRow, error) GetUserContainers(ctx context.Context, userID int64) ([]Container, error) GetUserFlow(ctx context.Context, arg GetUserFlowParams) (Flow, error) @@ -191,6 +201,7 @@ type Querier interface { GetUserTotalToolcallsStats(ctx context.Context, userID int64) (GetUserTotalToolcallsStatsRow, error) GetUserTotalUsageStats(ctx context.Context, userID int64) (GetUserTotalUsageStatsRow, error) GetUsers(ctx context.Context) ([]GetUsersRow, error) + UpdateAPIToken(ctx context.Context, arg UpdateAPITokenParams) (ApiToken, error) UpdateAssistant(ctx context.Context, arg 
UpdateAssistantParams) (Assistant, error) UpdateAssistantLanguage(ctx context.Context, arg UpdateAssistantLanguageParams) (Assistant, error) UpdateAssistantLog(ctx context.Context, arg UpdateAssistantLogParams) (Assistantlog, error) @@ -228,6 +239,7 @@ type Querier interface { UpdateToolcallFailedResult(ctx context.Context, arg UpdateToolcallFailedResultParams) (Toolcall, error) UpdateToolcallFinishedResult(ctx context.Context, arg UpdateToolcallFinishedResultParams) (Toolcall, error) UpdateToolcallStatus(ctx context.Context, arg UpdateToolcallStatusParams) (Toolcall, error) + UpdateUserAPIToken(ctx context.Context, arg UpdateUserAPITokenParams) (ApiToken, error) UpdateUserName(ctx context.Context, arg UpdateUserNameParams) (User, error) UpdateUserPassword(ctx context.Context, arg UpdateUserPasswordParams) (User, error) UpdateUserPasswordChangeRequired(ctx context.Context, arg UpdateUserPasswordChangeRequiredParams) (User, error) diff --git a/backend/pkg/graph/context.go b/backend/pkg/graph/context.go index 5818ad6b..4d17ed5f 100644 --- a/backend/pkg/graph/context.go +++ b/backend/pkg/graph/context.go @@ -4,9 +4,10 @@ import ( "context" "errors" "fmt" - "pentagi/pkg/database" "regexp" "slices" + + "pentagi/pkg/database" ) // This file will not be regenerated automatically. 
@@ -15,10 +16,13 @@ import (
 
 var permAdminRegexp = regexp.MustCompile(`^(.+)\.[a-z]+$`)
 
+var userSessionTypes = []string{"local", "oauth"}
+
 type GqlContextKey string
 
 const (
 	UserIDKey       GqlContextKey = "userID"
+	UserTypeKey     GqlContextKey = "userType"
 	UserPermissions GqlContextKey = "userPermissions"
 )
 
@@ -34,6 +38,18 @@ func SetUserID(ctx context.Context, userID uint64) context.Context {
 	return context.WithValue(ctx, UserIDKey, userID)
 }
 
+func GetUserType(ctx context.Context) (string, error) {
+	userType, ok := ctx.Value(UserTypeKey).(string)
+	if !ok {
+		return "", errors.New("user type not found")
+	}
+	return userType, nil
+}
+
+func SetUserType(ctx context.Context, userType string) context.Context {
+	return context.WithValue(ctx, UserTypeKey, userType)
+}
+
 func GetUserPermissions(ctx context.Context) ([]string, error) {
 	userPermissions, ok := ctx.Value(UserPermissions).([]string)
 	if !ok {
@@ -46,6 +62,19 @@ func SetUserPermissions(ctx context.Context, userPermissions []string) context.C
 	return context.WithValue(ctx, UserPermissions, userPermissions)
 }
 
+func validateUserType(ctx context.Context, userTypes ...string) (bool, error) {
+	userType, err := GetUserType(ctx)
+	if err != nil {
+		return false, fmt.Errorf("unauthorized: invalid user type: %v", err)
+	}
+
+	if !slices.Contains(userTypes, userType) {
+		return false, fmt.Errorf("unauthorized: invalid user type: %s", userType)
+	}
+
+	return true, nil
+}
+
 func validatePermission(ctx context.Context, perm string) (int64, bool, error) {
 	uid, err := GetUserID(ctx)
 	if err != nil {
diff --git a/backend/pkg/graph/generated.go b/backend/pkg/graph/generated.go
index f348dea6..f281f56a 100644
--- a/backend/pkg/graph/generated.go
+++ b/backend/pkg/graph/generated.go
@@ -50,6 +50,31 @@ type DirectiveRoot struct {
 }
 
 type ComplexityRoot struct {
+	APIToken struct {
+		CreatedAt func(childComplexity int) int
+		ID        func(childComplexity int) int
+		Name      func(childComplexity int) int
+		RoleID    func(childComplexity int) int
+		Status    func(childComplexity int) int
+		TTL       func(childComplexity int) int
+		TokenID   func(childComplexity int) int
+		UpdatedAt func(childComplexity int) int
+		UserID    func(childComplexity int) int
+	}
+
+	APITokenWithSecret struct {
+		CreatedAt func(childComplexity int) int
+		ID        func(childComplexity int) int
+		Name      func(childComplexity int) int
+		RoleID    func(childComplexity int) int
+		Status    func(childComplexity int) int
+		TTL       func(childComplexity int) int
+		Token     func(childComplexity int) int
+		TokenID   func(childComplexity int) int
+		UpdatedAt func(childComplexity int) int
+		UserID    func(childComplexity int) int
+	}
+
 	AgentConfig struct {
 		FrequencyPenalty func(childComplexity int) int
 		MaxLength        func(childComplexity int) int
@@ -269,10 +294,12 @@ type ComplexityRoot struct {
 	Mutation struct {
 		CallAssistant   func(childComplexity int, flowID int64, assistantID int64, input string, useAgents bool) int
+		CreateAPIToken  func(childComplexity int, input model.CreateAPITokenInput) int
 		CreateAssistant func(childComplexity int, flowID int64, modelProvider string, input string, useAgents bool) int
 		CreateFlow      func(childComplexity int, modelProvider string, input string) int
 		CreatePrompt    func(childComplexity int, typeArg model.PromptType, template string) int
 		CreateProvider  func(childComplexity int, name string, typeArg model.ProviderType, agents model.AgentsConfig) int
+		DeleteAPIToken  func(childComplexity int, tokenID string) int
 		DeleteAssistant func(childComplexity int, flowID int64, assistantID int64) int
 		DeleteFlow      func(childComplexity int, flowID int64) int
 		DeletePrompt    func(childComplexity int, promptID int64) int
@@ -283,6 +310,7 @@ type ComplexityRoot struct {
 		StopFlow       func(childComplexity int, flowID int64) int
 		TestAgent      func(childComplexity int, typeArg model.ProviderType, agentType model.AgentConfigType, agent model.AgentConfig) int
 		TestProvider   func(childComplexity int, typeArg model.ProviderType, agents model.AgentsConfig) int
+		UpdateAPIToken func(childComplexity int, tokenID string, input model.UpdateAPITokenInput) int
 		UpdatePrompt   func(childComplexity int, promptID int64, template string) int
 		UpdateProvider func(childComplexity int, providerID int64, name string, agents model.AgentsConfig) int
 		ValidatePrompt func(childComplexity int, typeArg model.PromptType, template string) int
@@ -362,6 +390,8 @@ type ComplexityRoot struct {
 	}
 
 	Query struct {
+		APIToken      func(childComplexity int, tokenID string) int
+		APITokens     func(childComplexity int) int
 		AgentLogs     func(childComplexity int, flowID int64) int
 		AssistantLogs func(childComplexity int, flowID int64, assistantID int64) int
 		Assistants    func(childComplexity int, flowID int64) int
@@ -431,6 +461,9 @@ type ComplexityRoot struct {
 	}
 
 	Subscription struct {
+		APITokenCreated  func(childComplexity int) int
+		APITokenDeleted  func(childComplexity int) int
+		APITokenUpdated  func(childComplexity int) int
 		AgentLogAdded    func(childComplexity int, flowID int64) int
 		AssistantCreated func(childComplexity int, flowID int64) int
 		AssistantDeleted func(childComplexity int, flowID int64) int
@@ -589,6 +622,9 @@ type MutationResolver interface {
 	CreatePrompt(ctx context.Context, typeArg model.PromptType, template string) (*model.UserPrompt, error)
 	UpdatePrompt(ctx context.Context, promptID int64, template string) (*model.UserPrompt, error)
 	DeletePrompt(ctx context.Context, promptID int64) (model.ResultType, error)
+	CreateAPIToken(ctx context.Context, input model.CreateAPITokenInput) (*model.APITokenWithSecret, error)
+	UpdateAPIToken(ctx context.Context, tokenID string, input model.UpdateAPITokenInput) (*model.APIToken, error)
+	DeleteAPIToken(ctx context.Context, tokenID string) (bool, error)
 }
 type QueryResolver interface {
 	Providers(ctx context.Context) ([]*model.Provider, error)
@@ -622,6 +658,8 @@ type QueryResolver interface {
 	Settings(ctx context.Context) (*model.Settings, error)
 	SettingsProviders(ctx context.Context) (*model.ProvidersConfig, error)
 	SettingsPrompts(ctx context.Context) (*model.PromptsConfig, error)
+	APIToken(ctx context.Context, tokenID string) (*model.APIToken, error)
+	APITokens(ctx context.Context) ([]*model.APIToken, error)
 }
 type SubscriptionResolver interface {
 	FlowCreated(ctx context.Context) (<-chan *model.Flow, error)
@@ -644,6 +682,9 @@ type SubscriptionResolver interface {
 	ProviderCreated(ctx context.Context) (<-chan *model.ProviderConfig, error)
 	ProviderUpdated(ctx context.Context) (<-chan *model.ProviderConfig, error)
 	ProviderDeleted(ctx context.Context) (<-chan *model.ProviderConfig, error)
+	APITokenCreated(ctx context.Context) (<-chan *model.APIToken, error)
+	APITokenUpdated(ctx context.Context) (<-chan *model.APIToken, error)
+	APITokenDeleted(ctx context.Context) (<-chan *model.APIToken, error)
 }
 
 type executableSchema struct {
@@ -665,6 +706,139 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in
 	_ = ec
 	switch typeName + "." + field {
 
+	case "APIToken.createdAt":
+		if e.complexity.APIToken.CreatedAt == nil {
+			break
+		}
+
+		return e.complexity.APIToken.CreatedAt(childComplexity), true
+
+	case "APIToken.id":
+		if e.complexity.APIToken.ID == nil {
+			break
+		}
+
+		return e.complexity.APIToken.ID(childComplexity), true
+
+	case "APIToken.name":
+		if e.complexity.APIToken.Name == nil {
+			break
+		}
+
+		return e.complexity.APIToken.Name(childComplexity), true
+
+	case "APIToken.roleId":
+		if e.complexity.APIToken.RoleID == nil {
+			break
+		}
+
+		return e.complexity.APIToken.RoleID(childComplexity), true
+
+	case "APIToken.status":
+		if e.complexity.APIToken.Status == nil {
+			break
+		}
+
+		return e.complexity.APIToken.Status(childComplexity), true
+
+	case "APIToken.ttl":
+		if e.complexity.APIToken.TTL == nil {
+			break
+		}
+
+		return e.complexity.APIToken.TTL(childComplexity), true
+
+	case "APIToken.tokenId":
+		if e.complexity.APIToken.TokenID == nil {
+			break
+		}
+
+		return e.complexity.APIToken.TokenID(childComplexity), true
+
+	case "APIToken.updatedAt":
+		if
e.complexity.APIToken.UpdatedAt == nil { + break + } + + return e.complexity.APIToken.UpdatedAt(childComplexity), true + + case "APIToken.userId": + if e.complexity.APIToken.UserID == nil { + break + } + + return e.complexity.APIToken.UserID(childComplexity), true + + case "APITokenWithSecret.createdAt": + if e.complexity.APITokenWithSecret.CreatedAt == nil { + break + } + + return e.complexity.APITokenWithSecret.CreatedAt(childComplexity), true + + case "APITokenWithSecret.id": + if e.complexity.APITokenWithSecret.ID == nil { + break + } + + return e.complexity.APITokenWithSecret.ID(childComplexity), true + + case "APITokenWithSecret.name": + if e.complexity.APITokenWithSecret.Name == nil { + break + } + + return e.complexity.APITokenWithSecret.Name(childComplexity), true + + case "APITokenWithSecret.roleId": + if e.complexity.APITokenWithSecret.RoleID == nil { + break + } + + return e.complexity.APITokenWithSecret.RoleID(childComplexity), true + + case "APITokenWithSecret.status": + if e.complexity.APITokenWithSecret.Status == nil { + break + } + + return e.complexity.APITokenWithSecret.Status(childComplexity), true + + case "APITokenWithSecret.ttl": + if e.complexity.APITokenWithSecret.TTL == nil { + break + } + + return e.complexity.APITokenWithSecret.TTL(childComplexity), true + + case "APITokenWithSecret.token": + if e.complexity.APITokenWithSecret.Token == nil { + break + } + + return e.complexity.APITokenWithSecret.Token(childComplexity), true + + case "APITokenWithSecret.tokenId": + if e.complexity.APITokenWithSecret.TokenID == nil { + break + } + + return e.complexity.APITokenWithSecret.TokenID(childComplexity), true + + case "APITokenWithSecret.updatedAt": + if e.complexity.APITokenWithSecret.UpdatedAt == nil { + break + } + + return e.complexity.APITokenWithSecret.UpdatedAt(childComplexity), true + + case "APITokenWithSecret.userId": + if e.complexity.APITokenWithSecret.UserID == nil { + break + } + + return 
e.complexity.APITokenWithSecret.UserID(childComplexity), true + case "AgentConfig.frequencyPenalty": if e.complexity.AgentConfig.FrequencyPenalty == nil { break @@ -1650,6 +1824,18 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in return e.complexity.Mutation.CallAssistant(childComplexity, args["flowId"].(int64), args["assistantId"].(int64), args["input"].(string), args["useAgents"].(bool)), true + case "Mutation.createAPIToken": + if e.complexity.Mutation.CreateAPIToken == nil { + break + } + + args, err := ec.field_Mutation_createAPIToken_args(context.TODO(), rawArgs) + if err != nil { + return 0, false + } + + return e.complexity.Mutation.CreateAPIToken(childComplexity, args["input"].(model.CreateAPITokenInput)), true + case "Mutation.createAssistant": if e.complexity.Mutation.CreateAssistant == nil { break @@ -1698,6 +1884,18 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in return e.complexity.Mutation.CreateProvider(childComplexity, args["name"].(string), args["type"].(model.ProviderType), args["agents"].(model.AgentsConfig)), true + case "Mutation.deleteAPIToken": + if e.complexity.Mutation.DeleteAPIToken == nil { + break + } + + args, err := ec.field_Mutation_deleteAPIToken_args(context.TODO(), rawArgs) + if err != nil { + return 0, false + } + + return e.complexity.Mutation.DeleteAPIToken(childComplexity, args["tokenId"].(string)), true + case "Mutation.deleteAssistant": if e.complexity.Mutation.DeleteAssistant == nil { break @@ -1818,6 +2016,18 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in return e.complexity.Mutation.TestProvider(childComplexity, args["type"].(model.ProviderType), args["agents"].(model.AgentsConfig)), true + case "Mutation.updateAPIToken": + if e.complexity.Mutation.UpdateAPIToken == nil { + break + } + + args, err := ec.field_Mutation_updateAPIToken_args(context.TODO(), rawArgs) + if err != nil { + return 0, false + } + + return 
e.complexity.Mutation.UpdateAPIToken(childComplexity, args["tokenId"].(string), args["input"].(model.UpdateAPITokenInput)), true + case "Mutation.updatePrompt": if e.complexity.Mutation.UpdatePrompt == nil { break @@ -2176,6 +2386,25 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in return e.complexity.ProvidersReadinessStatus.Openai(childComplexity), true + case "Query.apiToken": + if e.complexity.Query.APIToken == nil { + break + } + + args, err := ec.field_Query_apiToken_args(context.TODO(), rawArgs) + if err != nil { + return 0, false + } + + return e.complexity.Query.APIToken(childComplexity, args["tokenId"].(string)), true + + case "Query.apiTokens": + if e.complexity.Query.APITokens == nil { + break + } + + return e.complexity.Query.APITokens(childComplexity), true + case "Query.agentLogs": if e.complexity.Query.AgentLogs == nil { break @@ -2649,6 +2878,27 @@ func (e *executableSchema) Complexity(typeName, field string, childComplexity in return e.complexity.Settings.DockerInside(childComplexity), true + case "Subscription.apiTokenCreated": + if e.complexity.Subscription.APITokenCreated == nil { + break + } + + return e.complexity.Subscription.APITokenCreated(childComplexity), true + + case "Subscription.apiTokenDeleted": + if e.complexity.Subscription.APITokenDeleted == nil { + break + } + + return e.complexity.Subscription.APITokenDeleted(childComplexity), true + + case "Subscription.apiTokenUpdated": + if e.complexity.Subscription.APITokenUpdated == nil { + break + } + + return e.complexity.Subscription.APITokenUpdated(childComplexity), true + case "Subscription.agentLogAdded": if e.complexity.Subscription.AgentLogAdded == nil { break @@ -3429,8 +3679,10 @@ func (e *executableSchema) Exec(ctx context.Context) graphql.ResponseHandler { inputUnmarshalMap := graphql.BuildUnmarshalerMap( ec.unmarshalInputAgentConfigInput, ec.unmarshalInputAgentsConfigInput, + ec.unmarshalInputCreateAPITokenInput, 
ec.unmarshalInputModelPriceInput, ec.unmarshalInputReasoningConfigInput, + ec.unmarshalInputUpdateAPITokenInput, ) first := true @@ -3677,6 +3929,38 @@ func (ec *executionContext) field_Mutation_callAssistant_argsUseAgents( return zeroVal, nil } +func (ec *executionContext) field_Mutation_createAPIToken_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { + var err error + args := map[string]interface{}{} + arg0, err := ec.field_Mutation_createAPIToken_argsInput(ctx, rawArgs) + if err != nil { + return nil, err + } + args["input"] = arg0 + return args, nil +} +func (ec *executionContext) field_Mutation_createAPIToken_argsInput( + ctx context.Context, + rawArgs map[string]interface{}, +) (model.CreateAPITokenInput, error) { + // We won't call the directive if the argument is null. + // Set call_argument_directives_with_null to true to call directives + // even if the argument is null. + _, ok := rawArgs["input"] + if !ok { + var zeroVal model.CreateAPITokenInput + return zeroVal, nil + } + + ctx = graphql.WithPathContext(ctx, graphql.NewPathWithField("input")) + if tmp, ok := rawArgs["input"]; ok { + return ec.unmarshalNCreateAPITokenInput2pentagiᚋpkgᚋgraphᚋmodelᚐCreateAPITokenInput(ctx, tmp) + } + + var zeroVal model.CreateAPITokenInput + return zeroVal, nil +} + func (ec *executionContext) field_Mutation_createAssistant_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { var err error args := map[string]interface{}{} @@ -3994,6 +4278,38 @@ func (ec *executionContext) field_Mutation_createProvider_argsAgents( return zeroVal, nil } +func (ec *executionContext) field_Mutation_deleteAPIToken_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { + var err error + args := map[string]interface{}{} + arg0, err := ec.field_Mutation_deleteAPIToken_argsTokenID(ctx, rawArgs) + if err != nil { + return nil, err + } + args["tokenId"] = arg0 + return args, nil +} 
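The argument-unmarshalling plumbing above services the new `createAPIToken` mutation; from a client's perspective that mutation is simply a GraphQL POST authenticated with an existing Bearer token, as described in the API Access section of the README. The sketch below builds such a request. The `/graphql` path, the host, and the `CreateAPITokenInput` field names (`name`, `ttl`) are assumptions for illustration; only the mutation name and the `APITokenWithSecret` fields (`tokenId`, `token`) come from this diff.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildCreateTokenRequest prepares a GraphQL call to the createAPIToken mutation.
// Input field names ("name", "ttl") are hypothetical; consult the schema for the
// real CreateAPITokenInput shape.
func buildCreateTokenRequest(baseURL, bearerToken string) (*http.Request, error) {
	payload := map[string]any{
		"query": `mutation CreateToken($input: CreateAPITokenInput!) {
			createAPIToken(input: $input) { tokenId token }
		}`,
		"variables": map[string]any{
			"input": map[string]any{"name": "ci-pipeline", "ttl": 3600},
		},
	}
	body, err := json.Marshal(payload)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/graphql", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	// Existing tokens authenticate as Bearer credentials, per the API Access docs.
	req.Header.Set("Authorization", "Bearer "+bearerToken)
	return req, nil
}

func main() {
	req, err := buildCreateTokenRequest("https://pentagi.example.com", "pt-example-token")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.Path, req.Header.Get("Authorization"))
	// → POST /graphql Bearer pt-example-token
}
```

Note that the response's `token` field is the only place the secret is ever returned, which matches the README's warning to copy the token immediately after creation.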
+func (ec *executionContext) field_Mutation_deleteAPIToken_argsTokenID( + ctx context.Context, + rawArgs map[string]interface{}, +) (string, error) { + // We won't call the directive if the argument is null. + // Set call_argument_directives_with_null to true to call directives + // even if the argument is null. + _, ok := rawArgs["tokenId"] + if !ok { + var zeroVal string + return zeroVal, nil + } + + ctx = graphql.WithPathContext(ctx, graphql.NewPathWithField("tokenId")) + if tmp, ok := rawArgs["tokenId"]; ok { + return ec.unmarshalNString2string(ctx, tmp) + } + + var zeroVal string + return zeroVal, nil +} + func (ec *executionContext) field_Mutation_deleteAssistant_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { var err error args := map[string]interface{}{} @@ -4476,6 +4792,65 @@ func (ec *executionContext) field_Mutation_testProvider_argsAgents( return zeroVal, nil } +func (ec *executionContext) field_Mutation_updateAPIToken_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { + var err error + args := map[string]interface{}{} + arg0, err := ec.field_Mutation_updateAPIToken_argsTokenID(ctx, rawArgs) + if err != nil { + return nil, err + } + args["tokenId"] = arg0 + arg1, err := ec.field_Mutation_updateAPIToken_argsInput(ctx, rawArgs) + if err != nil { + return nil, err + } + args["input"] = arg1 + return args, nil +} +func (ec *executionContext) field_Mutation_updateAPIToken_argsTokenID( + ctx context.Context, + rawArgs map[string]interface{}, +) (string, error) { + // We won't call the directive if the argument is null. + // Set call_argument_directives_with_null to true to call directives + // even if the argument is null. 
+ _, ok := rawArgs["tokenId"] + if !ok { + var zeroVal string + return zeroVal, nil + } + + ctx = graphql.WithPathContext(ctx, graphql.NewPathWithField("tokenId")) + if tmp, ok := rawArgs["tokenId"]; ok { + return ec.unmarshalNString2string(ctx, tmp) + } + + var zeroVal string + return zeroVal, nil +} + +func (ec *executionContext) field_Mutation_updateAPIToken_argsInput( + ctx context.Context, + rawArgs map[string]interface{}, +) (model.UpdateAPITokenInput, error) { + // We won't call the directive if the argument is null. + // Set call_argument_directives_with_null to true to call directives + // even if the argument is null. + _, ok := rawArgs["input"] + if !ok { + var zeroVal model.UpdateAPITokenInput + return zeroVal, nil + } + + ctx = graphql.WithPathContext(ctx, graphql.NewPathWithField("input")) + if tmp, ok := rawArgs["input"]; ok { + return ec.unmarshalNUpdateAPITokenInput2pentagiᚋpkgᚋgraphᚋmodelᚐUpdateAPITokenInput(ctx, tmp) + } + + var zeroVal model.UpdateAPITokenInput + return zeroVal, nil +} + func (ec *executionContext) field_Mutation_updatePrompt_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { var err error args := map[string]interface{}{} @@ -4744,6 +5119,38 @@ func (ec *executionContext) field_Query_agentLogs_argsFlowID( return zeroVal, nil } +func (ec *executionContext) field_Query_apiToken_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { + var err error + args := map[string]interface{}{} + arg0, err := ec.field_Query_apiToken_argsTokenID(ctx, rawArgs) + if err != nil { + return nil, err + } + args["tokenId"] = arg0 + return args, nil +} +func (ec *executionContext) field_Query_apiToken_argsTokenID( + ctx context.Context, + rawArgs map[string]interface{}, +) (string, error) { + // We won't call the directive if the argument is null. + // Set call_argument_directives_with_null to true to call directives + // even if the argument is null. 
+ _, ok := rawArgs["tokenId"] + if !ok { + var zeroVal string + return zeroVal, nil + } + + ctx = graphql.WithPathContext(ctx, graphql.NewPathWithField("tokenId")) + if tmp, ok := rawArgs["tokenId"]; ok { + return ec.unmarshalNString2string(ctx, tmp) + } + + var zeroVal string + return zeroVal, nil +} + func (ec *executionContext) field_Query_assistantLogs_args(ctx context.Context, rawArgs map[string]interface{}) (map[string]interface{}, error) { var err error args := map[string]interface{}{} @@ -5867,6 +6274,836 @@ func (ec *executionContext) field___Type_fields_argsIncludeDeprecated( // region **************************** field.gotpl ***************************** +func (ec *executionContext) _APIToken_id(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_id(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.ID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_id(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have child fields") + }, + } + return fc, nil +} + 
+func (ec *executionContext) _APIToken_tokenId(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_tokenId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.TokenID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(string) + fc.Result = res + return ec.marshalNString2string(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_tokenId(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type String does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_userId(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_userId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.UserID, nil + }) + if err != nil { + ec.Error(ctx, err) + return 
graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_userId(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_roleId(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_roleId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.RoleID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_roleId(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have 
child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_name(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_name(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.Name, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + return graphql.Null + } + res := resTmp.(*string) + fc.Result = res + return ec.marshalOString2ᚖstring(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_name(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type String does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_ttl(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_ttl(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.TTL, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if 
!graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int) + fc.Result = res + return ec.marshalNInt2int(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_ttl(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Int does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_status(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_status(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.Status, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(model.TokenStatus) + fc.Result = res + return ec.marshalNTokenStatus2pentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_status(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type TokenStatus 
does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_createdAt(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_createdAt(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.CreatedAt, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(time.Time) + fc.Result = res + return ec.marshalNTime2timeᚐTime(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_createdAt(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Time does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APIToken_updatedAt(ctx context.Context, field graphql.CollectedField, obj *model.APIToken) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APIToken_updatedAt(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return 
obj.UpdatedAt, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(time.Time) + fc.Result = res + return ec.marshalNTime2timeᚐTime(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APIToken_updatedAt(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APIToken", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Time does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_id(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_id(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.ID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_id(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, 
field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_tokenId(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_tokenId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.TokenID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(string) + fc.Result = res + return ec.marshalNString2string(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_tokenId(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type String does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_userId(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_userId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + 
}() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.UserID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_userId(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_roleId(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_roleId(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.RoleID, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int64) + fc.Result = res + return ec.marshalNID2int64(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_roleId(_ context.Context, field graphql.CollectedField) (fc 
*graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type ID does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_name(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_name(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.Name, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + return graphql.Null + } + res := resTmp.(*string) + fc.Result = res + return ec.marshalOString2ᚖstring(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_name(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type String does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_ttl(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_ttl(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + 
if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.TTL, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(int) + fc.Result = res + return ec.marshalNInt2int(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_ttl(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Int does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_status(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_status(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.Status, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(model.TokenStatus) + fc.Result = res + return ec.marshalNTokenStatus2pentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx, 
field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_status(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type TokenStatus does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_createdAt(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_createdAt(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.CreatedAt, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(time.Time) + fc.Result = res + return ec.marshalNTime2timeᚐTime(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_createdAt(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Time does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_updatedAt(ctx 
context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_updatedAt(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.UpdatedAt, nil + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(time.Time) + fc.Result = res + return ec.marshalNTime2timeᚐTime(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_updatedAt(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Time does not have child fields") + }, + } + return fc, nil +} + +func (ec *executionContext) _APITokenWithSecret_token(ctx context.Context, field graphql.CollectedField, obj *model.APITokenWithSecret) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_APITokenWithSecret_token(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return obj.Token, nil + }) + if err != nil { + 
ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(string) + fc.Result = res + return ec.marshalNString2string(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_APITokenWithSecret_token(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "APITokenWithSecret", + Field: field, + IsMethod: false, + IsResolver: false, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type String does not have child fields") + }, + } + return fc, nil +} + func (ec *executionContext) _AgentConfig_model(ctx context.Context, field graphql.CollectedField, obj *model.AgentConfig) (ret graphql.Marshaler) { fc, err := ec.fieldContext_AgentConfig_model(ctx, field) if err != nil { @@ -13785,6 +15022,213 @@ func (ec *executionContext) fieldContext_Mutation_deletePrompt(ctx context.Conte return fc, nil } +func (ec *executionContext) _Mutation_createAPIToken(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_Mutation_createAPIToken(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Mutation().CreateAPIToken(rctx, fc.Args["input"].(model.CreateAPITokenInput)) + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := 
resTmp.(*model.APITokenWithSecret) + fc.Result = res + return ec.marshalNAPITokenWithSecret2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPITokenWithSecret(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_Mutation_createAPIToken(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Mutation", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_APITokenWithSecret_id(ctx, field) + case "tokenId": + return ec.fieldContext_APITokenWithSecret_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APITokenWithSecret_userId(ctx, field) + case "roleId": + return ec.fieldContext_APITokenWithSecret_roleId(ctx, field) + case "name": + return ec.fieldContext_APITokenWithSecret_name(ctx, field) + case "ttl": + return ec.fieldContext_APITokenWithSecret_ttl(ctx, field) + case "status": + return ec.fieldContext_APITokenWithSecret_status(ctx, field) + case "createdAt": + return ec.fieldContext_APITokenWithSecret_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_APITokenWithSecret_updatedAt(ctx, field) + case "token": + return ec.fieldContext_APITokenWithSecret_token(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type APITokenWithSecret", field.Name) + }, + } + defer func() { + if r := recover(); r != nil { + err = ec.Recover(ctx, r) + ec.Error(ctx, err) + } + }() + ctx = graphql.WithFieldContext(ctx, fc) + if fc.Args, err = ec.field_Mutation_createAPIToken_args(ctx, field.ArgumentMap(ec.Variables)); err != nil { + ec.Error(ctx, err) + return fc, err + } + return fc, nil +} + +func (ec *executionContext) _Mutation_updateAPIToken(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_Mutation_updateAPIToken(ctx, field) + if err != nil { + 
return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Mutation().UpdateAPIToken(rctx, fc.Args["tokenId"].(string), fc.Args["input"].(model.UpdateAPITokenInput)) + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(*model.APIToken) + fc.Result = res + return ec.marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_Mutation_updateAPIToken(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Mutation", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) + case "name": + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) + case "createdAt": + return ec.fieldContext_APIToken_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_APIToken_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) + }, + } + defer func() { + if r := recover(); r != nil { + err = ec.Recover(ctx, r) + 
ec.Error(ctx, err) + } + }() + ctx = graphql.WithFieldContext(ctx, fc) + if fc.Args, err = ec.field_Mutation_updateAPIToken_args(ctx, field.ArgumentMap(ec.Variables)); err != nil { + ec.Error(ctx, err) + return fc, err + } + return fc, nil +} + +func (ec *executionContext) _Mutation_deleteAPIToken(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_Mutation_deleteAPIToken(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Mutation().DeleteAPIToken(rctx, fc.Args["tokenId"].(string)) + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.(bool) + fc.Result = res + return ec.marshalNBoolean2bool(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_Mutation_deleteAPIToken(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Mutation", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + return nil, errors.New("field of type Boolean does not have child fields") + }, + } + defer func() { + if r := recover(); r != nil { + err = ec.Recover(ctx, r) + ec.Error(ctx, err) + } + }() + ctx = graphql.WithFieldContext(ctx, fc) + if fc.Args, err = ec.field_Mutation_deleteAPIToken_args(ctx, field.ArgumentMap(ec.Variables)); err != nil { + ec.Error(ctx, err) + return fc, err + } + return fc, nil +} + func (ec *executionContext) 
_PromptValidationResult_result(ctx context.Context, field graphql.CollectedField, obj *model.PromptValidationResult) (ret graphql.Marshaler) { fc, err := ec.fieldContext_PromptValidationResult_result(ctx, field) if err != nil { @@ -17951,6 +19395,142 @@ func (ec *executionContext) fieldContext_Query_settingsPrompts(_ context.Context return fc, nil } +func (ec *executionContext) _Query_apiToken(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_Query_apiToken(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Query().APIToken(rctx, fc.Args["tokenId"].(string)) + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + return graphql.Null + } + res := resTmp.(*model.APIToken) + fc.Result = res + return ec.marshalOAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_Query_apiToken(ctx context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Query", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) + case "name": + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, 
field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) + case "createdAt": + return ec.fieldContext_APIToken_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_APIToken_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) + }, + } + defer func() { + if r := recover(); r != nil { + err = ec.Recover(ctx, r) + ec.Error(ctx, err) + } + }() + ctx = graphql.WithFieldContext(ctx, fc) + if fc.Args, err = ec.field_Query_apiToken_args(ctx, field.ArgumentMap(ec.Variables)); err != nil { + ec.Error(ctx, err) + return fc, err + } + return fc, nil +} + +func (ec *executionContext) _Query_apiTokens(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { + fc, err := ec.fieldContext_Query_apiTokens(ctx, field) + if err != nil { + return graphql.Null + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = graphql.Null + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Query().APITokens(rctx) + }) + if err != nil { + ec.Error(ctx, err) + return graphql.Null + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return graphql.Null + } + res := resTmp.([]*model.APIToken) + fc.Result = res + return ec.marshalNAPIToken2ᚕᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPITokenᚄ(ctx, field.Selections, res) +} + +func (ec *executionContext) fieldContext_Query_apiTokens(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Query", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return 
ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) + case "name": + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) + case "createdAt": + return ec.fieldContext_APIToken_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_APIToken_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) + }, + } + return fc, nil +} + func (ec *executionContext) _Query___type(ctx context.Context, field graphql.CollectedField) (ret graphql.Marshaler) { fc, err := ec.fieldContext_Query___type(ctx, field) if err != nil { @@ -20616,8 +22196,152 @@ func (ec *executionContext) fieldContext_Subscription_providerCreated(_ context. 
return fc, nil } -func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { - fc, err := ec.fieldContext_Subscription_providerUpdated(ctx, field) +func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { + fc, err := ec.fieldContext_Subscription_providerUpdated(ctx, field) + if err != nil { + return nil + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = nil + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Subscription().ProviderUpdated(rctx) + }) + if err != nil { + ec.Error(ctx, err) + return nil + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return nil + } + return func(ctx context.Context) graphql.Marshaler { + select { + case res, ok := <-resTmp.(<-chan *model.ProviderConfig): + if !ok { + return nil + } + return graphql.WriterFunc(func(w io.Writer) { + w.Write([]byte{'{'}) + graphql.MarshalString(field.Alias).MarshalGQL(w) + w.Write([]byte{':'}) + ec.marshalNProviderConfig2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐProviderConfig(ctx, field.Selections, res).MarshalGQL(w) + w.Write([]byte{'}'}) + }) + case <-ctx.Done(): + return nil + } + } +} + +func (ec *executionContext) fieldContext_Subscription_providerUpdated(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Subscription", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_ProviderConfig_id(ctx, field) 
+ case "name": + return ec.fieldContext_ProviderConfig_name(ctx, field) + case "type": + return ec.fieldContext_ProviderConfig_type(ctx, field) + case "agents": + return ec.fieldContext_ProviderConfig_agents(ctx, field) + case "createdAt": + return ec.fieldContext_ProviderConfig_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_ProviderConfig_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type ProviderConfig", field.Name) + }, + } + return fc, nil +} + +func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { + fc, err := ec.fieldContext_Subscription_providerDeleted(ctx, field) + if err != nil { + return nil + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = nil + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Subscription().ProviderDeleted(rctx) + }) + if err != nil { + ec.Error(ctx, err) + return nil + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return nil + } + return func(ctx context.Context) graphql.Marshaler { + select { + case res, ok := <-resTmp.(<-chan *model.ProviderConfig): + if !ok { + return nil + } + return graphql.WriterFunc(func(w io.Writer) { + w.Write([]byte{'{'}) + graphql.MarshalString(field.Alias).MarshalGQL(w) + w.Write([]byte{':'}) + ec.marshalNProviderConfig2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐProviderConfig(ctx, field.Selections, res).MarshalGQL(w) + w.Write([]byte{'}'}) + }) + case <-ctx.Done(): + return nil + } + } +} + +func (ec *executionContext) fieldContext_Subscription_providerDeleted(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + 
Object: "Subscription", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_ProviderConfig_id(ctx, field) + case "name": + return ec.fieldContext_ProviderConfig_name(ctx, field) + case "type": + return ec.fieldContext_ProviderConfig_type(ctx, field) + case "agents": + return ec.fieldContext_ProviderConfig_agents(ctx, field) + case "createdAt": + return ec.fieldContext_ProviderConfig_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_ProviderConfig_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type ProviderConfig", field.Name) + }, + } + return fc, nil +} + +func (ec *executionContext) _Subscription_apiTokenCreated(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { + fc, err := ec.fieldContext_Subscription_apiTokenCreated(ctx, field) if err != nil { return nil } @@ -20630,7 +22354,7 @@ func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, f }() resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { ctx = rctx // use context from middleware stack in children - return ec.resolvers.Subscription().ProviderUpdated(rctx) + return ec.resolvers.Subscription().APITokenCreated(rctx) }) if err != nil { ec.Error(ctx, err) @@ -20644,7 +22368,7 @@ func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, f } return func(ctx context.Context) graphql.Marshaler { select { - case res, ok := <-resTmp.(<-chan *model.ProviderConfig): + case res, ok := <-resTmp.(<-chan *model.APIToken): if !ok { return nil } @@ -20652,7 +22376,7 @@ func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, f w.Write([]byte{'{'}) graphql.MarshalString(field.Alias).MarshalGQL(w) w.Write([]byte{':'}) - 
ec.marshalNProviderConfig2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐProviderConfig(ctx, field.Selections, res).MarshalGQL(w) + ec.marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, field.Selections, res).MarshalGQL(w) w.Write([]byte{'}'}) }) case <-ctx.Done(): @@ -20661,7 +22385,7 @@ func (ec *executionContext) _Subscription_providerUpdated(ctx context.Context, f } } -func (ec *executionContext) fieldContext_Subscription_providerUpdated(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { +func (ec *executionContext) fieldContext_Subscription_apiTokenCreated(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { fc = &graphql.FieldContext{ Object: "Subscription", Field: field, @@ -20670,26 +22394,32 @@ func (ec *executionContext) fieldContext_Subscription_providerUpdated(_ context. Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { switch field.Name { case "id": - return ec.fieldContext_ProviderConfig_id(ctx, field) + return ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) case "name": - return ec.fieldContext_ProviderConfig_name(ctx, field) - case "type": - return ec.fieldContext_ProviderConfig_type(ctx, field) - case "agents": - return ec.fieldContext_ProviderConfig_agents(ctx, field) + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) case "createdAt": - return ec.fieldContext_ProviderConfig_createdAt(ctx, field) + return ec.fieldContext_APIToken_createdAt(ctx, field) case "updatedAt": - return ec.fieldContext_ProviderConfig_updatedAt(ctx, field) + return ec.fieldContext_APIToken_updatedAt(ctx, field) } - return nil, fmt.Errorf("no 
field named %q was found under type ProviderConfig", field.Name) + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) }, } return fc, nil } -func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { - fc, err := ec.fieldContext_Subscription_providerDeleted(ctx, field) +func (ec *executionContext) _Subscription_apiTokenUpdated(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { + fc, err := ec.fieldContext_Subscription_apiTokenUpdated(ctx, field) if err != nil { return nil } @@ -20702,7 +22432,7 @@ func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, f }() resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { ctx = rctx // use context from middleware stack in children - return ec.resolvers.Subscription().ProviderDeleted(rctx) + return ec.resolvers.Subscription().APITokenUpdated(rctx) }) if err != nil { ec.Error(ctx, err) @@ -20716,7 +22446,7 @@ func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, f } return func(ctx context.Context) graphql.Marshaler { select { - case res, ok := <-resTmp.(<-chan *model.ProviderConfig): + case res, ok := <-resTmp.(<-chan *model.APIToken): if !ok { return nil } @@ -20724,7 +22454,7 @@ func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, f w.Write([]byte{'{'}) graphql.MarshalString(field.Alias).MarshalGQL(w) w.Write([]byte{':'}) - ec.marshalNProviderConfig2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐProviderConfig(ctx, field.Selections, res).MarshalGQL(w) + ec.marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, field.Selections, res).MarshalGQL(w) w.Write([]byte{'}'}) }) case <-ctx.Done(): @@ -20733,7 +22463,7 @@ func (ec *executionContext) _Subscription_providerDeleted(ctx context.Context, f } } -func (ec *executionContext) 
fieldContext_Subscription_providerDeleted(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { +func (ec *executionContext) fieldContext_Subscription_apiTokenUpdated(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { fc = &graphql.FieldContext{ Object: "Subscription", Field: field, @@ -20742,19 +22472,103 @@ func (ec *executionContext) fieldContext_Subscription_providerDeleted(_ context. Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { switch field.Name { case "id": - return ec.fieldContext_ProviderConfig_id(ctx, field) + return ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) case "name": - return ec.fieldContext_ProviderConfig_name(ctx, field) - case "type": - return ec.fieldContext_ProviderConfig_type(ctx, field) - case "agents": - return ec.fieldContext_ProviderConfig_agents(ctx, field) + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) case "createdAt": - return ec.fieldContext_ProviderConfig_createdAt(ctx, field) + return ec.fieldContext_APIToken_createdAt(ctx, field) case "updatedAt": - return ec.fieldContext_ProviderConfig_updatedAt(ctx, field) + return ec.fieldContext_APIToken_updatedAt(ctx, field) } - return nil, fmt.Errorf("no field named %q was found under type ProviderConfig", field.Name) + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) + }, + } + return fc, nil +} + +func (ec *executionContext) _Subscription_apiTokenDeleted(ctx context.Context, field graphql.CollectedField) (ret func(ctx context.Context) graphql.Marshaler) { + fc, err := 
ec.fieldContext_Subscription_apiTokenDeleted(ctx, field) + if err != nil { + return nil + } + ctx = graphql.WithFieldContext(ctx, fc) + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = nil + } + }() + resTmp, err := ec.ResolverMiddleware(ctx, func(rctx context.Context) (interface{}, error) { + ctx = rctx // use context from middleware stack in children + return ec.resolvers.Subscription().APITokenDeleted(rctx) + }) + if err != nil { + ec.Error(ctx, err) + return nil + } + if resTmp == nil { + if !graphql.HasFieldError(ctx, fc) { + ec.Errorf(ctx, "must not be null") + } + return nil + } + return func(ctx context.Context) graphql.Marshaler { + select { + case res, ok := <-resTmp.(<-chan *model.APIToken): + if !ok { + return nil + } + return graphql.WriterFunc(func(w io.Writer) { + w.Write([]byte{'{'}) + graphql.MarshalString(field.Alias).MarshalGQL(w) + w.Write([]byte{':'}) + ec.marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, field.Selections, res).MarshalGQL(w) + w.Write([]byte{'}'}) + }) + case <-ctx.Done(): + return nil + } + } +} + +func (ec *executionContext) fieldContext_Subscription_apiTokenDeleted(_ context.Context, field graphql.CollectedField) (fc *graphql.FieldContext, err error) { + fc = &graphql.FieldContext{ + Object: "Subscription", + Field: field, + IsMethod: true, + IsResolver: true, + Child: func(ctx context.Context, field graphql.CollectedField) (*graphql.FieldContext, error) { + switch field.Name { + case "id": + return ec.fieldContext_APIToken_id(ctx, field) + case "tokenId": + return ec.fieldContext_APIToken_tokenId(ctx, field) + case "userId": + return ec.fieldContext_APIToken_userId(ctx, field) + case "roleId": + return ec.fieldContext_APIToken_roleId(ctx, field) + case "name": + return ec.fieldContext_APIToken_name(ctx, field) + case "ttl": + return ec.fieldContext_APIToken_ttl(ctx, field) + case "status": + return ec.fieldContext_APIToken_status(ctx, field) + case "createdAt": + return 
ec.fieldContext_APIToken_createdAt(ctx, field) + case "updatedAt": + return ec.fieldContext_APIToken_updatedAt(ctx, field) + } + return nil, fmt.Errorf("no field named %q was found under type APIToken", field.Name) }, } return fc, nil @@ -26347,6 +28161,40 @@ func (ec *executionContext) unmarshalInputAgentsConfigInput(ctx context.Context, return it, nil } +func (ec *executionContext) unmarshalInputCreateAPITokenInput(ctx context.Context, obj interface{}) (model.CreateAPITokenInput, error) { + var it model.CreateAPITokenInput + asMap := map[string]interface{}{} + for k, v := range obj.(map[string]interface{}) { + asMap[k] = v + } + + fieldsInOrder := [...]string{"name", "ttl"} + for _, k := range fieldsInOrder { + v, ok := asMap[k] + if !ok { + continue + } + switch k { + case "name": + ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("name")) + data, err := ec.unmarshalOString2ᚖstring(ctx, v) + if err != nil { + return it, err + } + it.Name = data + case "ttl": + ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("ttl")) + data, err := ec.unmarshalNInt2int(ctx, v) + if err != nil { + return it, err + } + it.TTL = data + } + } + + return it, nil +} + func (ec *executionContext) unmarshalInputModelPriceInput(ctx context.Context, obj interface{}) (model.ModelPrice, error) { var it model.ModelPrice asMap := map[string]interface{}{} @@ -26429,6 +28277,40 @@ func (ec *executionContext) unmarshalInputReasoningConfigInput(ctx context.Conte return it, nil } +func (ec *executionContext) unmarshalInputUpdateAPITokenInput(ctx context.Context, obj interface{}) (model.UpdateAPITokenInput, error) { + var it model.UpdateAPITokenInput + asMap := map[string]interface{}{} + for k, v := range obj.(map[string]interface{}) { + asMap[k] = v + } + + fieldsInOrder := [...]string{"name", "status"} + for _, k := range fieldsInOrder { + v, ok := asMap[k] + if !ok { + continue + } + switch k { + case "name": + ctx := graphql.WithPathContext(ctx, 
graphql.NewPathWithField("name")) + data, err := ec.unmarshalOString2ᚖstring(ctx, v) + if err != nil { + return it, err + } + it.Name = data + case "status": + ctx := graphql.WithPathContext(ctx, graphql.NewPathWithField("status")) + data, err := ec.unmarshalOTokenStatus2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx, v) + if err != nil { + return it, err + } + it.Status = data + } + } + + return it, nil +} + // endregion **************************** input.gotpl ***************************** // region ************************** interface.gotpl *************************** @@ -26437,6 +28319,163 @@ func (ec *executionContext) unmarshalInputReasoningConfigInput(ctx context.Conte // region **************************** object.gotpl **************************** +var aPITokenImplementors = []string{"APIToken"} + +func (ec *executionContext) _APIToken(ctx context.Context, sel ast.SelectionSet, obj *model.APIToken) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, aPITokenImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("APIToken") + case "id": + out.Values[i] = ec._APIToken_id(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "tokenId": + out.Values[i] = ec._APIToken_tokenId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "userId": + out.Values[i] = ec._APIToken_userId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "roleId": + out.Values[i] = ec._APIToken_roleId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "name": + out.Values[i] = ec._APIToken_name(ctx, field, obj) + case "ttl": + out.Values[i] = ec._APIToken_ttl(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "status": + out.Values[i] = ec._APIToken_status(ctx, 
field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "createdAt": + out.Values[i] = ec._APIToken_createdAt(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "updatedAt": + out.Values[i] = ec._APIToken_updatedAt(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + +var aPITokenWithSecretImplementors = []string{"APITokenWithSecret"} + +func (ec *executionContext) _APITokenWithSecret(ctx context.Context, sel ast.SelectionSet, obj *model.APITokenWithSecret) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, aPITokenWithSecretImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("APITokenWithSecret") + case "id": + out.Values[i] = ec._APITokenWithSecret_id(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "tokenId": + out.Values[i] = ec._APITokenWithSecret_tokenId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "userId": + out.Values[i] = ec._APITokenWithSecret_userId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "roleId": + out.Values[i] = ec._APITokenWithSecret_roleId(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "name": + out.Values[i] = ec._APITokenWithSecret_name(ctx, field, obj) + case "ttl": + out.Values[i] = ec._APITokenWithSecret_ttl(ctx, field, obj) + if out.Values[i] == 
graphql.Null { + out.Invalids++ + } + case "status": + out.Values[i] = ec._APITokenWithSecret_status(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "createdAt": + out.Values[i] = ec._APITokenWithSecret_createdAt(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "updatedAt": + out.Values[i] = ec._APITokenWithSecret_updatedAt(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "token": + out.Values[i] = ec._APITokenWithSecret_token(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + var agentConfigImplementors = []string{"AgentConfig"} func (ec *executionContext) _AgentConfig(ctx context.Context, sel ast.SelectionSet, obj *model.AgentConfig) graphql.Marshaler { @@ -28083,6 +30122,27 @@ func (ec *executionContext) _Mutation(ctx context.Context, sel ast.SelectionSet) if out.Values[i] == graphql.Null { out.Invalids++ } + case "createAPIToken": + out.Values[i] = ec.OperationContext.RootResolverMiddleware(innerCtx, func(ctx context.Context) (res graphql.Marshaler) { + return ec._Mutation_createAPIToken(ctx, field) + }) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "updateAPIToken": + out.Values[i] = ec.OperationContext.RootResolverMiddleware(innerCtx, func(ctx context.Context) (res graphql.Marshaler) { + return ec._Mutation_updateAPIToken(ctx, field) + }) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "deleteAPIToken": + out.Values[i] = ec.OperationContext.RootResolverMiddleware(innerCtx, func(ctx context.Context) (res 
graphql.Marshaler) { + return ec._Mutation_deleteAPIToken(ctx, field) + }) + if out.Values[i] == graphql.Null { + out.Invalids++ + } default: panic("unknown field " + strconv.Quote(field.Name)) } @@ -29285,6 +31345,47 @@ func (ec *executionContext) _Query(ctx context.Context, sel ast.SelectionSet) gr func(ctx context.Context) graphql.Marshaler { return innerFunc(ctx, out) }) } + out.Concurrently(i, func(ctx context.Context) graphql.Marshaler { return rrm(innerCtx) }) + case "apiToken": + field := field + + innerFunc := func(ctx context.Context, _ *graphql.FieldSet) (res graphql.Marshaler) { + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + } + }() + res = ec._Query_apiToken(ctx, field) + return res + } + + rrm := func(ctx context.Context) graphql.Marshaler { + return ec.OperationContext.RootResolverMiddleware(ctx, + func(ctx context.Context) graphql.Marshaler { return innerFunc(ctx, out) }) + } + + out.Concurrently(i, func(ctx context.Context) graphql.Marshaler { return rrm(innerCtx) }) + case "apiTokens": + field := field + + innerFunc := func(ctx context.Context, fs *graphql.FieldSet) (res graphql.Marshaler) { + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + } + }() + res = ec._Query_apiTokens(ctx, field) + if res == graphql.Null { + atomic.AddUint32(&fs.Invalids, 1) + } + return res + } + + rrm := func(ctx context.Context) graphql.Marshaler { + return ec.OperationContext.RootResolverMiddleware(ctx, + func(ctx context.Context) graphql.Marshaler { return innerFunc(ctx, out) }) + } + out.Concurrently(i, func(ctx context.Context) graphql.Marshaler { return rrm(innerCtx) }) case "__type": out.Values[i] = ec.OperationContext.RootResolverMiddleware(innerCtx, func(ctx context.Context) (res graphql.Marshaler) { @@ -29603,6 +31704,12 @@ func (ec *executionContext) _Subscription(ctx context.Context, sel ast.Selection return ec._Subscription_providerUpdated(ctx, fields[0]) case "providerDeleted": 
return ec._Subscription_providerDeleted(ctx, fields[0]) + case "apiTokenCreated": + return ec._Subscription_apiTokenCreated(ctx, fields[0]) + case "apiTokenUpdated": + return ec._Subscription_apiTokenUpdated(ctx, fields[0]) + case "apiTokenDeleted": + return ec._Subscription_apiTokenDeleted(ctx, fields[0]) default: panic("unknown field " + strconv.Quote(fields[0].Name)) } @@ -30499,228 +32606,300 @@ func (ec *executionContext) ___EnumValue(ctx context.Context, sel ast.SelectionS return out } -var __FieldImplementors = []string{"__Field"} - -func (ec *executionContext) ___Field(ctx context.Context, sel ast.SelectionSet, obj *introspection.Field) graphql.Marshaler { - fields := graphql.CollectFields(ec.OperationContext, sel, __FieldImplementors) - - out := graphql.NewFieldSet(fields) - deferred := make(map[string]*graphql.FieldSet) - for i, field := range fields { - switch field.Name { - case "__typename": - out.Values[i] = graphql.MarshalString("__Field") - case "name": - out.Values[i] = ec.___Field_name(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "description": - out.Values[i] = ec.___Field_description(ctx, field, obj) - case "args": - out.Values[i] = ec.___Field_args(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "type": - out.Values[i] = ec.___Field_type(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "isDeprecated": - out.Values[i] = ec.___Field_isDeprecated(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "deprecationReason": - out.Values[i] = ec.___Field_deprecationReason(ctx, field, obj) - default: - panic("unknown field " + strconv.Quote(field.Name)) - } - } - out.Dispatch(ctx) - if out.Invalids > 0 { - return graphql.Null - } - - atomic.AddInt32(&ec.deferred, int32(len(deferred))) +var __FieldImplementors = []string{"__Field"} + +func (ec *executionContext) ___Field(ctx context.Context, sel ast.SelectionSet, obj 
*introspection.Field) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, __FieldImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("__Field") + case "name": + out.Values[i] = ec.___Field_name(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "description": + out.Values[i] = ec.___Field_description(ctx, field, obj) + case "args": + out.Values[i] = ec.___Field_args(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "type": + out.Values[i] = ec.___Field_type(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "isDeprecated": + out.Values[i] = ec.___Field_isDeprecated(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "deprecationReason": + out.Values[i] = ec.___Field_deprecationReason(ctx, field, obj) + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + +var __InputValueImplementors = []string{"__InputValue"} + +func (ec *executionContext) ___InputValue(ctx context.Context, sel ast.SelectionSet, obj *introspection.InputValue) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, __InputValueImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("__InputValue") + case "name": + out.Values[i] = ec.___InputValue_name(ctx, field, obj) + 
if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "description": + out.Values[i] = ec.___InputValue_description(ctx, field, obj) + case "type": + out.Values[i] = ec.___InputValue_type(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "defaultValue": + out.Values[i] = ec.___InputValue_defaultValue(ctx, field, obj) + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + +var __SchemaImplementors = []string{"__Schema"} + +func (ec *executionContext) ___Schema(ctx context.Context, sel ast.SelectionSet, obj *introspection.Schema) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, __SchemaImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("__Schema") + case "description": + out.Values[i] = ec.___Schema_description(ctx, field, obj) + case "types": + out.Values[i] = ec.___Schema_types(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "queryType": + out.Values[i] = ec.___Schema_queryType(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "mutationType": + out.Values[i] = ec.___Schema_mutationType(ctx, field, obj) + case "subscriptionType": + out.Values[i] = ec.___Schema_subscriptionType(ctx, field, obj) + case "directives": + out.Values[i] = ec.___Schema_directives(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if 
out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + +var __TypeImplementors = []string{"__Type"} + +func (ec *executionContext) ___Type(ctx context.Context, sel ast.SelectionSet, obj *introspection.Type) graphql.Marshaler { + fields := graphql.CollectFields(ec.OperationContext, sel, __TypeImplementors) + + out := graphql.NewFieldSet(fields) + deferred := make(map[string]*graphql.FieldSet) + for i, field := range fields { + switch field.Name { + case "__typename": + out.Values[i] = graphql.MarshalString("__Type") + case "kind": + out.Values[i] = ec.___Type_kind(ctx, field, obj) + if out.Values[i] == graphql.Null { + out.Invalids++ + } + case "name": + out.Values[i] = ec.___Type_name(ctx, field, obj) + case "description": + out.Values[i] = ec.___Type_description(ctx, field, obj) + case "fields": + out.Values[i] = ec.___Type_fields(ctx, field, obj) + case "interfaces": + out.Values[i] = ec.___Type_interfaces(ctx, field, obj) + case "possibleTypes": + out.Values[i] = ec.___Type_possibleTypes(ctx, field, obj) + case "enumValues": + out.Values[i] = ec.___Type_enumValues(ctx, field, obj) + case "inputFields": + out.Values[i] = ec.___Type_inputFields(ctx, field, obj) + case "ofType": + out.Values[i] = ec.___Type_ofType(ctx, field, obj) + case "specifiedByURL": + out.Values[i] = ec.___Type_specifiedByURL(ctx, field, obj) + default: + panic("unknown field " + strconv.Quote(field.Name)) + } + } + out.Dispatch(ctx) + if out.Invalids > 0 { + return graphql.Null + } + + atomic.AddInt32(&ec.deferred, int32(len(deferred))) + + for label, dfs := range deferred { + ec.processDeferredGroup(graphql.DeferredGroup{ + Label: label, + Path: graphql.GetPath(ctx), + FieldSet: dfs, + Context: ctx, + }) + } + + return out +} + +// endregion 
**************************** object.gotpl **************************** - for label, dfs := range deferred { - ec.processDeferredGroup(graphql.DeferredGroup{ - Label: label, - Path: graphql.GetPath(ctx), - FieldSet: dfs, - Context: ctx, - }) - } +// region ***************************** type.gotpl ***************************** - return out +func (ec *executionContext) marshalNAPIToken2pentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx context.Context, sel ast.SelectionSet, v model.APIToken) graphql.Marshaler { + return ec._APIToken(ctx, sel, &v) } -var __InputValueImplementors = []string{"__InputValue"} - -func (ec *executionContext) ___InputValue(ctx context.Context, sel ast.SelectionSet, obj *introspection.InputValue) graphql.Marshaler { - fields := graphql.CollectFields(ec.OperationContext, sel, __InputValueImplementors) - - out := graphql.NewFieldSet(fields) - deferred := make(map[string]*graphql.FieldSet) - for i, field := range fields { - switch field.Name { - case "__typename": - out.Values[i] = graphql.MarshalString("__InputValue") - case "name": - out.Values[i] = ec.___InputValue_name(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "description": - out.Values[i] = ec.___InputValue_description(ctx, field, obj) - case "type": - out.Values[i] = ec.___InputValue_type(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ +func (ec *executionContext) marshalNAPIToken2ᚕᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPITokenᚄ(ctx context.Context, sel ast.SelectionSet, v []*model.APIToken) graphql.Marshaler { + ret := make(graphql.Array, len(v)) + var wg sync.WaitGroup + isLen1 := len(v) == 1 + if !isLen1 { + wg.Add(len(v)) + } + for i := range v { + i := i + fc := &graphql.FieldContext{ + Index: &i, + Result: &v[i], + } + ctx := graphql.WithFieldContext(ctx, fc) + f := func(i int) { + defer func() { + if r := recover(); r != nil { + ec.Error(ctx, ec.Recover(ctx, r)) + ret = nil + } + }() + if !isLen1 { + defer wg.Done() } - case "defaultValue": - 
out.Values[i] = ec.___InputValue_defaultValue(ctx, field, obj) - default: - panic("unknown field " + strconv.Quote(field.Name)) + ret[i] = ec.marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx, sel, v[i]) + } + if isLen1 { + f(i) + } else { + go f(i) } - } - out.Dispatch(ctx) - if out.Invalids > 0 { - return graphql.Null - } - atomic.AddInt32(&ec.deferred, int32(len(deferred))) + } + wg.Wait() - for label, dfs := range deferred { - ec.processDeferredGroup(graphql.DeferredGroup{ - Label: label, - Path: graphql.GetPath(ctx), - FieldSet: dfs, - Context: ctx, - }) + for _, e := range ret { + if e == graphql.Null { + return graphql.Null + } } - return out + return ret } -var __SchemaImplementors = []string{"__Schema"} - -func (ec *executionContext) ___Schema(ctx context.Context, sel ast.SelectionSet, obj *introspection.Schema) graphql.Marshaler { - fields := graphql.CollectFields(ec.OperationContext, sel, __SchemaImplementors) - - out := graphql.NewFieldSet(fields) - deferred := make(map[string]*graphql.FieldSet) - for i, field := range fields { - switch field.Name { - case "__typename": - out.Values[i] = graphql.MarshalString("__Schema") - case "description": - out.Values[i] = ec.___Schema_description(ctx, field, obj) - case "types": - out.Values[i] = ec.___Schema_types(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "queryType": - out.Values[i] = ec.___Schema_queryType(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "mutationType": - out.Values[i] = ec.___Schema_mutationType(ctx, field, obj) - case "subscriptionType": - out.Values[i] = ec.___Schema_subscriptionType(ctx, field, obj) - case "directives": - out.Values[i] = ec.___Schema_directives(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - default: - panic("unknown field " + strconv.Quote(field.Name)) +func (ec *executionContext) marshalNAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx context.Context, sel 
ast.SelectionSet, v *model.APIToken) graphql.Marshaler { + if v == nil { + if !graphql.HasFieldError(ctx, graphql.GetFieldContext(ctx)) { + ec.Errorf(ctx, "the requested element is null which the schema does not allow") } - } - out.Dispatch(ctx) - if out.Invalids > 0 { return graphql.Null } - - atomic.AddInt32(&ec.deferred, int32(len(deferred))) - - for label, dfs := range deferred { - ec.processDeferredGroup(graphql.DeferredGroup{ - Label: label, - Path: graphql.GetPath(ctx), - FieldSet: dfs, - Context: ctx, - }) - } - - return out + return ec._APIToken(ctx, sel, v) } -var __TypeImplementors = []string{"__Type"} - -func (ec *executionContext) ___Type(ctx context.Context, sel ast.SelectionSet, obj *introspection.Type) graphql.Marshaler { - fields := graphql.CollectFields(ec.OperationContext, sel, __TypeImplementors) +func (ec *executionContext) marshalNAPITokenWithSecret2pentagiᚋpkgᚋgraphᚋmodelᚐAPITokenWithSecret(ctx context.Context, sel ast.SelectionSet, v model.APITokenWithSecret) graphql.Marshaler { + return ec._APITokenWithSecret(ctx, sel, &v) +} - out := graphql.NewFieldSet(fields) - deferred := make(map[string]*graphql.FieldSet) - for i, field := range fields { - switch field.Name { - case "__typename": - out.Values[i] = graphql.MarshalString("__Type") - case "kind": - out.Values[i] = ec.___Type_kind(ctx, field, obj) - if out.Values[i] == graphql.Null { - out.Invalids++ - } - case "name": - out.Values[i] = ec.___Type_name(ctx, field, obj) - case "description": - out.Values[i] = ec.___Type_description(ctx, field, obj) - case "fields": - out.Values[i] = ec.___Type_fields(ctx, field, obj) - case "interfaces": - out.Values[i] = ec.___Type_interfaces(ctx, field, obj) - case "possibleTypes": - out.Values[i] = ec.___Type_possibleTypes(ctx, field, obj) - case "enumValues": - out.Values[i] = ec.___Type_enumValues(ctx, field, obj) - case "inputFields": - out.Values[i] = ec.___Type_inputFields(ctx, field, obj) - case "ofType": - out.Values[i] = ec.___Type_ofType(ctx, 
field, obj) - case "specifiedByURL": - out.Values[i] = ec.___Type_specifiedByURL(ctx, field, obj) - default: - panic("unknown field " + strconv.Quote(field.Name)) +func (ec *executionContext) marshalNAPITokenWithSecret2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPITokenWithSecret(ctx context.Context, sel ast.SelectionSet, v *model.APITokenWithSecret) graphql.Marshaler { + if v == nil { + if !graphql.HasFieldError(ctx, graphql.GetFieldContext(ctx)) { + ec.Errorf(ctx, "the requested element is null which the schema does not allow") } - } - out.Dispatch(ctx) - if out.Invalids > 0 { return graphql.Null } - - atomic.AddInt32(&ec.deferred, int32(len(deferred))) - - for label, dfs := range deferred { - ec.processDeferredGroup(graphql.DeferredGroup{ - Label: label, - Path: graphql.GetPath(ctx), - FieldSet: dfs, - Context: ctx, - }) - } - - return out + return ec._APITokenWithSecret(ctx, sel, v) } -// endregion **************************** object.gotpl **************************** - -// region ***************************** type.gotpl ***************************** - func (ec *executionContext) marshalNAgentConfig2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAgentConfig(ctx context.Context, sel ast.SelectionSet, v *model.AgentConfig) graphql.Marshaler { if v == nil { if !graphql.HasFieldError(ctx, graphql.GetFieldContext(ctx)) { @@ -30931,6 +33110,11 @@ func (ec *executionContext) marshalNBoolean2bool(ctx context.Context, sel ast.Se return res } +func (ec *executionContext) unmarshalNCreateAPITokenInput2pentagiᚋpkgᚋgraphᚋmodelᚐCreateAPITokenInput(ctx context.Context, v interface{}) (model.CreateAPITokenInput, error) { + res, err := ec.unmarshalInputCreateAPITokenInput(ctx, v) + return res, graphql.ErrorOnPath(ctx, err) +} + func (ec *executionContext) marshalNDailyFlowsStats2ᚕᚖpentagiᚋpkgᚋgraphᚋmodelᚐDailyFlowsStatsᚄ(ctx context.Context, sel ast.SelectionSet, v []*model.DailyFlowsStats) graphql.Marshaler { ret := make(graphql.Array, len(v)) var wg sync.WaitGroup @@ -32046,6 +34230,16 @@ func (ec 
*executionContext) marshalNTime2timeᚐTime(ctx context.Context, sel as return res } +func (ec *executionContext) unmarshalNTokenStatus2pentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx context.Context, v interface{}) (model.TokenStatus, error) { + var res model.TokenStatus + err := res.UnmarshalGQL(v) + return res, graphql.ErrorOnPath(ctx, err) +} + +func (ec *executionContext) marshalNTokenStatus2pentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx context.Context, sel ast.SelectionSet, v model.TokenStatus) graphql.Marshaler { + return v +} + func (ec *executionContext) marshalNToolcallsStats2pentagiᚋpkgᚋgraphᚋmodelᚐToolcallsStats(ctx context.Context, sel ast.SelectionSet, v model.ToolcallsStats) graphql.Marshaler { return ec._ToolcallsStats(ctx, sel, &v) } @@ -32070,6 +34264,11 @@ func (ec *executionContext) marshalNToolsPrompts2ᚖpentagiᚋpkgᚋgraphᚋmode return ec._ToolsPrompts(ctx, sel, v) } +func (ec *executionContext) unmarshalNUpdateAPITokenInput2pentagiᚋpkgᚋgraphᚋmodelᚐUpdateAPITokenInput(ctx context.Context, v interface{}) (model.UpdateAPITokenInput, error) { + res, err := ec.unmarshalInputUpdateAPITokenInput(ctx, v) + return res, graphql.ErrorOnPath(ctx, err) +} + func (ec *executionContext) marshalNUsageStats2pentagiᚋpkgᚋgraphᚋmodelᚐUsageStats(ctx context.Context, sel ast.SelectionSet, v model.UsageStats) graphql.Marshaler { return ec._UsageStats(ctx, sel, &v) } @@ -32385,6 +34584,13 @@ func (ec *executionContext) marshalN__TypeKind2string(ctx context.Context, sel a return res } +func (ec *executionContext) marshalOAPIToken2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐAPIToken(ctx context.Context, sel ast.SelectionSet, v *model.APIToken) graphql.Marshaler { + if v == nil { + return graphql.Null + } + return ec._APIToken(ctx, sel, v) +} + func (ec *executionContext) marshalOAgentLog2ᚕᚖpentagiᚋpkgᚋgraphᚋmodelᚐAgentLogᚄ(ctx context.Context, sel ast.SelectionSet, v []*model.AgentLog) graphql.Marshaler { if v == nil { return graphql.Null @@ -33171,6 +35377,22 @@ func (ec *executionContext) 
marshalOTime2ᚖtimeᚐTime(ctx context.Context, sel return res } +func (ec *executionContext) unmarshalOTokenStatus2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx context.Context, v interface{}) (*model.TokenStatus, error) { + if v == nil { + return nil, nil + } + var res = new(model.TokenStatus) + err := res.UnmarshalGQL(v) + return res, graphql.ErrorOnPath(ctx, err) +} + +func (ec *executionContext) marshalOTokenStatus2ᚖpentagiᚋpkgᚋgraphᚋmodelᚐTokenStatus(ctx context.Context, sel ast.SelectionSet, v *model.TokenStatus) graphql.Marshaler { + if v == nil { + return graphql.Null + } + return v +} + func (ec *executionContext) marshalOUserPrompt2ᚕᚖpentagiᚋpkgᚋgraphᚋmodelᚐUserPromptᚄ(ctx context.Context, sel ast.SelectionSet, v []*model.UserPrompt) graphql.Marshaler { if v == nil { return graphql.Null diff --git a/backend/pkg/graph/model/models_gen.go b/backend/pkg/graph/model/models_gen.go index 2119167c..36b88689 100644 --- a/backend/pkg/graph/model/models_gen.go +++ b/backend/pkg/graph/model/models_gen.go @@ -9,6 +9,31 @@ import ( "time" ) +type APIToken struct { + ID int64 `json:"id"` + TokenID string `json:"tokenId"` + UserID int64 `json:"userId"` + RoleID int64 `json:"roleId"` + Name *string `json:"name,omitempty"` + TTL int `json:"ttl"` + Status TokenStatus `json:"status"` + CreatedAt time.Time `json:"createdAt"` + UpdatedAt time.Time `json:"updatedAt"` +} + +type APITokenWithSecret struct { + ID int64 `json:"id"` + TokenID string `json:"tokenId"` + UserID int64 `json:"userId"` + RoleID int64 `json:"roleId"` + Name *string `json:"name,omitempty"` + TTL int `json:"ttl"` + Status TokenStatus `json:"status"` + CreatedAt time.Time `json:"createdAt"` + UpdatedAt time.Time `json:"updatedAt"` + Token string `json:"token"` +} + type AgentConfig struct { Model string `json:"model"` MaxTokens *int `json:"maxTokens,omitempty"` @@ -112,6 +137,11 @@ type AssistantLog struct { CreatedAt time.Time `json:"createdAt"` } +type CreateAPITokenInput struct { + Name *string 
`json:"name,omitempty"` + TTL int `json:"ttl"` +} + type DailyFlowsStats struct { Date time.Time `json:"date"` Stats *FlowsStats `json:"stats"` @@ -428,6 +458,11 @@ type ToolsPrompts struct { DetectToolCallIDPattern *DefaultPrompt `json:"detectToolCallIDPattern"` } +type UpdateAPITokenInput struct { + Name *string `json:"name,omitempty"` + Status *TokenStatus `json:"status,omitempty"` +} + type UsageStats struct { TotalUsageIn int `json:"totalUsageIn"` TotalUsageOut int `json:"totalUsageOut"` @@ -1113,6 +1148,49 @@ func (e TerminalType) MarshalGQL(w io.Writer) { fmt.Fprint(w, strconv.Quote(e.String())) } +type TokenStatus string + +const ( + TokenStatusActive TokenStatus = "active" + TokenStatusRevoked TokenStatus = "revoked" + TokenStatusExpired TokenStatus = "expired" +) + +var AllTokenStatus = []TokenStatus{ + TokenStatusActive, + TokenStatusRevoked, + TokenStatusExpired, +} + +func (e TokenStatus) IsValid() bool { + switch e { + case TokenStatusActive, TokenStatusRevoked, TokenStatusExpired: + return true + } + return false +} + +func (e TokenStatus) String() string { + return string(e) +} + +func (e *TokenStatus) UnmarshalGQL(v interface{}) error { + str, ok := v.(string) + if !ok { + return fmt.Errorf("enums must be strings") + } + + *e = TokenStatus(str) + if !e.IsValid() { + return fmt.Errorf("%s is not a valid TokenStatus", str) + } + return nil +} + +func (e TokenStatus) MarshalGQL(w io.Writer) { + fmt.Fprint(w, strconv.Quote(e.String())) +} + type UsageStatsPeriod string const ( diff --git a/backend/pkg/graph/resolver.go b/backend/pkg/graph/resolver.go index 63ebefc6..f3f7bf34 100644 --- a/backend/pkg/graph/resolver.go +++ b/backend/pkg/graph/resolver.go @@ -6,6 +6,7 @@ import ( "pentagi/pkg/database" "pentagi/pkg/graph/subscriptions" "pentagi/pkg/providers" + "pentagi/pkg/server/auth" "pentagi/pkg/templates" "github.com/sirupsen/logrus" @@ -19,6 +20,7 @@ type Resolver struct { DB database.Querier Config *config.Config Logger *logrus.Entry + TokenCache 
*auth.TokenCache DefaultPrompter templates.Prompter ProvidersCtrl providers.ProviderController Controller controller.FlowController diff --git a/backend/pkg/graph/schema.graphqls b/backend/pkg/graph/schema.graphqls index f94357c4..8218ad17 100644 --- a/backend/pkg/graph/schema.graphqls +++ b/backend/pkg/graph/schema.graphqls @@ -303,6 +303,49 @@ type Screenshot { createdAt: Time! } +# =================== API Tokens types =================== + +enum TokenStatus { + active + revoked + expired +} + +type APIToken { + id: ID! + tokenId: String! + userId: ID! + roleId: ID! + name: String + ttl: Int! + status: TokenStatus! + createdAt: Time! + updatedAt: Time! +} + +type APITokenWithSecret { + id: ID! + tokenId: String! + userId: ID! + roleId: ID! + name: String + ttl: Int! + status: TokenStatus! + createdAt: Time! + updatedAt: Time! + token: String! +} + +input CreateAPITokenInput { + name: String + ttl: Int! +} + +input UpdateAPITokenInput { + name: String + status: TokenStatus +} + # ==================== Prompt Management Types ==================== # Validation error types for user-provided prompts @@ -749,6 +792,10 @@ type Query { settings: Settings! settingsProviders: ProvidersConfig! settingsPrompts: PromptsConfig! + + # API Tokens management + apiToken(tokenId: String!): APIToken + apiTokens: [APIToken!]! } type Mutation { @@ -777,6 +824,11 @@ type Mutation { createPrompt(type: PromptType!, template: String!): UserPrompt! updatePrompt(promptId: ID!, template: String!): UserPrompt! deletePrompt(promptId: ID!): ResultType! + + # API Tokens management + createAPIToken(input: CreateAPITokenInput!): APITokenWithSecret! + updateAPIToken(tokenId: String!, input: UpdateAPITokenInput!): APIToken! + deleteAPIToken(tokenId: String!): Boolean! } type Subscription { @@ -807,4 +859,9 @@ type Subscription { providerCreated: ProviderConfig! providerUpdated: ProviderConfig! providerDeleted: ProviderConfig! + + # API token events + apiTokenCreated: APIToken! 
+ apiTokenUpdated: APIToken! + apiTokenDeleted: APIToken! } diff --git a/backend/pkg/graph/schema.resolvers.go b/backend/pkg/graph/schema.resolvers.go index 21a21b86..de789296 100644 --- a/backend/pkg/graph/schema.resolvers.go +++ b/backend/pkg/graph/schema.resolvers.go @@ -6,6 +6,7 @@ package graph import ( "context" + "database/sql" "encoding/json" "errors" "fmt" @@ -19,6 +20,7 @@ import ( "pentagi/pkg/providers/openai" "pentagi/pkg/providers/pconfig" "pentagi/pkg/providers/provider" + "pentagi/pkg/server/auth" "pentagi/pkg/templates" "pentagi/pkg/templates/validator" "time" @@ -622,6 +624,190 @@ func (r *mutationResolver) DeletePrompt(ctx context.Context, promptID int64) (mo return model.ResultTypeSuccess, nil } +// CreateAPIToken is the resolver for the createAPIToken field. +func (r *mutationResolver) CreateAPIToken(ctx context.Context, input model.CreateAPITokenInput) (*model.APITokenWithSecret, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.create") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) 
+ if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to create API tokens") + } + + if r.Config.CookieSigningSalt == "" || r.Config.CookieSigningSalt == "salt" { + return nil, fmt.Errorf("token creation is disabled with default salt") + } + + if input.TTL < 60 || input.TTL > 94608000 { + return nil, fmt.Errorf("invalid TTL: must be between 60 and 94608000 seconds") + } + + r.Logger.WithFields(logrus.Fields{ + "uid": uid, + "name": input.Name, + "ttl": input.TTL, + }).Debug("create api token") + + user, err := r.DB.GetUser(ctx, uid) + if err != nil { + return nil, err + } + + tokenID, err := auth.GenerateTokenID() + if err != nil { + return nil, fmt.Errorf("failed to generate token ID: %w", err) + } + + claims := auth.MakeAPITokenClaims(tokenID, user.Hash, uint64(uid), uint64(user.RoleID), uint64(input.TTL)) + + tokenString, err := auth.MakeAPIToken(r.Config.CookieSigningSalt, claims) + if err != nil { + return nil, fmt.Errorf("failed to create token: %w", err) + } + + var nameStr sql.NullString + if input.Name != nil && *input.Name != "" { + nameStr = sql.NullString{String: *input.Name, Valid: true} + } + + apiToken, err := r.DB.CreateAPIToken(ctx, database.CreateAPITokenParams{ + TokenID: tokenID, + UserID: uid, + RoleID: user.RoleID, + Name: nameStr, + Ttl: int64(input.TTL), + Status: database.TokenStatusActive, + }) + if err != nil { + return nil, fmt.Errorf("failed to create token in database: %w", err) + } + + tokenWithSecret := database.APITokenWithSecret{ + ApiToken: apiToken, + Token: tokenString, + } + + r.TokenCache.Invalidate(tokenID) + r.TokenCache.InvalidateUser(uint64(uid)) + + r.Subscriptions.NewFlowPublisher(uid, 0).APITokenCreated(ctx, tokenWithSecret) + + return converter.ConvertAPITokenWithSecret(tokenWithSecret), nil +} + +// UpdateAPIToken is the resolver for the updateAPIToken field. 
+func (r *mutationResolver) UpdateAPIToken(ctx context.Context, tokenID string, input model.UpdateAPITokenInput) (*model.APIToken, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.edit") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to update API tokens") + } + + r.Logger.WithFields(logrus.Fields{ + "uid": uid, + "tokenID": tokenID, + }).Debug("update api token") + + token, err := r.DB.GetUserAPITokenByTokenID(ctx, database.GetUserAPITokenByTokenIDParams{ + TokenID: tokenID, + UserID: uid, + }) + if err != nil { + return nil, fmt.Errorf("token not found: %w", err) + } + + var nameStr sql.NullString + if input.Name != nil { + if *input.Name != "" { + nameStr = sql.NullString{String: *input.Name, Valid: true} + } + } else { + nameStr = token.Name + } + + status := token.Status + if input.Status != nil { + switch s := *input.Status; s { + case model.TokenStatusActive: + status = database.TokenStatusActive + case model.TokenStatusRevoked: + status = database.TokenStatusRevoked + default: + return nil, fmt.Errorf("invalid token status: %s", s.String()) + } + } + + updatedToken, err := r.DB.UpdateUserAPIToken(ctx, database.UpdateUserAPITokenParams{ + ID: token.ID, + UserID: uid, + Name: nameStr, + Status: status, + }) + if err != nil { + return nil, fmt.Errorf("failed to update token: %w", err) + } + + if input.Status != nil { + r.TokenCache.Invalidate(tokenID) + r.TokenCache.InvalidateUser(uint64(uid)) + } + + r.Subscriptions.NewFlowPublisher(uid, 0).APITokenUpdated(ctx, updatedToken) + + return converter.ConvertAPIToken(updatedToken), nil +} + +// DeleteAPIToken is the resolver for the deleteAPIToken field. 
+func (r *mutationResolver) DeleteAPIToken(ctx context.Context, tokenID string) (bool, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.delete") + if err != nil { + return false, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return false, err + } + + if !isUserSession { + return false, fmt.Errorf("unauthorized: non-user session is not allowed to delete API tokens") + } + + r.Logger.WithFields(logrus.Fields{ + "uid": uid, + "tokenID": tokenID, + }).Debug("delete api token") + + token, err := r.DB.DeleteUserAPITokenByTokenID(ctx, database.DeleteUserAPITokenByTokenIDParams{ + TokenID: tokenID, + UserID: uid, + }) + if err != nil { + return false, fmt.Errorf("failed to delete token: %w", err) + } + + r.TokenCache.Invalidate(tokenID) + r.TokenCache.InvalidateUser(uint64(uid)) + + r.Subscriptions.NewFlowPublisher(uid, 0).APITokenDeleted(ctx, token) + + return true, nil +} + // Providers is the resolver for the providers field. func (r *queryResolver) Providers(ctx context.Context) ([]*model.Provider, error) { uid, _, err := validatePermission(ctx, "providers.view") @@ -1505,6 +1691,78 @@ func (r *queryResolver) SettingsPrompts(ctx context.Context) (*model.PromptsConf return &promptsConfig, nil } +// APIToken is the resolver for the apiToken field. +func (r *queryResolver) APIToken(ctx context.Context, tokenID string) (*model.APIToken, error) { + uid, admin, err := validatePermission(ctx, "settings.tokens.view") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) 
+ if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to get API tokens") + } + + r.Logger.WithFields(logrus.Fields{ + "uid": uid, + "tokenID": tokenID, + }).Debug("get api token") + + var token database.ApiToken + + if admin { + token, err = r.DB.GetAPITokenByTokenID(ctx, tokenID) + } else { + token, err = r.DB.GetUserAPITokenByTokenID(ctx, database.GetUserAPITokenByTokenIDParams{ + TokenID: tokenID, + UserID: uid, + }) + } + if err != nil { + return nil, fmt.Errorf("token not found: %w", err) + } + + return converter.ConvertAPIToken(token), nil +} + +// APITokens is the resolver for the apiTokens field. +func (r *queryResolver) APITokens(ctx context.Context) ([]*model.APIToken, error) { + uid, admin, err := validatePermission(ctx, "settings.tokens.view") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to get API tokens") + } + + r.Logger.WithFields(logrus.Fields{ + "uid": uid, + }).Debug("get api tokens") + + var tokens []database.ApiToken + + if admin { + tokens, err = r.DB.GetAPITokens(ctx) + } else { + tokens, err = r.DB.GetUserAPITokens(ctx, uid) + } + if err != nil { + return nil, fmt.Errorf("failed to get tokens: %w", err) + } + + return converter.ConvertAPITokens(tokens), nil +} + // FlowCreated is the resolver for the flowCreated field. func (r *subscriptionResolver) FlowCreated(ctx context.Context) (<-chan *model.Flow, error) { uid, admin, err := validatePermission(ctx, "flows.subscribe") @@ -1720,6 +1978,63 @@ func (r *subscriptionResolver) ProviderDeleted(ctx context.Context) (<-chan *mod return r.Subscriptions.NewFlowSubscriber(uid, 0).ProviderDeleted(ctx) } +// APITokenCreated is the resolver for the apiTokenCreated field. 
+func (r *subscriptionResolver) APITokenCreated(ctx context.Context) (<-chan *model.APIToken, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.subscribe") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to subscribe to API tokens") + } + + return r.Subscriptions.NewFlowSubscriber(uid, 0).APITokenCreated(ctx) +} + +// APITokenUpdated is the resolver for the apiTokenUpdated field. +func (r *subscriptionResolver) APITokenUpdated(ctx context.Context) (<-chan *model.APIToken, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.subscribe") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to subscribe to API tokens") + } + + return r.Subscriptions.NewFlowSubscriber(uid, 0).APITokenUpdated(ctx) +} + +// APITokenDeleted is the resolver for the apiTokenDeleted field. +func (r *subscriptionResolver) APITokenDeleted(ctx context.Context) (<-chan *model.APIToken, error) { + uid, _, err := validatePermission(ctx, "settings.tokens.subscribe") + if err != nil { + return nil, err + } + + isUserSession, err := validateUserType(ctx, userSessionTypes...) + if err != nil { + return nil, err + } + + if !isUserSession { + return nil, fmt.Errorf("unauthorized: non-user session is not allowed to subscribe to API tokens") + } + + return r.Subscriptions.NewFlowSubscriber(uid, 0).APITokenDeleted(ctx) +} + // Mutation returns MutationResolver implementation. 
func (r *Resolver) Mutation() MutationResolver { return &mutationResolver{r} } diff --git a/backend/pkg/graph/subscriptions/controller.go b/backend/pkg/graph/subscriptions/controller.go index c3c7f617..61113e2f 100644 --- a/backend/pkg/graph/subscriptions/controller.go +++ b/backend/pkg/graph/subscriptions/controller.go @@ -51,6 +51,9 @@ type FlowSubscriber interface { ProviderCreated(ctx context.Context) (<-chan *model.ProviderConfig, error) ProviderUpdated(ctx context.Context) (<-chan *model.ProviderConfig, error) ProviderDeleted(ctx context.Context) (<-chan *model.ProviderConfig, error) + APITokenCreated(ctx context.Context) (<-chan *model.APIToken, error) + APITokenUpdated(ctx context.Context) (<-chan *model.APIToken, error) + APITokenDeleted(ctx context.Context) (<-chan *model.APIToken, error) FlowContext } @@ -75,6 +78,9 @@ type FlowPublisher interface { ProviderCreated(ctx context.Context, provider database.Provider, cfg *pconfig.ProviderConfig) ProviderUpdated(ctx context.Context, provider database.Provider, cfg *pconfig.ProviderConfig) ProviderDeleted(ctx context.Context, provider database.Provider, cfg *pconfig.ProviderConfig) + APITokenCreated(ctx context.Context, apiToken database.APITokenWithSecret) + APITokenUpdated(ctx context.Context, apiToken database.ApiToken) + APITokenDeleted(ctx context.Context, apiToken database.ApiToken) FlowContext } @@ -102,6 +108,9 @@ type controller struct { providerCreated Channel[*model.ProviderConfig] providerUpdated Channel[*model.ProviderConfig] providerDeleted Channel[*model.ProviderConfig] + apiTokenCreated Channel[*model.APIToken] + apiTokenUpdated Channel[*model.APIToken] + apiTokenDeleted Channel[*model.APIToken] } func NewSubscriptionsController() SubscriptionsController { @@ -129,6 +138,9 @@ func NewSubscriptionsController() SubscriptionsController { providerCreated: NewChannel[*model.ProviderConfig](), providerUpdated: NewChannel[*model.ProviderConfig](), providerDeleted: NewChannel[*model.ProviderConfig](), 
+ apiTokenCreated: NewChannel[*model.APIToken](), + apiTokenUpdated: NewChannel[*model.APIToken](), + apiTokenDeleted: NewChannel[*model.APIToken](), } } diff --git a/backend/pkg/graph/subscriptions/publisher.go b/backend/pkg/graph/subscriptions/publisher.go index 07a00a88..2da8f10b 100644 --- a/backend/pkg/graph/subscriptions/publisher.go +++ b/backend/pkg/graph/subscriptions/publisher.go @@ -115,3 +115,15 @@ func (p *flowPublisher) ProviderUpdated(ctx context.Context, provider database.P func (p *flowPublisher) ProviderDeleted(ctx context.Context, provider database.Provider, cfg *pconfig.ProviderConfig) { p.ctrl.providerDeleted.Publish(ctx, p.userID, converter.ConvertProvider(provider, cfg)) } + +func (p *flowPublisher) APITokenCreated(ctx context.Context, apiToken database.APITokenWithSecret) { + p.ctrl.apiTokenCreated.Publish(ctx, p.userID, converter.ConvertAPITokenRemoveSecret(apiToken)) +} + +func (p *flowPublisher) APITokenUpdated(ctx context.Context, apiToken database.ApiToken) { + p.ctrl.apiTokenUpdated.Publish(ctx, p.userID, converter.ConvertAPIToken(apiToken)) +} + +func (p *flowPublisher) APITokenDeleted(ctx context.Context, apiToken database.ApiToken) { + p.ctrl.apiTokenDeleted.Publish(ctx, p.userID, converter.ConvertAPIToken(apiToken)) +} diff --git a/backend/pkg/graph/subscriptions/subscriber.go b/backend/pkg/graph/subscriptions/subscriber.go index 12da9225..b406fff9 100644 --- a/backend/pkg/graph/subscriptions/subscriber.go +++ b/backend/pkg/graph/subscriptions/subscriber.go @@ -119,3 +119,15 @@ func (s *flowSubscriber) ProviderUpdated(ctx context.Context) (<-chan *model.Pro func (s *flowSubscriber) ProviderDeleted(ctx context.Context) (<-chan *model.ProviderConfig, error) { return s.ctrl.providerDeleted.Subscribe(ctx, s.userID), nil } + +func (s *flowSubscriber) APITokenCreated(ctx context.Context) (<-chan *model.APIToken, error) { + return s.ctrl.apiTokenCreated.Subscribe(ctx, s.userID), nil +} + +func (s *flowSubscriber) APITokenUpdated(ctx 
context.Context) (<-chan *model.APIToken, error) { + return s.ctrl.apiTokenUpdated.Subscribe(ctx, s.userID), nil +} + +func (s *flowSubscriber) APITokenDeleted(ctx context.Context) (<-chan *model.APIToken, error) { + return s.ctrl.apiTokenDeleted.Subscribe(ctx, s.userID), nil +} diff --git a/backend/pkg/server/auth/api_token_cache.go b/backend/pkg/server/auth/api_token_cache.go new file mode 100644 index 00000000..9f3d196f --- /dev/null +++ b/backend/pkg/server/auth/api_token_cache.go @@ -0,0 +1,121 @@ +package auth + +import ( + "sync" + "time" + + "pentagi/pkg/server/models" + + "github.com/jinzhu/gorm" +) + +// tokenCacheEntry represents a cached token status entry +type tokenCacheEntry struct { + status models.TokenStatus + privileges []string + notFound bool // negative caching + expiresAt time.Time +} + +// TokenCache provides caching for token status lookups +type TokenCache struct { + cache sync.Map + ttl time.Duration + db *gorm.DB +} + +// NewTokenCache creates a new token cache instance +func NewTokenCache(db *gorm.DB) *TokenCache { + return &TokenCache{ + ttl: 5 * time.Minute, + db: db, + } +} + +// SetTTL sets the TTL for the token cache +func (tc *TokenCache) SetTTL(ttl time.Duration) { + tc.ttl = ttl +} + +// GetStatus retrieves token status and privileges from cache or database +func (tc *TokenCache) GetStatus(tokenID string) (models.TokenStatus, []string, error) { + // check cache first + if entry, ok := tc.cache.Load(tokenID); ok { + cached := entry.(tokenCacheEntry) + if time.Now().Before(cached.expiresAt) { + // return cached "not found" error + if cached.notFound { + return "", nil, gorm.ErrRecordNotFound + } + return cached.status, cached.privileges, nil + } + // cache entry expired, remove it + tc.cache.Delete(tokenID) + } + + // load from database with role privileges + var token models.APIToken + if err := tc.db.Where("token_id = ? 
AND deleted_at IS NULL", tokenID).First(&token).Error; err != nil { + if gorm.IsRecordNotFoundError(err) { + // cache negative result (token not found) + tc.cache.Store(tokenID, tokenCacheEntry{ + notFound: true, + expiresAt: time.Now().Add(tc.ttl), + }) + return "", nil, gorm.ErrRecordNotFound + } + return "", nil, err + } + + // load privileges for the token's role + var privileges []models.Privilege + if err := tc.db.Where("role_id = ?", token.RoleID).Find(&privileges).Error; err != nil { + return "", nil, err + } + + // extract privilege names + privNames := make([]string, len(privileges)) + for i, priv := range privileges { + privNames[i] = priv.Name + } + + // always add automation privilege for API tokens + privNames = append(privNames, PrivilegeAutomation) + + // update cache with positive result + tc.cache.Store(tokenID, tokenCacheEntry{ + status: token.Status, + privileges: privNames, + notFound: false, + expiresAt: time.Now().Add(tc.ttl), + }) + + return token.Status, privNames, nil +} + +// Invalidate removes a specific token from cache +func (tc *TokenCache) Invalidate(tokenID string) { + tc.cache.Delete(tokenID) +} + +// InvalidateUser removes all tokens for a specific user from cache +func (tc *TokenCache) InvalidateUser(userID uint64) { + // load all tokens for this user + var tokens []models.APIToken + if err := tc.db.Where("user_id = ? 
AND deleted_at IS NULL", userID).Find(&tokens).Error; err != nil { + return + } + + // invalidate each token in cache + for _, token := range tokens { + tc.cache.Delete(token.TokenID) + } +} + +// InvalidateAll clears the entire cache +func (tc *TokenCache) InvalidateAll() { + tc.cache.Range(func(key, value any) bool { + tc.cache.Delete(key) + return true + }) +} diff --git a/backend/pkg/server/auth/api_token_cache_test.go b/backend/pkg/server/auth/api_token_cache_test.go new file mode 100644 index 00000000..21be8ae7 --- /dev/null +++ b/backend/pkg/server/auth/api_token_cache_test.go @@ -0,0 +1,322 @@ +package auth_test + +import ( + "testing" + "time" + + "pentagi/pkg/server/auth" + "pentagi/pkg/server/models" + + "github.com/jinzhu/gorm" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func TestTokenCache_GetStatus(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + cache := auth.NewTokenCache(db) + tokenID := "testtoken1" + + // Insert test token + token := models.APIToken{ + TokenID: tokenID, + UserID: 1, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&token).Error + require.NoError(t, err) + + // Test: Get status (should hit database) + status, privileges, err := cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + assert.Contains(t, privileges, auth.PrivilegeAutomation) + assert.Contains(t, privileges, "flows.create") + assert.Contains(t, privileges, "settings.tokens.view") + + // Test: Get status again (should hit cache) + status, privileges, err = cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + assert.Contains(t, privileges, auth.PrivilegeAutomation) + + // Test: Non-existent token + _, _, err = cache.GetStatus("nonexistent") + assert.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) +} + 
+func TestTokenCache_Invalidate(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + cache := auth.NewTokenCache(db) + tokenID := "testtoken2" + + // Insert test token + token := models.APIToken{ + TokenID: tokenID, + UserID: 1, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&token).Error + require.NoError(t, err) + + // Get status to populate cache + status, privileges, err := cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + + // Update token in database + db.Model(&token).Update("status", models.TokenStatusRevoked) + + // Status should still be active (from cache) + status, privileges, err = cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + + // Invalidate cache + cache.Invalidate(tokenID) + + // Status should now be revoked (from database) + status, privileges, err = cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusRevoked, status) + assert.NotEmpty(t, privileges) +} + +func TestTokenCache_InvalidateUser(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + cache := auth.NewTokenCache(db) + userID := uint64(1) + + // Insert multiple tokens for user + tokens := []models.APIToken{ + { + TokenID: "token1", + UserID: userID, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + }, + { + TokenID: "token2", + UserID: userID, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + }, + } + + for _, token := range tokens { + err := db.Create(&token).Error + require.NoError(t, err) + } + + // Populate cache + for _, token := range tokens { + _, _, err := cache.GetStatus(token.TokenID) + require.NoError(t, err) + } + + // Update tokens in database + db.Model(&models.APIToken{}).Where("user_id = ?", userID).Update("status", models.TokenStatusRevoked) + + // Invalidate all user tokens + 
cache.InvalidateUser(userID) + + // All tokens should now show revoked status + for _, token := range tokens { + status, privileges, err := cache.GetStatus(token.TokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusRevoked, status) + assert.NotEmpty(t, privileges) + } +} + +func TestTokenCache_Expiration(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + // Create cache with very short TTL for testing + cache := auth.NewTokenCache(db) + cache.SetTTL(300 * time.Millisecond) + + tokenID := "testtoken3" + + // Insert test token + token := models.APIToken{ + TokenID: tokenID, + UserID: 1, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&token).Error + require.NoError(t, err) + + // Get status to populate cache + status, privileges, err := cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + + // Update token in database + db.Model(&token).Update("status", models.TokenStatusRevoked) + + // Wait for cache to expire + time.Sleep(500 * time.Millisecond) + + // Status should now be revoked (cache expired, reading from DB) + status, privileges, err = cache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusRevoked, status) + assert.NotEmpty(t, privileges) +} + +func TestTokenCache_PrivilegesByRole(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + cache := auth.NewTokenCache(db) + + // Test Admin token (role_id = 1) + adminTokenID := "admin_token" + adminToken := models.APIToken{ + TokenID: adminTokenID, + UserID: 1, + RoleID: 1, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&adminToken).Error + require.NoError(t, err) + + status, adminPrivs, err := cache.GetStatus(adminTokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, adminPrivs) + assert.Contains(t, adminPrivs, auth.PrivilegeAutomation) + 
assert.Contains(t, adminPrivs, "users.create")
+	assert.Contains(t, adminPrivs, "users.delete")
+	assert.Contains(t, adminPrivs, "settings.tokens.admin")
+
+	// Test User token (role_id = 2)
+	userTokenID := "user_token"
+	userToken := models.APIToken{
+		TokenID: userTokenID,
+		UserID:  2,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&userToken).Error
+	require.NoError(t, err)
+
+	status, userPrivs, err := cache.GetStatus(userTokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusActive, status)
+	assert.NotEmpty(t, userPrivs)
+	assert.Contains(t, userPrivs, auth.PrivilegeAutomation)
+	assert.Contains(t, userPrivs, "flows.create")
+	assert.Contains(t, userPrivs, "settings.tokens.view")
+
+	// User should NOT have admin privileges
+	assert.NotContains(t, userPrivs, "users.create")
+	assert.NotContains(t, userPrivs, "users.delete")
+	assert.NotContains(t, userPrivs, "settings.tokens.admin")
+
+	// Admin should have more privileges than User
+	assert.Greater(t, len(adminPrivs), len(userPrivs))
+}
+
+func TestTokenCache_NegativeCaching(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	cache := auth.NewTokenCache(db)
+	nonExistentTokenID := "nonexistent"
+
+	// First call - should hit database and cache the "not found"
+	_, _, err := cache.GetStatus(nonExistentTokenID)
+	require.Error(t, err)
+	assert.Equal(t, gorm.ErrRecordNotFound, err)
+
+	// Second call - should return from cache without hitting DB
+	// We can verify this by checking error is still the same
+	_, _, err = cache.GetStatus(nonExistentTokenID)
+	require.Error(t, err)
+	assert.Equal(t, gorm.ErrRecordNotFound, err, "Should return cached not found error")
+
+	// Now create the token in DB
+	token := models.APIToken{
+		TokenID: nonExistentTokenID,
+		UserID:  1,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&token).Error
+	require.NoError(t, err)
+
+	// Should still return cached "not found" until invalidated
+	_, _, err = cache.GetStatus(nonExistentTokenID)
+	require.Error(t, err)
+	assert.Equal(t, gorm.ErrRecordNotFound, err, "Should still return cached not found")
+
+	// Invalidate cache
+	cache.Invalidate(nonExistentTokenID)
+
+	// Now should find the token
+	status, privileges, err := cache.GetStatus(nonExistentTokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusActive, status)
+	assert.NotEmpty(t, privileges)
+}
+
+func TestTokenCache_NegativeCachingExpiration(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	cache := auth.NewTokenCache(db)
+	cache.SetTTL(300 * time.Millisecond)
+
+	nonExistentTokenID := "temp_nonexistent"
+
+	// First call - cache the "not found"
+	_, _, err := cache.GetStatus(nonExistentTokenID)
+	require.Error(t, err)
+	assert.Equal(t, gorm.ErrRecordNotFound, err)
+
+	// Create token in DB
+	token := models.APIToken{
+		TokenID: nonExistentTokenID,
+		UserID:  1,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&token).Error
+	require.NoError(t, err)
+
+	// Wait for cache to expire
+	time.Sleep(500 * time.Millisecond)
+
+	// Now should find the token (cache expired)
+	status, privileges, err := cache.GetStatus(nonExistentTokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusActive, status)
+	assert.NotEmpty(t, privileges)
+}
diff --git a/backend/pkg/server/auth/api_token_id.go b/backend/pkg/server/auth/api_token_id.go
new file mode 100644
index 00000000..777a7c36
--- /dev/null
+++ b/backend/pkg/server/auth/api_token_id.go
@@ -0,0 +1,28 @@
+package auth
+
+import (
+	"crypto/rand"
+	"fmt"
+	"math/big"
+)
+
+const (
+	Base62Chars   = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
+	TokenIDLength = 10
+)
+
+// GenerateTokenID generates a random base62 string of specified length
+func GenerateTokenID() (string, error) {
+	b := make([]byte, TokenIDLength)
+	maxIdx := big.NewInt(int64(len(Base62Chars)))
+
+	for i := range b {
+		idx, err := rand.Int(rand.Reader, maxIdx)
+		if err != nil {
+			return "", fmt.Errorf("error generating token ID: %w", err)
+		}
+		b[i] = Base62Chars[idx.Int64()]
+	}
+
+	return string(b), nil
+}
diff --git a/backend/pkg/server/auth/api_token_id_test.go b/backend/pkg/server/auth/api_token_id_test.go
new file mode 100644
index 00000000..2f63f0db
--- /dev/null
+++ b/backend/pkg/server/auth/api_token_id_test.go
@@ -0,0 +1,49 @@
+package auth_test
+
+import (
+	"testing"
+
+	"pentagi/pkg/server/auth"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+func TestGenerateTokenID(t *testing.T) {
+	// Test basic generation
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	assert.Len(t, tokenID, auth.TokenIDLength, "Token ID should have correct length")
+
+	// Test that all characters are from base62 charset
+	for _, char := range tokenID {
+		assert.Contains(t, auth.Base62Chars, string(char), "Token ID should only contain base62 characters")
+	}
+
+	// Test uniqueness (generate multiple tokens and check they're different)
+	tokens := make(map[string]bool)
+	for i := 0; i < 100; i++ {
+		token, err := auth.GenerateTokenID()
+		require.NoError(t, err)
+		assert.Len(t, token, auth.TokenIDLength)
+		assert.False(t, tokens[token], "Generated tokens should be unique")
+		tokens[token] = true
+	}
+}
+
+func TestGenerateTokenIDFormat(t *testing.T) {
+	// Test that token IDs match expected format
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+
+	// Should be exactly 10 characters
+	assert.Equal(t, 10, len(tokenID))
+
+	// Should only contain alphanumeric characters
+	for _, char := range tokenID {
+		isValid := (char >= '0' && char <= '9') ||
+			(char >= 'A' && char <= 'Z') ||
+			(char >= 'a' && char <= 'z')
+		assert.True(t, isValid, "Character %c should be alphanumeric", char)
+	}
+}
diff --git a/backend/pkg/server/auth/api_token_jwt.go b/backend/pkg/server/auth/api_token_jwt.go
new file mode 100644
index 00000000..1f5dc93b
--- /dev/null
+++ b/backend/pkg/server/auth/api_token_jwt.go
@@ -0,0 +1,62 @@
+package auth
+
+import (
+	"errors"
+	"fmt"
+	"time"
+
+	"pentagi/pkg/server/models"
+
+	"github.com/golang-jwt/jwt/v5"
+)
+
+func MakeAPIToken(globalSalt string, claims jwt.Claims) (string, error) {
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+	tokenString, err := token.SignedString(MakeJWTSigningKey(globalSalt))
+	if err != nil {
+		return "", fmt.Errorf("failed to sign token: %w", err)
+	}
+
+	return tokenString, nil
+}
+
+func MakeAPITokenClaims(tokenID, uhash string, uid, rid, ttl uint64) jwt.Claims {
+	now := time.Now()
+	return models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     rid,
+		UID:     uid,
+		UHASH:   uhash,
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(now.Add(time.Duration(ttl) * time.Second)),
+			IssuedAt:  jwt.NewNumericDate(now),
+			Subject:   "api_token",
+		},
+	}
+}
+
+func ValidateAPIToken(tokenString, globalSalt string) (*models.APITokenClaims, error) {
+	var claims models.APITokenClaims
+	token, err := jwt.ParseWithClaims(tokenString, &claims, func(token *jwt.Token) (any, error) {
+		// verify signing algorithm to prevent "alg: none"
+		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
+			return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
+		}
+		return MakeJWTSigningKey(globalSalt), nil
+	})
+	if err != nil {
+		if errors.Is(err, jwt.ErrTokenMalformed) {
+			return nil, fmt.Errorf("token is malformed")
+		} else if errors.Is(err, jwt.ErrTokenExpired) || errors.Is(err, jwt.ErrTokenNotValidYet) {
+			return nil, fmt.Errorf("token is either expired or not active yet")
+		} else {
+			return nil, fmt.Errorf("token invalid: %w", err)
+		}
+	}
+
+	if !token.Valid {
+		return nil, fmt.Errorf("token is invalid")
+	}
+
+	return &claims, nil
+}
diff --git a/backend/pkg/server/auth/api_token_test.go b/backend/pkg/server/auth/api_token_test.go
new file mode 100644
index 00000000..b62a053f
--- /dev/null
+++ b/backend/pkg/server/auth/api_token_test.go
@@ -0,0 +1,463 @@
+package auth_test
+
+import (
+	"testing"
+	"time"
+
+	"pentagi/pkg/server/auth"
+	"pentagi/pkg/server/models"
+
+	"github.com/gin-gonic/gin"
+	"github.com/golang-jwt/jwt/v5"
+	"github.com/jinzhu/gorm"
+	_ "github.com/jinzhu/gorm/dialects/sqlite"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+// setupTestDB creates an in-memory SQLite database for testing
+func setupTestDB(t *testing.T) *gorm.DB {
+	t.Helper()
+	db, err := gorm.Open("sqlite3", ":memory:")
+	require.NoError(t, err)
+
+	// Create roles table
+	result := db.Exec(`
+		CREATE TABLE roles (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			name TEXT NOT NULL UNIQUE
+		)
+	`)
+	require.NoError(t, result.Error, "Failed to create roles table")
+
+	// Create privileges table
+	result = db.Exec(`
+		CREATE TABLE privileges (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			role_id INTEGER NOT NULL,
+			name TEXT NOT NULL,
+			UNIQUE(role_id, name)
+		)
+	`)
+	require.NoError(t, result.Error, "Failed to create privileges table")
+
+	// Create api_tokens table for testing
+	result = db.Exec(`
+		CREATE TABLE api_tokens (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			token_id TEXT NOT NULL UNIQUE,
+			user_id INTEGER NOT NULL,
+			role_id INTEGER NOT NULL,
+			name TEXT,
+			ttl INTEGER NOT NULL,
+			status TEXT NOT NULL DEFAULT 'active',
+			created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
+			updated_at DATETIME DEFAULT CURRENT_TIMESTAMP,
+			deleted_at DATETIME
+		)
+	`)
+	require.NoError(t, result.Error, "Failed to create api_tokens table")
+
+	// Insert test roles
+	db.Exec("INSERT INTO roles (id, name) VALUES (1, 'Admin'), (2, 'User')")
+
+	// Insert test privileges for Admin role
+	db.Exec(`INSERT INTO privileges (role_id, name) VALUES
+		(1, 'users.create'),
+		(1, 'users.delete'),
+		(1, 'users.edit'),
+		(1, 'users.view'),
+		(1, 'roles.view'),
+		(1, 'flows.admin'),
+		(1, 'flows.create'),
+		(1, 'flows.delete'),
+		(1, 'flows.edit'),
+		(1, 'flows.view'),
+		(1, 'settings.tokens.create'),
+		(1, 'settings.tokens.view'),
+		(1, 'settings.tokens.edit'),
+		(1, 'settings.tokens.delete'),
+		(1, 'settings.tokens.admin')`)
+
+	// Insert test privileges for User role
+	db.Exec(`INSERT INTO privileges (role_id, name) VALUES
+		(2, 'roles.view'),
+		(2, 'flows.create'),
+		(2, 'flows.delete'),
+		(2, 'flows.edit'),
+		(2, 'flows.view'),
+		(2, 'settings.tokens.create'),
+		(2, 'settings.tokens.view'),
+		(2, 'settings.tokens.edit'),
+		(2, 'settings.tokens.delete')`)
+
+	// Create users table
+	db.Exec(`
+		CREATE TABLE users (
+			id INTEGER PRIMARY KEY AUTOINCREMENT,
+			hash TEXT NOT NULL UNIQUE,
+			type TEXT NOT NULL DEFAULT 'local',
+			mail TEXT NOT NULL UNIQUE,
+			name TEXT NOT NULL DEFAULT '',
+			status TEXT NOT NULL DEFAULT 'active',
+			role_id INTEGER NOT NULL DEFAULT 2,
+			password TEXT,
+			password_change_required BOOLEAN NOT NULL DEFAULT false,
+			provider TEXT,
+			created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
+			deleted_at DATETIME
+		)
+	`)
+
+	// Insert test users
+	db.Exec("INSERT INTO users (id, hash, mail, name, status, role_id) VALUES (1, 'testhash', 'user1@test.com', 'User 1', 'active', 2)")
+	db.Exec("INSERT INTO users (id, hash, mail, name, status, role_id) VALUES (2, 'testhash2', 'user2@test.com', 'User 2', 'active', 2)")
+
+	time.Sleep(200 * time.Millisecond) // wait for database to be ready
+
+	return db
+}
+
+func TestValidateAPIToken(t *testing.T) {
+	globalSalt := "test_salt"
+
+	testCases := []struct {
+		name        string
+		setup       func() string
+		expectError bool
+		errorMsg    string
+	}{
+		{
+			name: "valid token",
+			setup: func() string {
+				claims := models.APITokenClaims{
+					TokenID: "abc123xyz9",
+					RID:     2,
+					UID:     1,
+					UHASH:   "testhash",
+					RegisteredClaims: jwt.RegisteredClaims{
+						ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+						IssuedAt:  jwt.NewNumericDate(time.Now()),
+						Subject:   "api_token",
+					},
+				}
+				token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+				tokenString, _ := token.SignedString(auth.MakeJWTSigningKey(globalSalt))
+				return tokenString
+			},
+			expectError: false,
+		},
+		{
+			name: "expired token",
+			setup: func() string {
+				claims := models.APITokenClaims{
+					TokenID: "abc123xyz9",
+					RID:     2,
+					UID:     1,
+					UHASH:   "testhash",
+					RegisteredClaims: jwt.RegisteredClaims{
+						ExpiresAt: jwt.NewNumericDate(time.Now().Add(-1 * time.Hour)),
+						IssuedAt:  jwt.NewNumericDate(time.Now().Add(-2 * time.Hour)),
+						Subject:   "api_token",
+					},
+				}
+				token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+				tokenString, _ := token.SignedString(auth.MakeJWTSigningKey(globalSalt))
+				return tokenString
+			},
+			expectError: true,
+			errorMsg:    "expired",
+		},
+		{
+			name: "invalid signature",
+			setup: func() string {
+				claims := models.APITokenClaims{
+					TokenID: "abc123xyz9",
+					RID:     2,
+					UID:     1,
+					UHASH:   "testhash",
+					RegisteredClaims: jwt.RegisteredClaims{
+						ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+						IssuedAt:  jwt.NewNumericDate(time.Now()),
+						Subject:   "api_token",
+					},
+				}
+				token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+				tokenString, _ := token.SignedString([]byte("wrong_key"))
+				return tokenString
+			},
+			expectError: true,
+			errorMsg:    "invalid",
+		},
+		{
+			name: "malformed token",
+			setup: func() string {
+				return "not.a.valid.jwt.token"
+			},
+			expectError: true,
+			errorMsg:    "malformed",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			tokenString := tc.setup()
+			claims, err := auth.ValidateAPIToken(tokenString, globalSalt)
+
+			if tc.expectError {
+				assert.Error(t, err)
+				if tc.errorMsg != "" {
+					assert.Contains(t, err.Error(), tc.errorMsg)
+				}
+				assert.Nil(t, claims)
+			} else {
+				assert.NoError(t, err)
+				assert.NotNil(t, claims)
+				assert.Equal(t, "abc123xyz9", claims.TokenID)
+				assert.Equal(t, uint64(1), claims.UID)
+				assert.Equal(t, uint64(2), claims.RID)
+			}
+		})
+	}
+}
+
+func TestAPITokenAuthentication_CacheExpiration(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	// Create cache with short TTL for testing
+	tokenCache := auth.NewTokenCache(db)
+	tokenCache.SetTTL(100 * time.Millisecond)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	// Create active token
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  1,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	// Create JWT
+	claims := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     1,
+		UHASH:   "testhash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+	tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test"))
+	require.NoError(t, err)
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// First call: should work (status active, cached)
+	assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+
+	// Revoke token in DB
+	db.Model(&apiToken).Update("status", models.TokenStatusRevoked)
+
+	// Second call: should still work (cache not expired)
+	assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+
+	// Wait for cache to expire
+	time.Sleep(150 * time.Millisecond)
+
+	// Third call: should fail (cache expired, reads from DB)
+	assert.False(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+}
+
+func TestAPITokenAuthentication_DefaultSalt(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	testCases := []struct {
+		name       string
+		globalSalt string
+		shouldSkip bool
+	}{
+		{
+			name:       "default salt 'salt'",
+			globalSalt: "salt",
+			shouldSkip: true,
+		},
+		{
+			name:       "empty salt",
+			globalSalt: "",
+			shouldSkip: true,
+		},
+		{
+			name:       "custom salt",
+			globalSalt: "custom_secure_salt",
+			shouldSkip: false,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			tokenCache := auth.NewTokenCache(db)
+			userCache := auth.NewUserCache(db)
+			authMiddleware := auth.NewAuthMiddleware("/base/url", tc.globalSalt, tokenCache, userCache)
+
+			// Create a token (even with default salt, for testing)
+			tokenID, err := auth.GenerateTokenID()
+			require.NoError(t, err)
+			claims := models.APITokenClaims{
+				TokenID: tokenID,
+				RID:     2,
+				UID:     1,
+				UHASH:   "testhash",
+				RegisteredClaims: jwt.RegisteredClaims{
+					ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+					IssuedAt:  jwt.NewNumericDate(time.Now()),
+					Subject:   "api_token",
+				},
+			}
+			token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+			tokenString, _ := token.SignedString(auth.MakeJWTSigningKey(tc.globalSalt))
+
+			server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+			defer server.Close()
+
+			// With default salt, token validation should be skipped
+			result := server.CallAndGetStatus(t, "Bearer "+tokenString)
+
+			if tc.shouldSkip {
+				// Should skip token auth and try cookie (which will fail)
+				assert.False(t, result)
+			} else {
+				// With custom salt but no DB record, should fail with "not found"
+				assert.False(t, result)
+			}
+		})
+	}
+}
+
+func TestAPITokenAuthentication_SoftDelete(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	// Create token
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  1,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	// Create JWT
+	claims := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     1,
+		UHASH:   "testhash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+	tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test"))
+	require.NoError(t, err)
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// Should work initially
+	assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+
+	// Soft delete
+	now := time.Now()
+	db.Model(&apiToken).Update("deleted_at", now)
+	tokenCache.Invalidate(tokenID)
+
+	// Should fail after soft delete
+	assert.False(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+}
+
+func TestAPITokenAuthentication_AlgNoneAttack(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+
+	// Create token with "none" algorithm (security attack)
+	claims := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     1,
+		UHASH:   "testhash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+
+	// Try to use "none" algorithm
+	token := jwt.NewWithClaims(jwt.SigningMethodNone, claims)
+	tokenString, err := token.SignedString(jwt.UnsafeAllowNoneSignatureType)
+	require.NoError(t, err)
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// Should reject "none" algorithm
+	assert.False(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+}
+
+func TestAPITokenAuthentication_LegacyProtoToken(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// Authorize with cookie to get legacy proto token
+	server.Authorize(t, []string{auth.PrivilegeAutomation})
+	legacyToken := server.GetToken(t)
+	require.NotEmpty(t, legacyToken)
+
+	// Unauthorize cookie
+	server.Unauthorize(t)
+
+	// Legacy proto token should still work (fallback mechanism)
+	server.SetSessionCheckFunc(func(t *testing.T, c *gin.Context) {
+		t.Helper()
+		assert.Equal(t, uint64(1), c.GetUint64("uid"))
+		assert.Equal(t, "automation", c.GetString("cpt"))
+	})
+
+	assert.True(t, server.CallAndGetStatus(t, "Bearer "+legacyToken))
+}
diff --git a/backend/pkg/server/auth/auth_middleware.go b/backend/pkg/server/auth/auth_middleware.go
index de26058f..a8ee4140 100644
--- a/backend/pkg/server/auth/auth_middleware.go
+++ b/backend/pkg/server/auth/auth_middleware.go
@@ -8,11 +8,12 @@ import (
 	"time"
 
 	"pentagi/pkg/server/models"
+	"pentagi/pkg/server/rdb"
 	"pentagi/pkg/server/response"
 
 	"github.com/gin-contrib/sessions"
 	"github.com/gin-gonic/gin"
-	"github.com/golang-jwt/jwt/v5"
+	"github.com/jinzhu/gorm"
 )
 
 type authResult int
@@ -26,24 +27,28 @@ const (
 
 type AuthMiddleware struct {
 	globalSalt string
+	tokenCache *TokenCache
+	userCache  *UserCache
 }
 
-func NewAuthMiddleware(baseURL, globalSalt string) *AuthMiddleware {
+func NewAuthMiddleware(baseURL, globalSalt string, tokenCache *TokenCache, userCache *UserCache) *AuthMiddleware {
 	return &AuthMiddleware{
 		globalSalt: globalSalt,
+		tokenCache: tokenCache,
+		userCache:  userCache,
 	}
 }
 
-func (p *AuthMiddleware) AuthRequired(c *gin.Context) {
+func (p *AuthMiddleware) AuthUserRequired(c *gin.Context) {
 	p.tryAuth(c, true, p.tryUserCookieAuthentication)
 }
 
-func (p *AuthMiddleware) AuthTokenProtoRequired(c *gin.Context) {
+func (p *AuthMiddleware) AuthTokenRequired(c *gin.Context) {
 	p.tryAuth(c, true, p.tryProtoTokenAuthentication, p.tryUserCookieAuthentication)
 }
 
 func (p *AuthMiddleware) TryAuth(c *gin.Context) {
-	p.tryAuth(c, false, p.tryUserCookieAuthentication)
+	p.tryAuth(c, false, p.tryProtoTokenAuthentication, p.tryUserCookieAuthentication)
 }
 
 func (p *AuthMiddleware) tryAuth(
@@ -94,9 +99,9 @@ func (p *AuthMiddleware) tryUserCookieAuthentication(c *gin.Context) (authResult
 	tid := session.Get("tid")
 	uname := session.Get("uname")
 
-	for _, attr := range []interface{}{uid, rid, prm, exp, gtm, uname, tid} {
+	for _, attr := range []any{uid, rid, prm, exp, gtm, uname, uhash, tid} {
 		if attr == nil {
-			return authResultFail, errors.New("token claim invalid")
+			return authResultFail, errors.New("cookie claim invalid")
 		}
 	}
 
@@ -105,6 +110,7 @@ func (p *AuthMiddleware) tryUserCookieAuthentication(c *gin.Context) (authResult
 		return authResultFail, errors.New("no pemissions granted")
 	}
 
+	// Verify session expiration
 	expVal, ok := exp.(int64)
 	if !ok {
 		return authResultFail, errors.New("token claim invalid")
@@ -113,9 +119,33 @@
 		return authResultFail, errors.New("session expired")
 	}
 
+	// Verify user hash matches database
+	userID := uid.(uint64)
+	sessionHash := uhash.(string)
+
+	dbHash, userStatus, err := p.userCache.GetUserHash(userID)
+	if err != nil {
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return authResultFail, errors.New("user has been deleted")
+		}
+		return authResultFail, fmt.Errorf("error checking user status: %w", err)
+	}
+
+	switch userStatus {
+	case models.UserStatusBlocked:
+		return authResultFail, errors.New("user has been blocked")
+	case models.UserStatusCreated:
+		return authResultFail, errors.New("user is not ready")
+	case models.UserStatusActive:
+	}
+
+	if dbHash != sessionHash {
+		return authResultFail, errors.New("user hash mismatch - session invalid for this installation")
+	}
+
 	c.Set("prm", prms)
-	c.Set("uid", uid.(uint64))
-	c.Set("uhash", uhash.(string))
+	c.Set("uid", userID)
+	c.Set("uhash", sessionHash)
 	c.Set("rid", rid.(uint64))
 	c.Set("exp", exp.(int64))
 	c.Set("gtm", gtm.(int64))
@@ -145,45 +175,63 @@
 		return authResultSkip, errors.New("token can't be empty")
 	}
 
-	claims, err := ValidateToken(token, p.globalSalt)
-	if err != nil {
-		return authResultFail, errors.New("token is invalid")
+	// skip validation if using default salt (for backward compatibility)
+	if p.globalSalt == "" || p.globalSalt == "salt" {
+		return authResultSkip, errors.New("token validation disabled with default salt")
 	}
 
-	c.Set("uid", claims.UID)
-	c.Set("tid", claims.TID)
-	c.Set("uhash", claims.UHASH)
-	c.Set("rid", claims.RID)
-	c.Set("cpt", claims.CPT)
-	c.Set("prm", []string{PrivilegeAutomation})
-
-	c.Next()
-
-	return authResultOk, nil
-}
+	// try to validate as API token first (new format with JWT signing key)
+	apiClaims, apiErr := ValidateAPIToken(token, p.globalSalt)
+	if apiErr != nil {
+		return authResultFail, errors.New("token is invalid")
+	}
 
-func ValidateToken(tokenString, globalSalt string) (*models.ProtoAuthTokenClaims, error) {
-	var claims models.ProtoAuthTokenClaims
-	token, err := jwt.ParseWithClaims(tokenString, &claims, func(token *jwt.Token) (interface{}, error) {
-		// verify signing algorithm to prevent "alg: none"
-		if _, ok := token.Method.(*jwt.SigningMethodHMAC); !ok {
-			return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
+	// check token status and get privileges through cache
+	status, privileges, err := p.tokenCache.GetStatus(apiClaims.TokenID)
+	if err != nil {
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return authResultFail, errors.New("token not found in database")
 		}
-		return MakeCookieStoreKey(globalSalt)[1], nil
-	})
+		return authResultFail, fmt.Errorf("error checking token status: %w", err)
+	}
+	if status != models.TokenStatusActive {
+		return authResultFail, errors.New("token has been revoked")
+	}
+
+	// Verify user hash matches database
+	dbHash, userStatus, err := p.userCache.GetUserHash(apiClaims.UID)
 	if err != nil {
-		if errors.Is(err, jwt.ErrTokenMalformed) {
-			return nil, fmt.Errorf("token is malformed")
-		} else if errors.Is(err, jwt.ErrTokenExpired) || errors.Is(err, jwt.ErrTokenNotValidYet) {
-			return nil, fmt.Errorf("token is either expired or not active yet")
-		} else {
-			return nil, fmt.Errorf("token invalid: %w", err)
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			return authResultFail, errors.New("user has been deleted")
 		}
+		return authResultFail, fmt.Errorf("error checking user status: %w", err)
 	}
 
-	if !token.Valid {
-		return nil, fmt.Errorf("token is invalid")
+	if userStatus == models.UserStatusBlocked {
+		return authResultFail, errors.New("user has been blocked")
 	}
 
-	return &claims, nil
+	if dbHash != apiClaims.UHASH {
+		return authResultFail, errors.New("user hash mismatch - token invalid for this installation")
+	}
+
+	// generate UUID from user hash (fallback to empty string if hash is invalid)
+	uuid, err := rdb.MakeUuidStrFromHash(apiClaims.UHASH)
+	if err != nil {
+		// Use empty UUID for invalid hashes (e.g., in tests)
+		uuid = ""
+	}
+
+	// set session fields similar to regular login
+	c.Set("uid", apiClaims.UID)
+	c.Set("uhash", apiClaims.UHASH)
+	c.Set("rid", apiClaims.RID)
+	c.Set("tid", models.UserTypeAPI.String())
+	c.Set("prm", privileges)
+	c.Set("gtm", time.Now().Unix())
+	c.Set("exp", apiClaims.ExpiresAt.Unix())
+	c.Set("uuid", uuid)
+	c.Set("cpt", "automation")
+
+	return authResultOk, nil
 }
diff --git a/backend/pkg/server/auth/auth_middleware_test.go b/backend/pkg/server/auth/auth_middleware_test.go
index ab5c8259..e5115787 100644
--- a/backend/pkg/server/auth/auth_middleware_test.go
+++ b/backend/pkg/server/auth/auth_middleware_test.go
@@ -1,4 +1,4 @@
-package auth
+package auth_test
 
 import (
 	"bytes"
@@ -12,20 +12,28 @@ import (
 	"testing"
 	"time"
 
+	"pentagi/pkg/server/auth"
 	"pentagi/pkg/server/models"
 
 	"github.com/gin-contrib/sessions"
 	"github.com/gin-contrib/sessions/cookie"
 	"github.com/gin-gonic/gin"
+	"github.com/jinzhu/gorm"
+	_ "github.com/jinzhu/gorm/dialects/sqlite"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
 
 func TestAuthTokenProtoRequiredAuthWithCookie(t *testing.T) {
-	authMiddleware := NewAuthMiddleware("/base/url", "test")
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
 
 	t.Run("test URL", func(t *testing.T) {
-		server := newTestServer(t, "/test", authMiddleware.AuthTokenProtoRequired)
+		server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
 		defer server.Close()
 
 		assert.False(t, server.CallAndGetStatus(t))
@@ -46,23 +54,27 @@ func TestAuthTokenProtoRequiredAuthWithCookie(t *testing.T) {
 			assert.Equal(t, "automation", c.GetString("cpt"))
 		})
 
-		server.Authorize(t, []string{PrivilegeAutomation})
+		server.Authorize(t, []string{auth.PrivilegeAutomation})
 		assert.True(t, server.CallAndGetStatus(t))
 
-		server.Authorize(t, []string{"wrong.permission", PrivilegeAutomation})
+		server.Authorize(t, []string{"wrong.permission", auth.PrivilegeAutomation})
 		assert.True(t, server.CallAndGetStatus(t))
 	})
 }
 
 func TestAuthTokenProtoRequiredAuthWithToken(t *testing.T) {
-	authMiddleware := NewAuthMiddleware("/base/url", "test")
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
 
-	server := newTestServer(t, "/test", authMiddleware.AuthTokenProtoRequired)
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
 	defer server.Close()
 
-	server.Authorize(t, []string{PrivilegeAutomation})
-	kind := "automation"
-	token := server.GetToken(t, kind)
+	server.Authorize(t, []string{auth.PrivilegeAutomation})
+	token := server.GetToken(t)
 	require.NotEmpty(t, token)
 
 	server.Unauthorize(t)
@@ -78,10 +90,18 @@ func TestAuthTokenProtoRequiredAuthWithToken(t *testing.T) {
 		assert.Equal(t, uint64(1), c.GetUint64("uid"))
 		assert.Equal(t, uint64(2), c.GetUint64("rid"))
 		assert.NotNil(t, c.GetStringSlice("prm"))
-		assert.Zero(t, c.GetInt64("gtm"))
-		assert.Zero(t, c.GetInt64("exp"))
-		assert.Empty(t, c.GetString("uuid"))
-		assert.Equal(t, kind, c.GetString("cpt"))
+
+		// gtm and exp should now be set for API tokens
+		gtm := c.GetInt64("gtm")
+		assert.Greater(t, gtm, int64(0), "GTM should be set")
+
+		exp := c.GetInt64("exp")
+		assert.Greater(t, exp, gtm, "EXP should be greater than GTM")
+
+		// uuid will be empty for invalid hash (test uses "123" which is not valid MD5)
+		assert.NotNil(t, c.GetString("uuid"))
+
+		assert.Equal(t, "automation", c.GetString("cpt"))
 		assert.Empty(t, c.GetString("uname"))
 	})
 
@@ -89,9 +109,14 @@
 }
 
 func TestAuthRequiredAuthWithCookie(t *testing.T) {
-	authMiddleware := NewAuthMiddleware("/base/url", "test")
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
 
-	server := newTestServer(t, "/test", authMiddleware.AuthRequired)
+	server := newTestServer(t, "/test", db, authMiddleware.AuthUserRequired)
 	defer server.Close()
 
 	server.SetSessionCheckFunc(func(t *testing.T, c *gin.Context) {
@@ -102,7 +127,7 @@ func TestAuthRequiredAuthWithCookie(t *testing.T) {
 		assert.NotNil(t, c.GetInt64("gtm"))
 		assert.NotNil(t, c.GetInt64("exp"))
 		assert.Empty(t, c.GetString("uuid"))
-		assert.Equal(t, "name1", c.GetString("uname"))
+		assert.Equal(t, "User 1", c.GetString("uname"))
 	})
 
 	assert.False(t, server.CallAndGetStatus(t))
@@ -116,16 +141,20 @@ type testServer struct {
 	client           *http.Client
 	calls            map[string]struct{}
 	sessionCheckFunc func(t *testing.T, c *gin.Context)
+	db               *gorm.DB
 	*httptest.Server
 }
 
-func newTestServer(t *testing.T, testEndpoint string, middlewares ...gin.HandlerFunc) *testServer {
+func newTestServer(t *testing.T, testEndpoint string, db *gorm.DB, middlewares ...gin.HandlerFunc) *testServer {
 	t.Helper()
 
-	server := &testServer{}
+	server := &testServer{
+		db: db,
+	}
 	router := gin.New()
 
-	cookieStore := cookie.NewStore(MakeCookieStoreKey("test")...)
+	globalSalt := "test"
+	cookieStore := cookie.NewStore(auth.MakeCookieStoreKey(globalSalt)...)
 	router.Use(sessions.Sessions("auth", cookieStore))
 
 	server.calls = map[string]struct{}{}
@@ -135,8 +164,6 @@
 	}
 	server.testEndpoint = testEndpoint
 
-	protoService := NewProtoService("test")
-
 	router.GET("/auth", func(c *gin.Context) {
 		t.Helper()
 		privs, _ := c.GetQueryArray("privileges")
@@ -151,8 +178,10 @@
 	for _, middleware := range middlewares {
 		authRoutes.Use(middleware)
 	}
+
 	authRoutes.GET(server.testEndpoint, func(c *gin.Context) {
 		t.Helper()
+
 		id, _ := c.GetQuery("id")
 		require.NotEmpty(t, id)
 
@@ -161,15 +190,28 @@
 		}
 		server.calls[id] = struct{}{}
 	})
+
 	authRoutes.GET("/auth_token", func(c *gin.Context) {
 		t.Helper()
-		cpt, ok := c.GetQuery("cpt")
-		assert.True(t, ok)
-		token, err := protoService.MakeToken(c, &models.ProtoAuthTokenRequest{
-			TTL:  3600,
-			Type: cpt,
-		})
+
+		tokenID, err := auth.GenerateTokenID()
+		require.NoError(t, err)
+		uhash := "testhash"
+		uid := uint64(1)
+		rid := uint64(2)
+		ttl := uint64(3600)
+		claims := auth.MakeAPITokenClaims(tokenID, uhash, uid, rid, ttl)
+		token, err := auth.MakeAPIToken(globalSalt, claims)
 		require.NoError(t, err)
+
+		db.Create(&models.APIToken{
+			TokenID: tokenID,
+			UserID:  uid,
+			RoleID:  rid,
+			TTL:     ttl,
+			Status:  models.TokenStatusActive,
+		})
+
 		c.Writer.Write([]byte(token))
 	})
 
@@ -198,13 +240,10 @@ func (s *testServer) Authorize(t *testing.T, privileges []string) {
 	assert.Equal(t, http.StatusOK, resp.StatusCode)
 }
 
-func (s *testServer) GetToken(t *testing.T, cpt string) string {
+func (s *testServer) GetToken(t *testing.T) string {
 	t.Helper()
 
 	request, err := http.NewRequest(http.MethodGet, s.URL+"/auth_token", nil)
 	require.NoError(t, err)
-	query := url.Values{}
-	query.Add("cpt", cpt)
-	request.URL.RawQuery = query.Encode()
 
 	resp, err := s.client.Do(request)
 	require.NoError(t, err)
@@ -282,14 +321,14 @@ func setTestSession(t *testing.T, c *gin.Context, privileges []string, expires i
 	t.Helper()
 	session := sessions.Default(c)
 	session.Set("uid", uint64(1))
-	session.Set("uhash", "123")
+	session.Set("uhash", "testhash")
 	session.Set("rid", uint64(2))
 	session.Set("tid", models.UserTypeLocal.String())
 	session.Set("prm", privileges)
 	session.Set("gtm", time.Now().Unix())
 	session.Set("exp", time.Now().Add(time.Duration(expires)*time.Second).Unix())
 	session.Set("uuid", "uuid1")
-	session.Set("uname", "name1")
+	session.Set("uname", "User 1")
 	session.Options(sessions.Options{
 		HttpOnly: true,
 		MaxAge:   expires,
diff --git a/backend/pkg/server/auth/integration_test.go b/backend/pkg/server/auth/integration_test.go
new file mode 100644
index 00000000..88bd16d3
--- /dev/null
+++ b/backend/pkg/server/auth/integration_test.go
@@ -0,0 +1,890 @@
+package auth_test
+
+import (
+	"sync"
+	"testing"
+	"time"
+
+	"pentagi/pkg/server/auth"
+	"pentagi/pkg/server/models"
+
+	"github.com/gin-gonic/gin"
+	"github.com/golang-jwt/jwt/v5"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
+)
+
+// TestEndToEndAPITokenFlow tests complete flow from creation to usage
+func TestEndToEndAPITokenFlow(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test_salt", tokenCache, userCache)
+
+	testCases := []struct {
+		name          string
+		tokenID       string
+		status        models.TokenStatus
+		shouldPass    bool
+		errorContains string
+	}{
+		{
+			name:       "active token authenticates successfully",
+			tokenID:    "active123",
+			status:     models.TokenStatusActive,
+			shouldPass: true,
+		},
+		{
+			name:          "revoked token is rejected",
+			tokenID:       "revoked456",
+			status:        models.TokenStatusRevoked,
+			shouldPass:    false,
+			errorContains: "revoked",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			// Create token in database
+			apiToken := models.APIToken{
+				TokenID: tc.tokenID,
+				UserID:  1,
+				RoleID:  2,
+				TTL:     3600,
+				Status:  tc.status,
+			}
+			err := db.Create(&apiToken).Error
+			require.NoError(t, err)
+
+			// Create JWT token
+			claims := models.APITokenClaims{
+				TokenID: tc.tokenID,
+				RID:     2,
+				UID:     1,
+				UHASH:   "testhash",
+				RegisteredClaims: jwt.RegisteredClaims{
+					ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+					IssuedAt:  jwt.NewNumericDate(time.Now()),
+					Subject:   "api_token",
+				},
+			}
+			token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+			tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test_salt"))
+			require.NoError(t, err)
+
+			// Test authentication
+			server := newTestServer(t, "/protected", db, authMiddleware.AuthTokenRequired)
+			defer server.Close()
+
+			success := server.CallAndGetStatus(t, "Bearer "+tokenString)
+			assert.Equal(t, tc.shouldPass, success)
+		})
+	}
+}
+
+// TestAPIToken_RoleIsolation verifies that token inherits creator's role
+func TestAPIToken_RoleIsolation(t *testing.T) {
+	testCases := []struct {
+		name        string
+		creatorRole uint64
+		tokenRole   uint64
+		expectMatch bool
+	}{
+		{
+			name:        "user creates token with user role",
+			creatorRole: 2,
+			tokenRole:   2,
+			expectMatch: true,
+		},
+		{
+			name:        "admin creates token with admin role",
+			creatorRole: 1,
+			tokenRole:   1,
+			expectMatch: true,
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			tokenID, err := auth.GenerateTokenID()
+			require.NoError(t, err)
+
+			// Create JWT with specific role
+			claims := models.APITokenClaims{
+				TokenID: tokenID,
+				RID:     tc.tokenRole,
+				UID:     1,
+				UHASH:   "testhash",
+				RegisteredClaims: jwt.RegisteredClaims{
+					ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+					IssuedAt:  jwt.NewNumericDate(time.Now()),
+					Subject:   "api_token",
+				},
+			}
+
+			token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+			tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test"))
+			require.NoError(t, err)
+
+			// Validate and check role
+			validated, err := auth.ValidateAPIToken(tokenString, "test")
+			require.NoError(t, err)
+
+			if tc.expectMatch {
+				assert.Equal(t, tc.tokenRole, validated.RID)
+			}
+		})
+	}
+}
+
+// TestAPIToken_SignatureVerification tests various signature attacks
+func TestAPIToken_SignatureVerification(t *testing.T) {
+	correctSalt := "correct_salt"
+	wrongSalt := "wrong_salt"
+
+	testCases := []struct {
+		name          string
+		signSalt      string
+		verifySalt    string
+		expectValid   bool
+		errorContains string
+	}{
+		{
+			name:        "matching salt - valid",
+			signSalt:    correctSalt,
+			verifySalt:  correctSalt,
+			expectValid: true,
+		},
+		{
+			name:          "mismatched salt - invalid",
+			signSalt:      correctSalt,
+			verifySalt:    wrongSalt,
+			expectValid:   false,
+			errorContains: "invalid",
+		},
+	}
+
+	for _, tc := range testCases {
+		t.Run(tc.name, func(t *testing.T) {
+			tokenID, err := auth.GenerateTokenID()
+			require.NoError(t, err)
+
+			claims := models.APITokenClaims{
+				TokenID: tokenID,
+				RID:     2,
+				UID:     1,
+				UHASH:   "testhash",
+				RegisteredClaims: jwt.RegisteredClaims{
+					ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+					IssuedAt:  jwt.NewNumericDate(time.Now()),
+					Subject:   "api_token",
+				},
+			}
+
+			token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+			tokenString, err := token.SignedString(auth.MakeJWTSigningKey(tc.signSalt))
+			require.NoError(t, err)
+
+			validated, err := auth.ValidateAPIToken(tokenString, tc.verifySalt)
+
+			if tc.expectValid {
+				assert.NoError(t, err)
+				assert.NotNil(t, validated)
+			} else {
+				assert.Error(t, err)
+				if tc.errorContains != "" {
+					assert.Contains(t, err.Error(), tc.errorContains)
+				}
+			}
+		})
+	}
+}
+
+// TestAPIToken_CacheInvalidation verifies cache invalidation scenarios
+func TestAPIToken_CacheInvalidation(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+
+	// Create token
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  1,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	// Load into cache
+	status1, _, err := tokenCache.GetStatus(tokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusActive, status1)
+
+	// Update in DB
+	db.Model(&apiToken).Update("status", models.TokenStatusRevoked)
+
+	// Should still return active from cache
+	status2, _, err := tokenCache.GetStatus(tokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusActive, status2, "Cache should return stale value")
+
+	// Invalidate cache
+	tokenCache.Invalidate(tokenID)
+
+	// Should now return revoked from DB
+	status3, _, err := tokenCache.GetStatus(tokenID)
+	require.NoError(t, err)
+	assert.Equal(t, models.TokenStatusRevoked, status3, "Cache should be refreshed from DB")
+}
+
+// TestAPIToken_ConcurrentAccess tests thread-safety of cache
+func TestAPIToken_ConcurrentAccess(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+
+	// Create multiple tokens
+	tokenIDs := make([]string, 10)
+	for i := range 10 {
+		tokenID, err := auth.GenerateTokenID()
+		require.NoError(t, err)
+		tokenIDs[i] = tokenID
+		apiToken := models.APIToken{
+			TokenID: tokenID,
+			UserID:  1,
+			RoleID:  2,
+			TTL:     3600,
+			Status:  models.TokenStatusActive,
+		}
+		err = db.Create(&apiToken).Error
+		require.NoError(t, err)
+	}
+
+	// Verify
tokens were created + var count int + db.Model(&models.APIToken{}).Where("deleted_at IS NULL").Count(&count) + require.Equal(t, 10, count) + + // Warm up cache + for i := range 10 { + status, _, err := tokenCache.GetStatus(tokenIDs[i]) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + } + + // Concurrent cache access using channels for error reporting + type testResult struct { + success bool + err error + } + results := make(chan testResult, 10) + + var wg sync.WaitGroup + wg.Add(10) + for i := range 10 { + go func(tokenID string) { + defer wg.Done() + for range 100 { + status, _, err := tokenCache.GetStatus(tokenID) + if err != nil { + results <- testResult{success: false, err: err} + return + } + if status != models.TokenStatusActive { + results <- testResult{success: false, err: assert.AnError} + return + } + } + results <- testResult{success: true, err: nil} + }(tokenIDs[i]) + } + + wg.Wait() + close(results) + + // Wait and check all results + for result := range results { + assert.NoError(t, result.err) + assert.True(t, result.success, "Goroutine should complete successfully") + } +} + +// TestAPIToken_JSONStructure verifies JWT payload structure +func TestAPIToken_JSONStructure(t *testing.T) { + tokenID, err := auth.GenerateTokenID() + require.NoError(t, err) + + claims := models.APITokenClaims{ + TokenID: tokenID, + RID: 2, + UID: 1, + UHASH: "testhash", + RegisteredClaims: jwt.RegisteredClaims{ + ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)), + IssuedAt: jwt.NewNumericDate(time.Now()), + Subject: "api_token", + }, + } + + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test")) + require.NoError(t, err) + + // Parse and verify all fields + parsed, err := auth.ValidateAPIToken(tokenString, "test") + require.NoError(t, err) + + assert.Equal(t, tokenID, parsed.TokenID, "TokenID should match") + assert.Equal(t, uint64(2), parsed.RID, "RID 
should match") + assert.Equal(t, uint64(1), parsed.UID, "UID should match") + assert.Equal(t, "testhash", parsed.UHASH, "UHASH should match") + assert.Equal(t, "api_token", parsed.Subject, "Subject should match") + assert.NotNil(t, parsed.ExpiresAt, "ExpiresAt should be set") + assert.NotNil(t, parsed.IssuedAt, "IssuedAt should be set") +} + +// TestAPIToken_Expiration verifies TTL enforcement +func TestAPIToken_Expiration(t *testing.T) { + testCases := []struct { + name string + ttl time.Duration + expectValid bool + }{ + { + name: "future expiration - valid", + ttl: 1 * time.Hour, + expectValid: true, + }, + { + name: "past expiration - invalid", + ttl: -1 * time.Hour, + expectValid: false, + }, + { + name: "just expired - invalid", + ttl: -1 * time.Second, + expectValid: false, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + tokenID, err := auth.GenerateTokenID() + require.NoError(t, err) + + claims := models.APITokenClaims{ + TokenID: tokenID, + RID: 2, + UID: 1, + UHASH: "testhash", + RegisteredClaims: jwt.RegisteredClaims{ + ExpiresAt: jwt.NewNumericDate(time.Now().Add(tc.ttl)), + IssuedAt: jwt.NewNumericDate(time.Now().Add(-1 * time.Hour)), + Subject: "api_token", + }, + } + + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test")) + require.NoError(t, err) + + validated, err := auth.ValidateAPIToken(tokenString, "test") + + if tc.expectValid { + assert.NoError(t, err) + assert.NotNil(t, validated) + } else { + assert.Error(t, err) + assert.Contains(t, err.Error(), "expired") + } + }) + } +} + +// TestDualAuthentication verifies both cookie and token auth work together +func TestDualAuthentication(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + userCache := auth.NewUserCache(db) + authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache) + + server := newTestServer(t, 
"/test", db, authMiddleware.AuthTokenRequired) + defer server.Close() + + // Test 1: Cookie authentication + server.Authorize(t, []string{auth.PrivilegeAutomation}) + assert.True(t, server.CallAndGetStatus(t), "Cookie auth should work") + + // Test 2: Create and use API token + tokenID, err := auth.GenerateTokenID() + require.NoError(t, err) + apiToken := models.APIToken{ + TokenID: tokenID, + UserID: 1, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err = db.Create(&apiToken).Error + require.NoError(t, err) + + claims := models.APITokenClaims{ + TokenID: tokenID, + RID: 2, + UID: 1, + UHASH: "testhash", + RegisteredClaims: jwt.RegisteredClaims{ + ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)), + IssuedAt: jwt.NewNumericDate(time.Now()), + Subject: "api_token", + }, + } + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + tokenString, _ := token.SignedString(auth.MakeJWTSigningKey("test")) + + // Unauthorize cookie + server.Unauthorize(t) + + // Test 3: Token authentication should work + assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString), "Token auth should work") + + // Test 4: Both should work simultaneously + server.Authorize(t, []string{auth.PrivilegeAutomation}) + assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString), "Both auth methods should work") +} + +// TestSecurityAudit_ClaimsInJWT verifies all security-critical data is in JWT +func TestSecurityAudit_ClaimsInJWT(t *testing.T) { + // Create token in DB with certain values + tokenID, err := auth.GenerateTokenID() + require.NoError(t, err) + dbToken := models.APIToken{ + TokenID: tokenID, + UserID: 1, + RoleID: 2, // User role in DB + } + + // Create JWT with different role (simulating compromise scenario) + jwtClaims := models.APITokenClaims{ + TokenID: tokenID, + RID: 1, // Admin role in JWT (different from DB!) 
+		UID:     1,
+		UHASH:   "testhash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwtClaims)
+	tokenString, _ := token.SignedString(auth.MakeJWTSigningKey("test"))
+
+	// Validate token
+	validated, err := auth.ValidateAPIToken(tokenString, "test")
+	require.NoError(t, err)
+
+	// We trust JWT claims, not DB values
+	assert.Equal(t, uint64(1), validated.RID, "Should use role from JWT, not DB")
+	assert.NotEqual(t, dbToken.RoleID, validated.RID, "JWT role differs from DB role")
+	assert.Equal(t, dbToken.UserID, validated.UID)
+	assert.Equal(t, dbToken.TokenID, validated.TokenID)
+
+	// This is CORRECT behavior: DB only stores metadata for management
+	// Actual authorization data comes from signed JWT
+}
+
+// TestSecurityAudit_TokenIDUniqueness verifies token ID collision resistance
+func TestSecurityAudit_TokenIDUniqueness(t *testing.T) {
+	iterations := 10000
+	tokens := make(map[string]bool, iterations)
+
+	for i := 0; i < iterations; i++ {
+		tokenID, err := auth.GenerateTokenID()
+		require.NoError(t, err)
+
+		// Check format
+		assert.Len(t, tokenID, 10)
+
+		// Check uniqueness
+		if tokens[tokenID] {
+			t.Fatalf("Duplicate token ID generated: %s", tokenID)
+		}
+		tokens[tokenID] = true
+	}
+
+	t.Logf("Generated %d unique token IDs without collision", iterations)
+}
+
+// TestSecurityAudit_SaltIsolation verifies JWT and Cookie keys are different
+func TestSecurityAudit_SaltIsolation(t *testing.T) {
+	salts := []string{"salt1", "salt2", "production_salt"}
+
+	for _, salt := range salts {
+		t.Run("salt="+salt, func(t *testing.T) {
+			jwtKey := auth.MakeJWTSigningKey(salt)
+			cookieKeys := auth.MakeCookieStoreKey(salt)
+
+			// JWT key must be different from both cookie keys
+			assert.NotEqual(t, jwtKey, cookieKeys[0], "JWT key must differ from cookie auth key")
+			assert.NotEqual(t, jwtKey, cookieKeys[1], "JWT key must differ from cookie encryption key")
+
+			// Verify key lengths
+			assert.Len(t, jwtKey, 32, "JWT key must be 32 bytes")
+			assert.Len(t, cookieKeys[0], 64, "Cookie auth key must be 64 bytes")
+			assert.Len(t, cookieKeys[1], 32, "Cookie encryption key must be 32 bytes")
+		})
+	}
+}
+
+// TestAPIToken_ContextSetup verifies correct context values are set
+func TestAPIToken_ContextSetup(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	// Create token
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  5,
+		RoleID:  3,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	user := models.User{
+		ID:     5,
+		Hash:   "user5hash",
+		Mail:   "user5@example.com",
+		Name:   "User 5",
+		Status: models.UserStatusActive,
+		RoleID: 2,
+	}
+	err = db.Create(&user).Error
+	require.NoError(t, err)
+
+	claims := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     5,
+		UHASH:   "user5hash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+	tokenString, _ := token.SignedString(auth.MakeJWTSigningKey("test"))
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	server.SetSessionCheckFunc(func(t *testing.T, c *gin.Context) {
+		t.Helper()
+
+		// Verify all context values are set correctly
+		assert.Equal(t, uint64(5), c.GetUint64("uid"), "UID from JWT")
+		assert.Equal(t, uint64(2), c.GetUint64("rid"), "RID from JWT")
+		assert.Equal(t, "user5hash", c.GetString("uhash"), "UHASH from JWT")
+		assert.Equal(t, "automation", c.GetString("cpt"), "CPT from JWT")
+		assert.Equal(t, "api", c.GetString("tid"), "TID should be 'api' for API tokens")
+
+		prms := c.GetStringSlice("prm")
+		assert.Contains(t, prms, auth.PrivilegeAutomation, "Should have automation privilege")
+
+		// Verify session timing fields
+		gtm := c.GetInt64("gtm")
+		assert.Greater(t, gtm, int64(0), "GTM (generation time) should be set")
+
+		exp := c.GetInt64("exp")
+		assert.Greater(t, exp, gtm, "EXP (expiration time) should be greater than GTM")
+
+		// UUID might be empty if hash is invalid (which is expected in tests)
+		uuid := c.GetString("uuid")
+		assert.NotNil(t, uuid, "UUID should be set (even if empty)")
+	})
+
+	assert.True(t, server.CallAndGetStatus(t, "Bearer "+tokenString))
+}
+
+// TestUserHashValidation_CookieAuth tests uhash validation with cookie authentication
+func TestUserHashValidation_CookieAuth(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+
+	server := newTestServer(t, "/test", db, authMiddleware.AuthUserRequired)
+	defer server.Close()
+
+	// Create test user with ID=1 and hash="123" to match session
+	var count int
+	db.Model(&models.User{}).Where("id = ?", 1).Count(&count)
+	testUser := models.User{
+		ID:     1,
+		Hash:   "123",
+		Mail:   "test_user@example.com",
+		Name:   "Test User",
+		Status: models.UserStatusActive,
+		RoleID: 2,
+	}
+	if count == 0 {
+		err := db.Create(&testUser).Error
+		require.NoError(t, err)
+	} else {
+		db.First(&testUser, 1)
+	}
+
+	t.Run("correct uhash succeeds", func(t *testing.T) {
+		server.Authorize(t, []string{"test.permission"})
+		assert.True(t, server.CallAndGetStatus(t))
+	})
+
+	t.Run("modified uhash in database fails", func(t *testing.T) {
+		// Update user hash in database
+		db.Model(&testUser).Where("id = ?", 1).Update("hash", "modified_hash")
+		userCache.Invalidate(1)
+
+		// Try to authenticate with old session (has hash="123")
+		assert.False(t, server.CallAndGetStatus(t))
+	})
+
+	t.Run("blocked user fails", func(t *testing.T) {
+		// Restore original hash
+		db.Model(&testUser).Where("id = ?", 1).Update("hash", "123")
+		// Block user
+		db.Model(&testUser).Where("id = ?", 1).Update("status", models.UserStatusBlocked)
+		userCache.Invalidate(1)
+
+		assert.False(t, server.CallAndGetStatus(t))
+	})
+
+	t.Run("deleted user fails", func(t *testing.T) {
+		// Undelete and unblock first
+		db.Model(&models.User{}).Unscoped().Where("id = ?", 1).Update("deleted_at", nil)
+		db.Model(&testUser).Where("id = ?", 1).Update("status", models.UserStatusActive)
+
+		// Delete user
+		db.Delete(&testUser, 1)
+		userCache.Invalidate(1)
+
+		assert.False(t, server.CallAndGetStatus(t))
+	})
+}
+
+// TestUserHashValidation_TokenAuth tests uhash validation with token authentication
+func TestUserHashValidation_TokenAuth(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test_salt", tokenCache, userCache)
+
+	server := newTestServer(t, "/protected", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// Create test user
+	testUser := models.User{
+		ID:     200,
+		Hash:   "token_test_hash",
+		Mail:   "token_user@example.com",
+		Name:   "Token Test User",
+		Status: models.UserStatusActive,
+		RoleID: 2,
+	}
+	err := db.Create(&testUser).Error
+	require.NoError(t, err)
+
+	// Create API token
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  200,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	// Create JWT token with correct hash
+	claims := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     200,
+		UHASH:   "token_test_hash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
+	tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test_salt"))
+	require.NoError(t, err)
+
+	t.Run("correct uhash succeeds", func(t *testing.T) {
+		success := server.CallAndGetStatus(t, "Bearer "+tokenString)
+		assert.True(t, success)
+	})
+
+	t.Run("modified uhash in database fails", func(t *testing.T) {
+		// Update user hash in database
+		db.Model(&testUser).Update("hash", "different_hash")
+		userCache.Invalidate(200)
+
+		// Try to authenticate with token (has original hash)
+		success := server.CallAndGetStatus(t, "Bearer "+tokenString)
+		assert.False(t, success)
+	})
+
+	t.Run("blocked user fails", func(t *testing.T) {
+		// Restore original hash
+		db.Model(&testUser).Update("hash", "token_test_hash")
+		// Block user
+		db.Model(&testUser).Update("status", models.UserStatusBlocked)
+		userCache.Invalidate(200)
+
+		success := server.CallAndGetStatus(t, "Bearer "+tokenString)
+		assert.False(t, success)
+	})
+
+	t.Run("deleted user fails", func(t *testing.T) {
+		// Unblock and restore for clean state
+		db.Model(&models.User{}).Unscoped().Where("id = ?", 200).Update("deleted_at", nil)
+		db.Model(&testUser).Update("status", models.UserStatusActive)
+
+		// Delete user
+		db.Delete(&testUser)
+		userCache.Invalidate(200)
+
+		success := server.CallAndGetStatus(t, "Bearer "+tokenString)
+		assert.False(t, success)
+	})
+}
+
+// TestUserHashValidation_CrossInstallation simulates different installations
+func TestUserHashValidation_CrossInstallation(t *testing.T) {
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test_salt", tokenCache, userCache)
+
+	server := newTestServer(t, "/protected", db, authMiddleware.AuthTokenRequired)
+	defer server.Close()
+
+	// Simulate Installation A
+	userInstallationA := models.User{
+		ID:     300,
+		Hash:   "installation_a_hash",
+		Mail:   "cross@example.com",
+		Name:   "Cross Installation User",
+		Status: models.UserStatusActive,
+		RoleID: 2,
+	}
+	err := db.Create(&userInstallationA).Error
+	require.NoError(t, err)
+
+	// Create API token for Installation A
+	tokenID, err := auth.GenerateTokenID()
+	require.NoError(t, err)
+	apiToken := models.APIToken{
+		TokenID: tokenID,
+		UserID:  300,
+		RoleID:  2,
+		TTL:     3600,
+		Status:  models.TokenStatusActive,
+	}
+	err = db.Create(&apiToken).Error
+	require.NoError(t, err)
+
+	// Create JWT token with Installation A hash
+	claimsA := models.APITokenClaims{
+		TokenID: tokenID,
+		RID:     2,
+		UID:     300,
+		UHASH:   "installation_a_hash",
+		RegisteredClaims: jwt.RegisteredClaims{
+			ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+			IssuedAt:  jwt.NewNumericDate(time.Now()),
+			Subject:   "api_token",
+		},
+	}
+	tokenA := jwt.NewWithClaims(jwt.SigningMethodHS256, claimsA)
+	tokenStringA, err := tokenA.SignedString(auth.MakeJWTSigningKey("test_salt"))
+	require.NoError(t, err)
+
+	t.Run("token works on Installation A", func(t *testing.T) {
+		success := server.CallAndGetStatus(t, "Bearer "+tokenStringA)
+		assert.True(t, success)
+	})
+
+	t.Run("token from Installation A fails on Installation B", func(t *testing.T) {
+		// Simulate Installation B - user has different hash
+		db.Model(&userInstallationA).Update("hash", "installation_b_hash")
+		userCache.Invalidate(300)
+
+		// Try to use token from Installation A (has installation_a_hash)
+		success := server.CallAndGetStatus(t, "Bearer "+tokenStringA)
+		assert.False(t, success, "Token from Installation A should not work on Installation B")
+	})
+
+	t.Run("new token from Installation B works", func(t *testing.T) {
+		// Create new token for Installation B
+		tokenIDB, err := auth.GenerateTokenID()
+		require.NoError(t, err)
+		apiTokenB := models.APIToken{
+			TokenID: tokenIDB,
+			UserID:  300,
+			RoleID:  2,
+			TTL:     3600,
+			Status:  models.TokenStatusActive,
+		}
+		err = db.Create(&apiTokenB).Error
+		require.NoError(t, err)
+
+		// Create JWT token with Installation B hash
+		claimsB := models.APITokenClaims{
+			TokenID: tokenIDB,
+			RID:     2,
+			UID:     300,
+			UHASH:   "installation_b_hash", // correct hash for Installation B
+			RegisteredClaims: jwt.RegisteredClaims{
+				ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)),
+				IssuedAt:  jwt.NewNumericDate(time.Now()),
+				Subject:   "api_token",
+			},
+		}
+		tokenB := jwt.NewWithClaims(jwt.SigningMethodHS256, claimsB)
+		tokenStringB, err := tokenB.SignedString(auth.MakeJWTSigningKey("test_salt"))
+		require.NoError(t, err)
+
+		// Token from Installation B should work
+		success := server.CallAndGetStatus(t, "Bearer "+tokenStringB)
+		assert.True(t, success, "New token from Installation B should work")
+	})
+}
diff --git a/backend/pkg/server/auth/permissions.go b/backend/pkg/server/auth/permissions.go
index acdc75b2..eb39244e 100644
--- a/backend/pkg/server/auth/permissions.go
+++ b/backend/pkg/server/auth/permissions.go
@@ -2,6 +2,7 @@ package auth
 
 import (
 	"fmt"
+	"slices"
 
 	"pentagi/pkg/server/response"
 
@@ -29,7 +30,7 @@ func PrivilegesRequired(privs ...string) gin.HandlerFunc {
 			return
 		}
 
-		for _, priv := range append([]string{}, privs...) {
+		for _, priv := range privs {
 			if !LookupPerm(prms, priv) {
 				response.Error(c, response.ErrPrivilegesRequired, fmt.Errorf("'%s' is not set", priv))
 				c.Abort()
@@ -41,10 +42,5 @@ func PrivilegesRequired(privs ...string) gin.HandlerFunc {
 	}
 }
 
 func LookupPerm(prm []string, perm string) bool {
-	for _, p := range prm {
-		if p == perm {
-			return true
-		}
-	}
-	return false
+	return slices.Contains(prm, perm)
 }
diff --git a/backend/pkg/server/auth/permissions_test.go b/backend/pkg/server/auth/permissions_test.go
index e5b6ee02..4fb3b1d3 100644
--- a/backend/pkg/server/auth/permissions_test.go
+++ b/backend/pkg/server/auth/permissions_test.go
@@ -1,6 +1,7 @@
-package auth
+package auth_test
 
 import (
+	"pentagi/pkg/server/auth"
 	"testing"
 
 	"github.com/gin-gonic/gin"
@@ -8,8 +9,13 @@ import (
 )
 
 func TestPrivilegesRequired(t *testing.T) {
-	authMiddleware := NewAuthMiddleware("/base/url", "test")
-	server := newTestServer(t, "/test", authMiddleware.AuthRequired, PrivilegesRequired("priv1", "priv2"))
+	db := setupTestDB(t)
+	defer db.Close()
+
+	tokenCache := auth.NewTokenCache(db)
+	userCache := auth.NewUserCache(db)
+	authMiddleware := auth.NewAuthMiddleware("/base/url", "test", tokenCache, userCache)
+	server := newTestServer(t, "/test", db, authMiddleware.AuthTokenRequired, auth.PrivilegesRequired("priv1", "priv2"))
 	defer server.Close()
 
 	server.SetSessionCheckFunc(func(t *testing.T, c *gin.Context) {
diff --git a/backend/pkg/server/auth/proto.go b/backend/pkg/server/auth/proto.go
deleted file mode 100644
index c4094b01..00000000
--- a/backend/pkg/server/auth/proto.go
+++ /dev/null
@@ -1,119 +0,0 @@
-package auth
-
-import (
-	"fmt"
-	"net/http"
-	"time"
-
-	"pentagi/pkg/server/logger"
-	"pentagi/pkg/server/models"
-	"pentagi/pkg/server/response"
-
-	"github.com/gin-gonic/gin"
-	"github.com/golang-jwt/jwt/v5"
-)
-
-type ProtoService struct {
-	globalSalt string
-}
-
-func NewProtoService(globalSalt string) *ProtoService {
-	return &ProtoService{
-		globalSalt: globalSalt,
-	}
-}
-
-// CreateAuthToken is a function to create new JWT token to authorize automation requests
-// @Summary Create new JWT token to use it into automation connections
-// @Tags Proto
-// @Accept json
-// @Produce json
-// @Param json body models.ProtoAuthTokenRequest true "Proto auth token request JSON data"
-// @Success 201 {object} response.successResp{data=models.ProtoAuthToken} "token created successful"
-// @Failure 400 {object} response.errorResp "invalid requested token info"
-// @Failure 403 {object} response.errorResp "creating token not permitted"
-// @Failure 500 {object} response.errorResp "internal error on creating token"
-// @Router /token [post]
-func (p *ProtoService) CreateAuthToken(c *gin.Context) {
-	var req models.ProtoAuthTokenRequest
-	if err := c.ShouldBindJSON(&req); err != nil {
-		logger.FromContext(c).WithError(err).Errorf("error binding JSON")
-		response.Error(c, response.ErrProtoInvalidRequest, err)
-		return
-	}
-	if err := req.Valid(); err != nil {
-		logger.FromContext(c).WithError(err).Errorf("error validating JSON")
-		response.Error(c, response.ErrProtoInvalidRequest, err)
-		return
-	}
-
-	token, err := p.MakeToken(c, &req)
-	if err != nil {
-		logger.FromContext(c).WithError(err).Errorf("error on making token")
-		response.Error(c, response.ErrProtoCreateTokenFail, err)
-		return
-	}
-	if _, err = ValidateToken(token, p.globalSalt); err != nil {
-		logger.FromContext(c).WithError(err).Errorf("error on validating token")
-		response.Error(c, response.ErrProtoInvalidToken, err)
-		return
-	}
-
-	pat := models.ProtoAuthToken{
-		Token:       token,
-		TTL:         req.TTL,
-		CreatedDate: time.Now(),
-	}
-	response.Success(c, http.StatusCreated, pat)
-}
-
-func (p *ProtoService) MakeToken(c *gin.Context, req *models.ProtoAuthTokenRequest) (string, error) {
-	claims, err := p.makeTokenClaims(c, req.Type)
-	if err != nil {
-		return "", fmt.Errorf("failed to get token claims: %w", err)
-	}
-
-	now := time.Now()
-	claims.RegisteredClaims = jwt.RegisteredClaims{
-		ExpiresAt: jwt.NewNumericDate(now.Add(time.Duration(req.TTL) * time.Second)),
-		IssuedAt:  jwt.NewNumericDate(now),
-		Subject:   "automation",
-	}
-
-	token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
-	tokenString, err := token.SignedString(MakeCookieStoreKey(p.globalSalt)[1])
-	if err != nil {
-		return "", fmt.Errorf("failed to sign token: %w", err)
-	}
-	return tokenString, nil
-}
-
-func (p *ProtoService) makeTokenClaims(c *gin.Context, cpt string) (*models.ProtoAuthTokenClaims, error) {
-	rid := c.GetUint64("rid")
-	if rid == 0 {
-		return nil, fmt.Errorf("input RID invalid %d", rid)
-	}
-
-	uid := c.GetUint64("uid")
-	if uid == 0 {
-		return nil, fmt.Errorf("input UID invalid %d", uid)
-	}
-
-	tid := c.GetString("tid")
-	if tid == "" {
-		return nil, fmt.Errorf("input TID invalid %s", tid)
-	}
-
-	uhash := c.GetString("uhash")
-	if uid == 0 {
-		return nil, fmt.Errorf("input UHASH invalid %d", uid)
-	}
-
-	return &models.ProtoAuthTokenClaims{
-		RID:   rid,
-		UID:   uid,
-		TID:   tid,
-		UHASH: uhash,
-		CPT:   cpt,
-	}, nil
-}
diff --git a/backend/pkg/server/auth/session.go b/backend/pkg/server/auth/session.go
index 57e1a58f..e7563c46 100644
--- a/backend/pkg/server/auth/session.go
+++ b/backend/pkg/server/auth/session.go
@@ -1,41 +1,71 @@
 package auth
 
 import (
-	"crypto/sha256"
 	"crypto/sha512"
-	"encoding/hex"
 	"strings"
 	"sync"
 
-	"pentagi/pkg/system"
-	"pentagi/pkg/version"
+	"golang.org/x/crypto/pbkdf2"
 )
 
 var (
-	cookieStoreKeys [][]byte
-	cookieStoreOnce sync.Once
+	cookieStoreKeys sync.Map // cache of cookie keys per salt
+	jwtSigningKeys  sync.Map // cache of JWT signing keys per salt
+)
+
+const (
+	pbkdf2Iterations = 210000 // OWASP 2023 recommendation
+	jwtKeyLength     = 32     // 256 bits for HS256
+	authKeyLength    = 64     // 512 bits for cookie auth key
+	encKeyLength     = 32     // 256 bits for cookie encryption key
 )
 
 // MakeCookieStoreKey is function to generate auth and encryption keys for cookie store
 func MakeCookieStoreKey(globalSalt string) [][]byte {
-	cookieStoreOnce.Do(func() {
-		baseHash := func(values ...string) string {
-			hash := sha256.Sum256([]byte(strings.Join(values, "|")))
-			return hex.EncodeToString(hash[:])
-		}
-		authKey := strings.Join([]string{
-			baseHash(version.GetBinaryVersion(), "a8d0abae36f749588f4393e6fc292690", globalSalt),
-			system.GetHostID(),
-			globalSalt,
-		}, "|")
-		encKey := strings.Join([]string{
-			baseHash(version.GetBinaryVersion(), "7c9be62adec5076970fa946e78f256e2", globalSalt),
-			system.GetHostID(),
-			globalSalt,
-		}, "|")
-		authKeyBytes := sha512.Sum512([]byte(authKey))
-		encKeyBytes := sha256.Sum256([]byte(encKey))
-		cookieStoreKeys = [][]byte{authKeyBytes[:], encKeyBytes[:]}
-	})
-	return cookieStoreKeys
+	// Check cache for existing keys
+	if cached, ok := cookieStoreKeys.Load(globalSalt); ok {
+		return cached.([][]byte)
+	}
+
+	// Generate new keys for this salt using PBKDF2
+	password := []byte(strings.Join([]string{
+		"a8d0abae36f749588f4393e6fc292690",
+		globalSalt,
+		"7c9be62adec5076970fa946e78f256e2",
+	}, "|"))
+
+	// Auth key (64 bytes) - using salt variant 1
+	authSalt := []byte("pentagi.cookie.auth|" + globalSalt)
+	authKey := pbkdf2.Key(password, authSalt, pbkdf2Iterations, authKeyLength, sha512.New)
+
+	// Encryption key (32 bytes) - using salt variant 2
+	encSalt := []byte("pentagi.cookie.enc|" + globalSalt)
+	encKey := pbkdf2.Key(password, encSalt, pbkdf2Iterations, encKeyLength, sha512.New)
+
+	newKeys := [][]byte{authKey, encKey}
+
+	// Store in cache (LoadOrStore handles concurrent access)
+	actual, _ := cookieStoreKeys.LoadOrStore(globalSalt, newKeys)
+	return actual.([][]byte)
+}
+
+// MakeJWTSigningKey is function to generate signing key for JWT tokens
+func MakeJWTSigningKey(globalSalt string) []byte {
+	// Check cache for existing key
+	if cached, ok := jwtSigningKeys.Load(globalSalt); ok {
+		return cached.([]byte)
+	}
+
+	// Generate new key for this salt using PBKDF2
+	password := []byte(strings.Join([]string{
+		"4c1e9cb77df7f9a58fcc5f52d40af685",
+		globalSalt,
+		"09784e190148d13d48885aa47cf8a297",
+	}, "|"))
+	salt := []byte("pentagi.jwt.signing|" + globalSalt)
+	newKey := pbkdf2.Key(password, salt, pbkdf2Iterations, jwtKeyLength, sha512.New)
+
+	// Store in cache (LoadOrStore handles concurrent access)
+	actual, _ := jwtSigningKeys.LoadOrStore(globalSalt, newKey)
+	return actual.([]byte)
+}
diff --git a/backend/pkg/server/auth/session_test.go b/backend/pkg/server/auth/session_test.go
new file mode 100644
index 00000000..cd739bbe
--- /dev/null
+++ b/backend/pkg/server/auth/session_test.go
@@ -0,0 +1,61 @@
+package auth_test
+
+import (
+	"pentagi/pkg/server/auth"
+	"testing"
+
+	"github.com/stretchr/testify/assert"
+)
+
+func TestMakeJWTSigningKey(t *testing.T) {
+	salt1 := "test_salt_1"
+	salt2 := "test_salt_2"
+
+	// Test that key is generated
+	key1 := auth.MakeJWTSigningKey(salt1)
+	assert.NotNil(t, key1)
+	assert.Len(t, key1, 32, "JWT signing key should be 32 bytes (256 bits)")
+
+	// Test that same salt produces same key (cached)
+	key1Again := auth.MakeJWTSigningKey(salt1)
+	assert.Equal(t, key1, key1Again, "Same salt should produce same key from cache")
+
+	// Test that different salts produce different keys
+	key2 := auth.MakeJWTSigningKey(salt2)
+	assert.NotEqual(t, key1, key2, "Different salts should produce different keys")
+	assert.Len(t, key2, 32, "JWT signing key should be 32 bytes (256 bits)")
+
+	// Verify consistency for salt2
+	key2Again := auth.MakeJWTSigningKey(salt2)
+	assert.Equal(t, key2, key2Again, "Same salt should produce same key from cache")
+}
+
+func TestMakeCookieStoreKey(t *testing.T) {
+	salt := "test_salt"
+
+	// Test that keys are generated
+	keys := auth.MakeCookieStoreKey(salt)
+	assert.NotNil(t, keys)
+	assert.Len(t, keys, 2, "Should return auth and encryption keys")
+
+	// Test that auth key is 64 bytes (SHA512)
+	assert.Len(t, keys[0], 64, "Auth key should be 64 bytes")
+
+	// Test that encryption key is 32 bytes (SHA256)
+	assert.Len(t, keys[1], 32, "Encryption key should be 32 bytes")
+
+	// Test consistency
+	keysAgain := auth.MakeCookieStoreKey(salt)
+	assert.Equal(t, keys, keysAgain, "Same salt should produce same keys")
+}
+
+func TestMakeJWTSigningKeyDifferentFromCookieKey(t *testing.T) {
+	salt := "test_salt"
+
+	jwtKey := auth.MakeJWTSigningKey(salt)
+	cookieKeys := auth.MakeCookieStoreKey(salt)
+
+	// JWT signing key should be different from both cookie keys
+	assert.NotEqual(t, jwtKey, cookieKeys[0], "JWT key should differ from cookie auth key")
+	assert.NotEqual(t, jwtKey, cookieKeys[1], "JWT key should differ from cookie encryption key")
+}
diff --git a/backend/pkg/server/auth/users_cache.go b/backend/pkg/server/auth/users_cache.go
new file mode 100644
index 00000000..a3ec9dc0
--- /dev/null
+++ b/backend/pkg/server/auth/users_cache.go
@@ -0,0 +1,92 @@
+package auth
+
+import (
+	"sync"
+	"time"
+
+	"pentagi/pkg/server/models"
+
+	"github.com/jinzhu/gorm"
+)
+
+// userCacheEntry represents a cached user status entry
+type userCacheEntry struct {
+	hash      string
+	status    models.UserStatus
+	notFound  bool // negative caching
+	expiresAt time.Time
+}
+
+// UserCache provides caching for user hash lookups
+type UserCache struct {
+	cache sync.Map
+	ttl   time.Duration
+	db    *gorm.DB
+}
+
+// NewUserCache creates a new user cache instance
+func NewUserCache(db *gorm.DB) *UserCache {
+	return &UserCache{
+		ttl: 5 * time.Minute,
+		db:  db,
+	}
+}
+
+// SetTTL sets the TTL for the user cache
+func (uc *UserCache) SetTTL(ttl time.Duration) {
+	uc.ttl = ttl
+}
+
+// GetUserHash retrieves user hash and status from cache or database
+func (uc *UserCache) GetUserHash(userID uint64) (string, models.UserStatus, error) {
+	// check cache first
+	if entry, ok := uc.cache.Load(userID); ok {
+		cached := entry.(userCacheEntry)
+		if time.Now().Before(cached.expiresAt) {
+			// return cached "not found" error
+			if cached.notFound {
+				return "", "", gorm.ErrRecordNotFound
+			}
+			return cached.hash, cached.status, nil
+		}
+		// cache entry expired,
remove it + uc.cache.Delete(userID) + } + + // load from database + var user models.User + if err := uc.db.Where("id = ?", userID).First(&user).Error; err != nil { + if gorm.IsRecordNotFoundError(err) { + // cache negative result (user not found) + uc.cache.Store(userID, userCacheEntry{ + notFound: true, + expiresAt: time.Now().Add(uc.ttl), + }) + return "", "", gorm.ErrRecordNotFound + } + return "", "", err + } + + // update cache with positive result + uc.cache.Store(userID, userCacheEntry{ + hash: user.Hash, + status: user.Status, + notFound: false, + expiresAt: time.Now().Add(uc.ttl), + }) + + return user.Hash, user.Status, nil +} + +// Invalidate removes a specific user from cache +func (uc *UserCache) Invalidate(userID uint64) { + uc.cache.Delete(userID) +} + +// InvalidateAll clears the entire cache +func (uc *UserCache) InvalidateAll() { + uc.cache.Range(func(key, value any) bool { + uc.cache.Delete(key) + return true + }) +} diff --git a/backend/pkg/server/auth/users_cache_test.go b/backend/pkg/server/auth/users_cache_test.go new file mode 100644 index 00000000..9de662b3 --- /dev/null +++ b/backend/pkg/server/auth/users_cache_test.go @@ -0,0 +1,431 @@ +package auth_test + +import ( + "fmt" + "sync" + "testing" + "time" + + "pentagi/pkg/server/auth" + "pentagi/pkg/server/models" + + "github.com/jinzhu/gorm" + _ "github.com/jinzhu/gorm/dialects/sqlite" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func setupUserTestDB(t *testing.T) *gorm.DB { + t.Helper() + db, err := gorm.Open("sqlite3", ":memory:") + require.NoError(t, err) + + // Create users table + db.Exec(` + CREATE TABLE users ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + hash TEXT NOT NULL UNIQUE, + type TEXT NOT NULL DEFAULT 'local', + mail TEXT NOT NULL UNIQUE, + name TEXT NOT NULL DEFAULT '', + status TEXT NOT NULL DEFAULT 'active', + role_id INTEGER NOT NULL DEFAULT 2, + password TEXT, + password_change_required BOOLEAN NOT NULL DEFAULT false, + provider 
TEXT, + created_at DATETIME DEFAULT CURRENT_TIMESTAMP, + deleted_at DATETIME + ) + `) + + time.Sleep(200 * time.Millisecond) // wait for database to be ready + + return db +} + +func TestUserCache_GetUserHash(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + // Insert test user + user := models.User{ + ID: 1, + Hash: "test_hash_123", + Mail: "test@example.com", + Name: "Test User", + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + + // Test: Get user hash (should hit database) + hash, status, err := cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_123", hash) + assert.Equal(t, models.UserStatusActive, status) + + // Test: Get user hash again (should hit cache) + hash, status, err = cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_123", hash) + assert.Equal(t, models.UserStatusActive, status) + + // Test: Non-existent user + _, _, err = cache.GetUserHash(999) + assert.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) +} + +func TestUserCache_Invalidate(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + // Insert test user + user := models.User{ + ID: 1, + Hash: "test_hash_456", + Mail: "test2@example.com", + Name: "Test User 2", + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + + // Get hash to populate cache + hash, status, err := cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_456", hash) + assert.Equal(t, models.UserStatusActive, status) + + // Update user in database + db.Model(&user).Update("status", models.UserStatusBlocked) + + // Status should still be active (from cache) + hash, status, err = cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_456", hash) + assert.Equal(t, models.UserStatusActive, status) + + // 
Invalidate cache + cache.Invalidate(1) + + // Status should now be blocked (from database) + hash, status, err = cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_456", hash) + assert.Equal(t, models.UserStatusBlocked, status) +} + +func TestUserCache_Expiration(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + // Create cache with very short TTL for testing + cache := auth.NewUserCache(db) + cache.SetTTL(300 * time.Millisecond) + + // Insert test user + user := models.User{ + ID: 1, + Hash: "test_hash_789", + Mail: "test3@example.com", + Name: "Test User 3", + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + + // Get hash to populate cache + hash, status, err := cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_789", hash) + assert.Equal(t, models.UserStatusActive, status) + + // Update user in database + db.Model(&user).Update("status", models.UserStatusBlocked) + + // Wait for cache to expire + time.Sleep(500 * time.Millisecond) + + // Status should now be blocked (cache expired, reading from DB) + hash, status, err = cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "test_hash_789", hash) + assert.Equal(t, models.UserStatusBlocked, status) +} + +func TestUserCache_UserStatuses(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + testCases := []struct { + name string + userStatus models.UserStatus + expectedStatus models.UserStatus + }{ + { + name: "active user", + userStatus: models.UserStatusActive, + expectedStatus: models.UserStatusActive, + }, + { + name: "blocked user", + userStatus: models.UserStatusBlocked, + expectedStatus: models.UserStatusBlocked, + }, + { + name: "created user", + userStatus: models.UserStatusCreated, + expectedStatus: models.UserStatusCreated, + }, + } + + for i, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + user := 
models.User{ + ID: uint64(i + 1), + Hash: "hash_" + tc.name, + Mail: tc.name + "@example.com", + Name: tc.name, + Status: tc.userStatus, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + + hash, status, err := cache.GetUserHash(user.ID) + require.NoError(t, err) + assert.Equal(t, user.Hash, hash) + assert.Equal(t, tc.expectedStatus, status) + }) + } +} + +func TestUserCache_DeletedUser(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + // Insert test user + user := models.User{ + ID: 1, + Hash: "deleted_hash", + Mail: "deleted@example.com", + Name: "Deleted User", + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + + // Get hash to populate cache + hash, status, err := cache.GetUserHash(1) + require.NoError(t, err) + assert.Equal(t, "deleted_hash", hash) + assert.Equal(t, models.UserStatusActive, status) + + // Soft delete user + db.Delete(&user) + + // Invalidate cache + cache.Invalidate(1) + + // Should return error for deleted user + _, _, err = cache.GetUserHash(1) + assert.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) +} + +func TestUserCache_ConcurrentAccess(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + // Insert test users + for i := 1; i <= 10; i++ { + user := models.User{ + ID: uint64(i), + Hash: fmt.Sprintf("concurrent_hash_%d", i), + Mail: fmt.Sprintf("concurrent%d@example.com", i), + Name: "Concurrent User", + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + } + + // warm up cache + for i := range 10 { + _, _, err := cache.GetUserHash(uint64(i%10 + 1)) + require.NoError(t, err) + } + + var wg sync.WaitGroup + errors := make(chan error, 100) + + // Concurrent reads + for i := range 10 { + wg.Add(1) + go func(userID uint64) { + defer wg.Done() + for range 10 { + _, _, err := 
cache.GetUserHash(userID) + if err != nil { + errors <- err + } + } + }(uint64(i%10 + 1)) + } + + // Concurrent invalidations + for i := range 5 { + wg.Add(1) + go func(userID uint64) { + defer wg.Done() + for range 5 { + cache.Invalidate(userID) + time.Sleep(10 * time.Millisecond) + } + }(uint64(i%10 + 1)) + } + + wg.Wait() + close(errors) + + // Check for errors + for err := range errors { + t.Errorf("Concurrent access error: %v", err) + } +} + +func TestUserCache_InvalidateAll(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + + // Insert multiple users + for i := 1; i <= 5; i++ { + user := models.User{ + ID: uint64(i), + Hash: fmt.Sprintf("invalidate_all_%d", i), + Mail: fmt.Sprintf("all%d@example.com", i), + Name: fmt.Sprintf("User %d", i), + Status: models.UserStatusActive, + RoleID: 2, + } + err := db.Create(&user).Error + require.NoError(t, err) + } + + // Populate cache + for i := 1; i <= 5; i++ { + _, _, err := cache.GetUserHash(uint64(i)) + require.NoError(t, err) + } + + // Update all users in database + db.Model(&models.User{}).Where("id > 0").Update("status", models.UserStatusBlocked) + + // Invalidate all + cache.InvalidateAll() + + // All users should now show blocked status + for i := 1; i <= 5; i++ { + _, status, err := cache.GetUserHash(uint64(i)) + require.NoError(t, err) + assert.Equal(t, models.UserStatusBlocked, status) + } +} + +func TestUserCache_NegativeCaching(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + nonExistentUserID := uint64(9999) + + // First call - should hit database and cache the "not found" + _, _, err := cache.GetUserHash(nonExistentUserID) + require.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) + + // Second call - should return from cache without hitting DB + _, _, err = cache.GetUserHash(nonExistentUserID) + require.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err, "Should return cached not found 
error") + + // Now create the user in DB + user := models.User{ + ID: nonExistentUserID, + Hash: "new_user_hash", + Mail: "new@example.com", + Name: "New User", + Status: models.UserStatusActive, + RoleID: 2, + } + err = db.Create(&user).Error + require.NoError(t, err) + + // Should still return cached "not found" until invalidated + _, _, err = cache.GetUserHash(nonExistentUserID) + require.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err, "Should still return cached not found") + + // Invalidate cache + cache.Invalidate(nonExistentUserID) + + // Now should find the user + hash, status, err := cache.GetUserHash(nonExistentUserID) + require.NoError(t, err) + assert.Equal(t, "new_user_hash", hash) + assert.Equal(t, models.UserStatusActive, status) +} + +func TestUserCache_NegativeCachingExpiration(t *testing.T) { + db := setupUserTestDB(t) + defer db.Close() + + cache := auth.NewUserCache(db) + cache.SetTTL(300 * time.Millisecond) + + nonExistentUserID := uint64(8888) + + // First call - cache the "not found" + _, _, err := cache.GetUserHash(nonExistentUserID) + require.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) + + // Create user in DB + user := models.User{ + ID: nonExistentUserID, + Hash: "temp_user_hash", + Mail: "temp@example.com", + Name: "Temp User", + Status: models.UserStatusActive, + RoleID: 2, + } + err = db.Create(&user).Error + require.NoError(t, err) + + // Wait for cache to expire + time.Sleep(500 * time.Millisecond) + + // Now should find the user (cache expired) + hash, status, err := cache.GetUserHash(nonExistentUserID) + require.NoError(t, err) + assert.Equal(t, "temp_user_hash", hash) + assert.Equal(t, models.UserStatusActive, status) +} diff --git a/backend/pkg/server/docs/docs.go b/backend/pkg/server/docs/docs.go index 4d3dfd6f..d43abcc1 100644 --- a/backend/pkg/server/docs/docs.go +++ b/backend/pkg/server/docs/docs.go @@ -27,6 +27,11 @@ const docTemplate = `{ "paths": { "/agentlogs/": { "get": { + "security": [ + { 
+ "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -41,7 +46,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -67,16 +72,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -136,6 +142,11 @@ const docTemplate = `{ }, "/assistantlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -150,7 +161,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -176,16 +187,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -499,6 +511,11 @@ const docTemplate = `{ }, "/containers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -513,7 +530,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -539,16 +556,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -608,6 +626,11 @@ const docTemplate = `{ }, "/flows/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -622,7 +645,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -648,16 +671,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -715,6 +739,11 @@ const docTemplate = `{ } }, "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -778,6 +807,11 @@ const docTemplate = `{ }, "/flows/{flowID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -835,6 +869,11 @@ const docTemplate = `{ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -904,6 +943,11 @@ const docTemplate = `{ } }, "delete": { + "security": [ + { + "BearerAuth": [] + } + ], "tags": [ "Flows" ], @@ -960,6 +1004,11 @@ const docTemplate = `{ }, "/flows/{flowID}/agentlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -982,7 +1031,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1008,16 +1057,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1077,6 +1127,11 @@ const docTemplate = `{ }, "/flows/{flowID}/assistantlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1099,7 +1154,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1125,16 +1180,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1194,6 +1250,11 @@ const docTemplate = `{ }, "/flows/{flowID}/assistants/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1216,7 +1277,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1242,16 +1303,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1309,6 +1371,11 @@ const docTemplate = `{ } }, "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -1380,6 +1447,11 @@ const docTemplate = `{ }, "/flows/{flowID}/assistants/{assistantID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1445,6 +1517,11 @@ const docTemplate = `{ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -1522,6 +1599,11 @@ const docTemplate = `{ } }, "delete": { + "security": [ + { + "BearerAuth": [] + } + ], "tags": [ "Assistants" ], @@ -1586,6 +1668,11 @@ const docTemplate = `{ }, "/flows/{flowID}/containers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1608,7 +1695,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1634,16 +1721,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1703,6 +1791,11 @@ const docTemplate = `{ }, "/flows/{flowID}/containers/{containerID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1770,6 +1863,11 @@ const docTemplate = `{ }, "/flows/{flowID}/graph": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1829,6 +1927,11 @@ const docTemplate = `{ }, "/flows/{flowID}/msglogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1851,7 +1954,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1877,16 +1980,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1946,6 +2050,11 @@ const docTemplate = `{ }, "/flows/{flowID}/screenshots/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1968,7 +2077,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1994,16 +2103,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2063,6 +2173,11 @@ const docTemplate = `{ }, "/flows/{flowID}/screenshots/{screenshotID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2130,6 +2245,11 @@ const docTemplate = `{ }, "/flows/{flowID}/screenshots/{screenshotID}/file": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "image/png", "application/json" @@ -2180,6 +2300,11 @@ const docTemplate = `{ }, "/flows/{flowID}/searchlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2202,7 +2327,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2228,16 +2353,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2297,6 +2423,11 @@ const docTemplate = `{ }, "/flows/{flowID}/subtasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2319,7 +2450,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2345,16 +2476,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2414,6 +2546,11 @@ const docTemplate = `{ }, "/flows/{flowID}/tasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2436,7 +2573,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2462,16 +2599,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2531,6 +2669,11 @@ const docTemplate = `{ }, "/flows/{flowID}/tasks/{taskID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2598,6 +2741,11 @@ const docTemplate = `{ }, "/flows/{flowID}/tasks/{taskID}/graph": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2665,6 +2813,11 @@ const docTemplate = `{ }, "/flows/{flowID}/tasks/{taskID}/subtasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2695,7 +2848,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2721,16 +2874,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2790,6 +2944,11 @@ const docTemplate = `{ }, "/flows/{flowID}/tasks/{taskID}/subtasks/{subtaskID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2865,7 +3024,12 @@ const docTemplate = `{ }, "/flows/{flowID}/termlogs/": { "get": { - "produces": [ + "security": [ + { + "BearerAuth": [] + } + ], + "produces": [ "application/json" ], "tags": [ @@ -2887,7 +3051,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2913,16 +3077,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2982,6 +3147,11 @@ const docTemplate = `{ }, "/flows/{flowID}/usage": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get comprehensive analytics for a single flow including all breakdowns", "produces": [ "application/json" @@ -3049,6 +3219,11 @@ const docTemplate = `{ }, "/flows/{flowID}/vecstorelogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3071,7 +3246,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3097,16 +3272,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3166,6 +3342,11 @@ const docTemplate = `{ }, "/graphql": { "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -3217,6 +3398,11 @@ const docTemplate = `{ }, "/info": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3274,6 +3460,11 @@ const docTemplate = `{ }, "/msglogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3288,7 +3479,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3314,16 +3505,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3383,6 +3575,11 @@ const docTemplate = `{ }, "/prompts/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3397,7 +3594,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3423,16 +3620,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3492,6 +3690,11 @@ const docTemplate = `{ }, "/prompts/{promptType}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3554,6 +3757,11 @@ const docTemplate = `{ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -3601,6 +3809,24 @@ const docTemplate = `{ ] } }, + "201": { + "description": "prompt created successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.Prompt" + } + } + } + ] + } + }, "400": { "description": "invalid prompt request data", "schema": { @@ -3626,10 +3852,70 @@ const docTemplate = `{ } } } + }, + "delete": { + "security": [ + { + "BearerAuth": [] + } + ], + "produces": [ + "application/json" + ], + "tags": [ + "Prompts" + ], + "summary": "Delete prompt by type", + "parameters": [ + { + "type": "string", + "description": "prompt type", + "name": "promptType", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "prompt deleted successful", + "schema": { + "$ref": "#/definitions/SuccessResponse" + } + }, + "400": { + "description": "invalid prompt request data", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "deleting prompt not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "404": { + "description": "prompt not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on deleting prompt", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } } }, "/prompts/{promptType}/default": { "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ 
"application/json" ], @@ -3668,6 +3954,24 @@ const docTemplate = `{ ] } }, + "201": { + "description": "prompt created with default value successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.Prompt" + } + } + } + ] + } + }, "400": { "description": "invalid prompt request data", "schema": { @@ -3697,6 +4001,11 @@ const docTemplate = `{ }, "/providers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3748,7 +4057,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3774,16 +4083,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3901,6 +4211,11 @@ const docTemplate = `{ }, "/screenshots/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3915,7 +4230,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3941,16 +4256,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4010,6 +4326,11 @@ const docTemplate = `{ }, "/searchlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4024,7 +4345,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4050,16 +4371,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4119,6 +4441,11 @@ const docTemplate = `{ }, "/termlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4133,7 +4460,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4159,16 +4486,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4226,7 +4554,48 @@ const docTemplate = `{ } } }, - "/token": { + "/tokens": { + "get": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "List API tokens", + "responses": { + "200": { + "description": "tokens retrieved successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/services.tokens" + } + } + } + ] + } + }, + "403": { + "description": "listing tokens not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on listing tokens", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, "post": { "consumes": [ "application/json" @@ -4235,17 +4604,17 @@ const docTemplate = `{ "application/json" ], "tags": [ - "Proto" + "Tokens" ], - "summary": "Create new JWT token to use it into automation connections", + "summary": "Create new API token for automation", "parameters": [ { - "description": "Proto auth token request JSON data", + "description": "Token creation request", "name": "json", "in": "body", "required": true, "schema": { - "$ref": "#/definitions/models.ProtoAuthTokenRequest" + "$ref": "#/definitions/models.CreateAPITokenRequest" } } ], @@ -4261,27 +4630,203 @@ const docTemplate = `{ "type": "object", "properties": { "data": { - "$ref": "#/definitions/models.ProtoAuthToken" + "$ref": "#/definitions/models.APITokenWithSecret" + } + } + } + ] + } + }, + "400": { + "description": "invalid token request or default salt", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "creating token not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on 
creating token", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + } + }, + "/tokens/{tokenID}": { + "get": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Get API token details", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "token retrieved successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.APIToken" + } + } + } + ] + } + }, + "403": { + "description": "accessing token not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "404": { + "description": "token not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on getting token", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, + "put": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Update API token", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + }, + { + "description": "Token update request", + "name": "json", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/models.UpdateAPITokenRequest" + } + } + ], + "responses": { + "200": { + "description": "token updated successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.APIToken" } } } ] } }, - "400": { - "description": "invalid requested token info", + "400": { + "description": "invalid update request", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "updating token not permitted", + "schema": { + 
"$ref": "#/definitions/ErrorResponse" + } + }, + "404": { + "description": "token not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on updating token", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, + "delete": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Delete API token", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "token deleted successful", + "schema": { + "$ref": "#/definitions/SuccessResponse" + } + }, + "403": { + "description": "deleting token not permitted", "schema": { "$ref": "#/definitions/ErrorResponse" } }, - "403": { - "description": "creating token not permitted", + "404": { + "description": "token not found", "schema": { "$ref": "#/definitions/ErrorResponse" } }, "500": { - "description": "internal error on creating token", + "description": "internal error on deleting token", "schema": { "$ref": "#/definitions/ErrorResponse" } @@ -4291,6 +4836,11 @@ const docTemplate = `{ }, "/usage": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get comprehensive analytics for all user's flows including usage, toolcalls, and structural stats", "produces": [ "application/json" @@ -4335,6 +4885,11 @@ const docTemplate = `{ }, "/usage/{period}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get time-series analytics data for week, month, or quarter", "produces": [ "application/json" @@ -4519,7 +5074,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4545,16 +5100,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4857,6 +5413,11 @@ const docTemplate = `{ }, "/vecstorelogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4871,7 +5432,7 @@ const docTemplate = `{ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4897,16 +5458,17 @@ const docTemplate = `{ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -5094,6 +5656,106 @@ const docTemplate = `{ } } }, + "models.APIToken": { + "type": "object", + "required": [ + "created_at", + "status", + "token_id", + "ttl", + "updated_at" + ], + "properties": { + "created_at": { + "type": "string" + }, + "deleted_at": { + "type": "string" + }, + "id": { + "type": "integer", + "minimum": 0 + }, + "name": { + "type": "string", + "maxLength": 100 + }, + "role_id": { + "type": "integer", + "minimum": 0 + }, + "status": { + "type": "string" + }, + "token_id": { + "type": "string" + }, + "ttl": { + "type": "integer", + "maximum": 94608000, + "minimum": 60 + }, + "updated_at": { + "type": "string" + }, + "user_id": { + "type": "integer", + "minimum": 0 + } + } + }, + "models.APITokenWithSecret": { + "type": "object", + "required": [ + "created_at", + "status", + "token", + "token_id", + "ttl", + "updated_at" + ], + "properties": { + "created_at": { + "type": "string" + }, + "deleted_at": { + "type": "string" + }, + "id": { + "type": "integer", + "minimum": 0 + }, + "name": { + "type": "string", + "maxLength": 100 + }, + "role_id": { + "type": "integer", + "minimum": 0 + }, + "status": { + "type": "string" + }, + "token": { + "type": "string" + }, + "token_id": { + "type": "string" + }, + "ttl": { + "type": "integer", + "maximum": 94608000, + "minimum": 60 + }, + "updated_at": { + "type": "string" + }, + "user_id": { + "type": "integer", + "minimum": 0 + } + } + }, "models.AgentTypeUsageStats": { "type": "object", "required": [ @@ -5113,6 +5775,7 @@ const docTemplate = `{ "type": "object", "required": [ "executor", + "flow_id", "initiator", "task" ], @@ -5138,19 +5801,22 @@ const docTemplate = `{ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task": { "type": "string" }, "task_id": { - "type": 
"integer" + "type": "integer", + "minimum": 0 } } }, "models.Assistant": { "type": "object", "required": [ + "flow_id", "language", "model", "model_provider_name", @@ -5205,6 +5871,7 @@ const docTemplate = `{ "models.AssistantFlow": { "type": "object", "required": [ + "flow_id", "language", "model", "model_provider_name", @@ -5262,7 +5929,8 @@ const docTemplate = `{ "models.Assistantlog": { "type": "object", "required": [ - "message", + "assistant_id", + "flow_id", "result_format", "type" ], @@ -5325,6 +5993,7 @@ const docTemplate = `{ "models.Container": { "type": "object", "required": [ + "flow_id", "image", "local_dir", "local_id", @@ -5367,6 +6036,24 @@ const docTemplate = `{ } } }, + "models.CreateAPITokenRequest": { + "type": "object", + "required": [ + "ttl" + ], + "properties": { + "name": { + "type": "string", + "maxLength": 100 + }, + "ttl": { + "description": "from 1 minute to 3 years", + "type": "integer", + "maximum": 94608000, + "minimum": 60 + } + } + }, "models.CreateAssistant": { "type": "object", "required": [ @@ -5464,7 +6151,8 @@ const docTemplate = `{ "model_provider_name", "model_provider_type", "status", - "title" + "title", + "user_id" ], "properties": { "created_at": { @@ -5569,7 +6257,8 @@ const docTemplate = `{ "model_provider_type", "status", "tasks", - "title" + "title", + "user_id" ], "properties": { "created_at": { @@ -5743,6 +6432,7 @@ const docTemplate = `{ "models.Msglog": { "type": "object", "required": [ + "flow_id", "message", "result_format", "type" @@ -5769,10 +6459,12 @@ const docTemplate = `{ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "thinking": { "type": "string" @@ -5943,48 +6635,6 @@ const docTemplate = `{ } } }, - "models.ProtoAuthToken": { - "type": "object", - "required": [ - "token", - "ttl" - ], - "properties": { - "created_date": { - "type": "string" - }, - "token": { - "type": "string" - }, - 
"ttl": { - "type": "integer", - "maximum": 94608000, - "minimum": 1 - } - } - }, - "models.ProtoAuthTokenRequest": { - "type": "object", - "required": [ - "ttl", - "type" - ], - "properties": { - "ttl": { - "type": "integer", - "default": 31536000, - "maximum": 94608000, - "minimum": 1 - }, - "type": { - "type": "string", - "default": "automation", - "enum": [ - "automation" - ] - } - } - }, "models.ProviderInfo": { "type": "object", "required": [ @@ -6096,6 +6746,7 @@ const docTemplate = `{ "required": [ "engine", "executor", + "flow_id", "initiator", "query" ], @@ -6127,10 +6778,12 @@ const docTemplate = `{ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 } } }, @@ -6139,6 +6792,7 @@ const docTemplate = `{ "required": [ "description", "status", + "task_id", "title" ], "properties": { @@ -6242,6 +6896,7 @@ const docTemplate = `{ "models.Task": { "type": "object", "required": [ + "flow_id", "input", "status", "title" @@ -6307,6 +6962,7 @@ const docTemplate = `{ "models.TaskSubtasks": { "type": "object", "required": [ + "flow_id", "input", "status", "subtasks", @@ -6400,6 +7056,18 @@ const docTemplate = `{ } } }, + "models.UpdateAPITokenRequest": { + "type": "object", + "properties": { + "name": { + "type": "string", + "maxLength": 100 + }, + "status": { + "type": "string" + } + } + }, "models.UsageStats": { "type": "object", "properties": { @@ -6432,8 +7100,8 @@ const docTemplate = `{ "models.User": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6477,9 +7145,9 @@ const docTemplate = `{ "models.UserPassword": { "type": "object", "required": [ - "created_at", "mail", "password", + "role_id", "status", "type" ], @@ -6527,8 +7195,8 @@ const docTemplate = `{ "models.UserRole": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6575,8 +7243,8 @@ const docTemplate = `{ 
"models.UserRolePrivileges": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6626,9 +7294,9 @@ const docTemplate = `{ "action", "executor", "filter", + "flow_id", "initiator", - "query", - "result" + "query" ], "properties": { "action": { @@ -6661,10 +7329,12 @@ const docTemplate = `{ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 } } }, @@ -6888,6 +7558,20 @@ const docTemplate = `{ } } }, + "services.tokens": { + "type": "object", + "properties": { + "tokens": { + "type": "array", + "items": { + "$ref": "#/definitions/models.APIToken" + } + }, + "total": { + "type": "integer" + } + } + }, "services.users": { "type": "object", "properties": { @@ -6986,6 +7670,14 @@ const docTemplate = `{ } } } + }, + "securityDefinitions": { + "BearerAuth": { + "description": "Type \"Bearer\" followed by a space and JWT token.", + "type": "apiKey", + "name": "Authorization", + "in": "header" + } } }` diff --git a/backend/pkg/server/docs/swagger.json b/backend/pkg/server/docs/swagger.json index 7b43b52f..dbd02063 100644 --- a/backend/pkg/server/docs/swagger.json +++ b/backend/pkg/server/docs/swagger.json @@ -19,6 +19,11 @@ "paths": { "/agentlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -33,7 +38,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -59,16 +64,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -128,6 +134,11 @@ }, "/assistantlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -142,7 +153,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -168,16 +179,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -491,6 +503,11 @@ }, "/containers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -505,7 +522,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -531,16 +548,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -600,6 +618,11 @@ }, "/flows/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -614,7 +637,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -640,16 +663,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -707,6 +731,11 @@ } }, "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -770,6 +799,11 @@ }, "/flows/{flowID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -827,6 +861,11 @@ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -896,6 +935,11 @@ } }, "delete": { + "security": [ + { + "BearerAuth": [] + } + ], "tags": [ "Flows" ], @@ -952,6 +996,11 @@ }, "/flows/{flowID}/agentlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -974,7 +1023,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1000,16 +1049,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1069,6 +1119,11 @@ }, "/flows/{flowID}/assistantlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1091,7 +1146,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1117,16 +1172,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1186,6 +1242,11 @@ }, "/flows/{flowID}/assistants/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1208,7 +1269,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1234,16 +1295,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1301,6 +1363,11 @@ } }, "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -1372,6 +1439,11 @@ }, "/flows/{flowID}/assistants/{assistantID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1437,6 +1509,11 @@ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -1514,6 +1591,11 @@ } }, "delete": { + "security": [ + { + "BearerAuth": [] + } + ], "tags": [ "Assistants" ], @@ -1578,6 +1660,11 @@ }, "/flows/{flowID}/containers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1600,7 +1687,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1626,16 +1713,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1695,6 +1783,11 @@ }, "/flows/{flowID}/containers/{containerID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1762,6 +1855,11 @@ }, "/flows/{flowID}/graph": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1821,6 +1919,11 @@ }, "/flows/{flowID}/msglogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1843,7 +1946,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1869,16 +1972,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -1938,6 +2042,11 @@ }, "/flows/{flowID}/screenshots/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -1960,7 +2069,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -1986,16 +2095,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2055,6 +2165,11 @@ }, "/flows/{flowID}/screenshots/{screenshotID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2122,6 +2237,11 @@ }, "/flows/{flowID}/screenshots/{screenshotID}/file": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "image/png", "application/json" @@ -2172,6 +2292,11 @@ }, "/flows/{flowID}/searchlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2194,7 +2319,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2220,16 +2345,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2289,6 +2415,11 @@ }, "/flows/{flowID}/subtasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2311,7 +2442,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2337,16 +2468,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2406,6 +2538,11 @@ }, "/flows/{flowID}/tasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2428,7 +2565,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2454,16 +2591,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2523,6 +2661,11 @@ }, "/flows/{flowID}/tasks/{taskID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2590,6 +2733,11 @@ }, "/flows/{flowID}/tasks/{taskID}/graph": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2657,6 +2805,11 @@ }, "/flows/{flowID}/tasks/{taskID}/subtasks/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2687,7 +2840,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2713,16 +2866,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2782,6 +2936,11 @@ }, "/flows/{flowID}/tasks/{taskID}/subtasks/{subtaskID}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -2857,7 +3016,12 @@ }, "/flows/{flowID}/termlogs/": { "get": { - "produces": [ + "security": [ + { + "BearerAuth": [] + } + ], + "produces": [ "application/json" ], "tags": [ @@ -2879,7 +3043,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -2905,16 +3069,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -2974,6 +3139,11 @@ }, "/flows/{flowID}/usage": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get comprehensive analytics for a single flow including all breakdowns", "produces": [ "application/json" @@ -3041,6 +3211,11 @@ }, "/flows/{flowID}/vecstorelogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3063,7 +3238,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3089,16 +3264,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3158,6 +3334,11 @@ }, "/graphql": { "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -3209,6 +3390,11 @@ }, "/info": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3266,6 +3452,11 @@ }, "/msglogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3280,7 +3471,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3306,16 +3497,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3375,6 +3567,11 @@ }, "/prompts/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3389,7 +3586,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3415,16 +3612,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3484,6 +3682,11 @@ }, "/prompts/{promptType}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3546,6 +3749,11 @@ } }, "put": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -3593,6 +3801,24 @@ ] } }, + "201": { + "description": "prompt created successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.Prompt" + } + } + } + ] + } + }, "400": { "description": "invalid prompt request data", "schema": { @@ -3618,10 +3844,70 @@ } } } + }, + "delete": { + "security": [ + { + "BearerAuth": [] + } + ], + "produces": [ + "application/json" + ], + "tags": [ + "Prompts" + ], + "summary": "Delete prompt by type", + "parameters": [ + { + "type": "string", + "description": "prompt type", + "name": "promptType", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "prompt deleted successful", + "schema": { + "$ref": "#/definitions/SuccessResponse" + } + }, + "400": { + "description": "invalid prompt request data", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "deleting prompt not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "404": { + "description": "prompt not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on deleting prompt", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } } }, "/prompts/{promptType}/default": { "post": { + "security": [ + { + "BearerAuth": [] + } + ], "consumes": [ "application/json" ], @@ -3660,6 +3946,24 @@ ] } }, + "201": { + "description": "prompt created with default 
value successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.Prompt" + } + } + } + ] + } + }, "400": { "description": "invalid prompt request data", "schema": { @@ -3689,6 +3993,11 @@ }, "/providers/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3740,7 +4049,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3766,16 +4075,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -3893,6 +4203,11 @@ }, "/screenshots/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -3907,7 +4222,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -3933,16 +4248,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4002,6 +4318,11 @@ }, "/searchlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4016,7 +4337,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4042,16 +4363,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4111,6 +4433,11 @@ }, "/termlogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4125,7 +4452,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4151,16 +4478,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4218,7 +4546,48 @@ } } }, - "/token": { + "/tokens": { + "get": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "List API tokens", + "responses": { + "200": { + "description": "tokens retrieved successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/services.tokens" + } + } + } + ] + } + }, + "403": { + "description": "listing tokens not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on listing tokens", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, "post": { "consumes": [ "application/json" @@ -4227,17 +4596,17 @@ "application/json" ], "tags": [ - "Proto" + "Tokens" ], - "summary": "Create new JWT token to use it into automation connections", + "summary": "Create new API token for automation", "parameters": [ { - "description": "Proto auth token request JSON data", + "description": "Token creation request", "name": "json", "in": "body", "required": true, "schema": { - "$ref": "#/definitions/models.ProtoAuthTokenRequest" + "$ref": "#/definitions/models.CreateAPITokenRequest" } } ], @@ -4253,27 +4622,203 @@ "type": "object", "properties": { "data": { - "$ref": "#/definitions/models.ProtoAuthToken" + "$ref": "#/definitions/models.APITokenWithSecret" + } + } + } + ] + } + }, + "400": { + "description": "invalid token request or default salt", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "creating token not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on creating token", + "schema": { + "$ref": 
"#/definitions/ErrorResponse" + } + } + } + } + }, + "/tokens/{tokenID}": { + "get": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Get API token details", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "token retrieved successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.APIToken" + } + } + } + ] + } + }, + "403": { + "description": "accessing token not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "404": { + "description": "token not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on getting token", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, + "put": { + "consumes": [ + "application/json" + ], + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Update API token", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + }, + { + "description": "Token update request", + "name": "json", + "in": "body", + "required": true, + "schema": { + "$ref": "#/definitions/models.UpdateAPITokenRequest" + } + } + ], + "responses": { + "200": { + "description": "token updated successful", + "schema": { + "allOf": [ + { + "$ref": "#/definitions/SuccessResponse" + }, + { + "type": "object", + "properties": { + "data": { + "$ref": "#/definitions/models.APIToken" } } } ] } }, - "400": { - "description": "invalid requested token info", + "400": { + "description": "invalid update request", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "403": { + "description": "updating token not permitted", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + 
}, + "404": { + "description": "token not found", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + }, + "500": { + "description": "internal error on updating token", + "schema": { + "$ref": "#/definitions/ErrorResponse" + } + } + } + }, + "delete": { + "produces": [ + "application/json" + ], + "tags": [ + "Tokens" + ], + "summary": "Delete API token", + "parameters": [ + { + "type": "string", + "description": "Token ID", + "name": "tokenID", + "in": "path", + "required": true + } + ], + "responses": { + "200": { + "description": "token deleted successful", + "schema": { + "$ref": "#/definitions/SuccessResponse" + } + }, + "403": { + "description": "deleting token not permitted", "schema": { "$ref": "#/definitions/ErrorResponse" } }, - "403": { - "description": "creating token not permitted", + "404": { + "description": "token not found", "schema": { "$ref": "#/definitions/ErrorResponse" } }, "500": { - "description": "internal error on creating token", + "description": "internal error on deleting token", "schema": { "$ref": "#/definitions/ErrorResponse" } @@ -4283,6 +4828,11 @@ }, "/usage": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get comprehensive analytics for all user's flows including usage, toolcalls, and structural stats", "produces": [ "application/json" @@ -4327,6 +4877,11 @@ }, "/usage/{period}": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "description": "Get time-series analytics data for week, month, or quarter", "produces": [ "application/json" @@ -4511,7 +5066,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4537,16 +5092,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -4849,6 +5405,11 @@ }, "/vecstorelogs/": { "get": { + "security": [ + { + "BearerAuth": [] + } + ], "produces": [ "application/json" ], @@ -4863,7 +5424,7 @@ "type": "string" }, "collectionFormat": "multi", - "description": "Filtering result on server e.g. {\"value\":[...],\"field\":\"...\"}\n field value should be integer or string or array type", + "description": "Filtering result on server e.g. 
{\"value\":[...],\"field\":\"...\",\"operator\":\"...\"}\n field is the unique identifier of the table column, different for each endpoint\n value should be integer or string or array type, \"value\":123 or \"value\":\"string\" or \"value\":[123,456]\n operator value should be one of \u003c,\u003c=,\u003e=,\u003e,=,!=,like,not like,in\n default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at'", "name": "filters[]", "in": "query" }, @@ -4889,16 +5450,17 @@ "default": 5, "description": "Amount items per page (min -1, max 1000, -1 means unlimited)", "name": "pageSize", - "in": "query", - "required": true + "in": "query" }, { - "type": "string", - "default": "{}", - "description": "Sorting result on server e.g. {\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value", - "name": "sort", - "in": "query", - "required": true + "type": "array", + "items": { + "type": "string" + }, + "collectionFormat": "multi", + "description": "Sorting result on server e.g. 
{\"prop\":\"...\",\"order\":\"...\"}\n field order is \"ascending\" or \"descending\" value\n order is required if prop is not empty", + "name": "sort[]", + "in": "query" }, { "enum": [ @@ -5086,6 +5648,106 @@ } } }, + "models.APIToken": { + "type": "object", + "required": [ + "created_at", + "status", + "token_id", + "ttl", + "updated_at" + ], + "properties": { + "created_at": { + "type": "string" + }, + "deleted_at": { + "type": "string" + }, + "id": { + "type": "integer", + "minimum": 0 + }, + "name": { + "type": "string", + "maxLength": 100 + }, + "role_id": { + "type": "integer", + "minimum": 0 + }, + "status": { + "type": "string" + }, + "token_id": { + "type": "string" + }, + "ttl": { + "type": "integer", + "maximum": 94608000, + "minimum": 60 + }, + "updated_at": { + "type": "string" + }, + "user_id": { + "type": "integer", + "minimum": 0 + } + } + }, + "models.APITokenWithSecret": { + "type": "object", + "required": [ + "created_at", + "status", + "token", + "token_id", + "ttl", + "updated_at" + ], + "properties": { + "created_at": { + "type": "string" + }, + "deleted_at": { + "type": "string" + }, + "id": { + "type": "integer", + "minimum": 0 + }, + "name": { + "type": "string", + "maxLength": 100 + }, + "role_id": { + "type": "integer", + "minimum": 0 + }, + "status": { + "type": "string" + }, + "token": { + "type": "string" + }, + "token_id": { + "type": "string" + }, + "ttl": { + "type": "integer", + "maximum": 94608000, + "minimum": 60 + }, + "updated_at": { + "type": "string" + }, + "user_id": { + "type": "integer", + "minimum": 0 + } + } + }, "models.AgentTypeUsageStats": { "type": "object", "required": [ @@ -5105,6 +5767,7 @@ "type": "object", "required": [ "executor", + "flow_id", "initiator", "task" ], @@ -5130,19 +5793,22 @@ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task": { "type": "string" }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 } } }, "models.Assistant": { 
"type": "object", "required": [ + "flow_id", "language", "model", "model_provider_name", @@ -5197,6 +5863,7 @@ "models.AssistantFlow": { "type": "object", "required": [ + "flow_id", "language", "model", "model_provider_name", @@ -5254,7 +5921,8 @@ "models.Assistantlog": { "type": "object", "required": [ - "message", + "assistant_id", + "flow_id", "result_format", "type" ], @@ -5317,6 +5985,7 @@ "models.Container": { "type": "object", "required": [ + "flow_id", "image", "local_dir", "local_id", @@ -5359,6 +6028,24 @@ } } }, + "models.CreateAPITokenRequest": { + "type": "object", + "required": [ + "ttl" + ], + "properties": { + "name": { + "type": "string", + "maxLength": 100 + }, + "ttl": { + "description": "from 1 minute to 3 years", + "type": "integer", + "maximum": 94608000, + "minimum": 60 + } + } + }, "models.CreateAssistant": { "type": "object", "required": [ @@ -5456,7 +6143,8 @@ "model_provider_name", "model_provider_type", "status", - "title" + "title", + "user_id" ], "properties": { "created_at": { @@ -5561,7 +6249,8 @@ "model_provider_type", "status", "tasks", - "title" + "title", + "user_id" ], "properties": { "created_at": { @@ -5735,6 +6424,7 @@ "models.Msglog": { "type": "object", "required": [ + "flow_id", "message", "result_format", "type" @@ -5761,10 +6451,12 @@ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "thinking": { "type": "string" @@ -5935,48 +6627,6 @@ } } }, - "models.ProtoAuthToken": { - "type": "object", - "required": [ - "token", - "ttl" - ], - "properties": { - "created_date": { - "type": "string" - }, - "token": { - "type": "string" - }, - "ttl": { - "type": "integer", - "maximum": 94608000, - "minimum": 1 - } - } - }, - "models.ProtoAuthTokenRequest": { - "type": "object", - "required": [ - "ttl", - "type" - ], - "properties": { - "ttl": { - "type": "integer", - "default": 31536000, - "maximum": 94608000, - 
"minimum": 1 - }, - "type": { - "type": "string", - "default": "automation", - "enum": [ - "automation" - ] - } - } - }, "models.ProviderInfo": { "type": "object", "required": [ @@ -6088,6 +6738,7 @@ "required": [ "engine", "executor", + "flow_id", "initiator", "query" ], @@ -6119,10 +6770,12 @@ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 } } }, @@ -6131,6 +6784,7 @@ "required": [ "description", "status", + "task_id", "title" ], "properties": { @@ -6234,6 +6888,7 @@ "models.Task": { "type": "object", "required": [ + "flow_id", "input", "status", "title" @@ -6299,6 +6954,7 @@ "models.TaskSubtasks": { "type": "object", "required": [ + "flow_id", "input", "status", "subtasks", @@ -6392,6 +7048,18 @@ } } }, + "models.UpdateAPITokenRequest": { + "type": "object", + "properties": { + "name": { + "type": "string", + "maxLength": 100 + }, + "status": { + "type": "string" + } + } + }, "models.UsageStats": { "type": "object", "properties": { @@ -6424,8 +7092,8 @@ "models.User": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6469,9 +7137,9 @@ "models.UserPassword": { "type": "object", "required": [ - "created_at", "mail", "password", + "role_id", "status", "type" ], @@ -6519,8 +7187,8 @@ "models.UserRole": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6567,8 +7235,8 @@ "models.UserRolePrivileges": { "type": "object", "required": [ - "created_at", "mail", + "role_id", "status", "type" ], @@ -6618,9 +7286,9 @@ "action", "executor", "filter", + "flow_id", "initiator", - "query", - "result" + "query" ], "properties": { "action": { @@ -6653,10 +7321,12 @@ "type": "string" }, "subtask_id": { - "type": "integer" + "type": "integer", + "minimum": 0 }, "task_id": { - "type": "integer" + "type": "integer", + "minimum": 0 } } }, @@ -6880,6 +7550,20 @@ } } }, + 
"services.tokens": { + "type": "object", + "properties": { + "tokens": { + "type": "array", + "items": { + "$ref": "#/definitions/models.APIToken" + } + }, + "total": { + "type": "integer" + } + } + }, "services.users": { "type": "object", "properties": { @@ -6978,5 +7662,13 @@ } } } + }, + "securityDefinitions": { + "BearerAuth": { + "description": "Type \"Bearer\" followed by a space and JWT token.", + "type": "apiKey", + "name": "Authorization", + "in": "header" + } } } \ No newline at end of file diff --git a/backend/pkg/server/docs/swagger.yaml b/backend/pkg/server/docs/swagger.yaml index e27364ce..0e2b2f18 100644 --- a/backend/pkg/server/docs/swagger.yaml +++ b/backend/pkg/server/docs/swagger.yaml @@ -87,6 +87,79 @@ definitions: type: string type: array type: object + models.APIToken: + properties: + created_at: + type: string + deleted_at: + type: string + id: + minimum: 0 + type: integer + name: + maxLength: 100 + type: string + role_id: + minimum: 0 + type: integer + status: + type: string + token_id: + type: string + ttl: + maximum: 94608000 + minimum: 60 + type: integer + updated_at: + type: string + user_id: + minimum: 0 + type: integer + required: + - created_at + - status + - token_id + - ttl + - updated_at + type: object + models.APITokenWithSecret: + properties: + created_at: + type: string + deleted_at: + type: string + id: + minimum: 0 + type: integer + name: + maxLength: 100 + type: string + role_id: + minimum: 0 + type: integer + status: + type: string + token: + type: string + token_id: + type: string + ttl: + maximum: 94608000 + minimum: 60 + type: integer + updated_at: + type: string + user_id: + minimum: 0 + type: integer + required: + - created_at + - status + - token + - token_id + - ttl + - updated_at + type: object models.AgentTypeUsageStats: properties: agent_type: @@ -114,13 +187,16 @@ definitions: result: type: string subtask_id: + minimum: 0 type: integer task: type: string task_id: + minimum: 0 type: integer required: - executor + - 
flow_id - initiator - task type: object @@ -156,6 +232,7 @@ definitions: updated_at: type: string required: + - flow_id - language - model - model_provider_name @@ -197,6 +274,7 @@ definitions: updated_at: type: string required: + - flow_id - language - model - model_provider_name @@ -228,7 +306,8 @@ definitions: type: type: string required: - - message + - assistant_id + - flow_id - result_format - type type: object @@ -273,6 +352,7 @@ definitions: updated_at: type: string required: + - flow_id - image - local_dir - local_id @@ -280,6 +360,19 @@ definitions: - status - type type: object + models.CreateAPITokenRequest: + properties: + name: + maxLength: 100 + type: string + ttl: + description: from 1 minute to 3 years + maximum: 94608000 + minimum: 60 + type: integer + required: + - ttl + type: object models.CreateAssistant: properties: functions: @@ -379,6 +472,7 @@ definitions: - model_provider_type - status - title + - user_id type: object models.FlowExecutionStats: properties: @@ -458,6 +552,7 @@ definitions: - status - tasks - title + - user_id type: object models.FlowUsageResponse: properties: @@ -559,14 +654,17 @@ definitions: result_format: type: string subtask_id: + minimum: 0 type: integer task_id: + minimum: 0 type: integer thinking: type: string type: type: string required: + - flow_id - message - result_format - type @@ -682,36 +780,6 @@ definitions: - prompt - type type: object - models.ProtoAuthToken: - properties: - created_date: - type: string - token: - type: string - ttl: - maximum: 94608000 - minimum: 1 - type: integer - required: - - token - - ttl - type: object - models.ProtoAuthTokenRequest: - properties: - ttl: - default: 31536000 - maximum: 94608000 - minimum: 1 - type: integer - type: - default: automation - enum: - - automation - type: string - required: - - ttl - - type - type: object models.ProviderInfo: properties: name: @@ -807,12 +875,15 @@ definitions: result: type: string subtask_id: + minimum: 0 type: integer task_id: + minimum: 0 
type: integer required: - engine - executor + - flow_id - initiator - query type: object @@ -841,6 +912,7 @@ definitions: required: - description - status + - task_id - title type: object models.SubtaskExecutionStats: @@ -909,6 +981,7 @@ definitions: updated_at: type: string required: + - flow_id - input - status - title @@ -958,6 +1031,7 @@ definitions: updated_at: type: string required: + - flow_id - input - status - subtasks @@ -1001,6 +1075,14 @@ definitions: minimum: 0 type: number type: object + models.UpdateAPITokenRequest: + properties: + name: + maxLength: 100 + type: string + status: + type: string + type: object models.UsageStats: properties: total_usage_cache_in: @@ -1049,8 +1131,8 @@ definitions: type: type: string required: - - created_at - mail + - role_id - status - type type: object @@ -1084,9 +1166,9 @@ definitions: type: type: string required: - - created_at - mail - password + - role_id - status - type type: object @@ -1119,8 +1201,8 @@ definitions: type: type: string required: - - created_at - mail + - role_id - status - type type: object @@ -1153,8 +1235,8 @@ definitions: type: type: string required: - - created_at - mail + - role_id - status - type type: object @@ -1181,16 +1263,18 @@ definitions: result: type: string subtask_id: + minimum: 0 type: integer task_id: + minimum: 0 type: integer required: - action - executor - filter + - flow_id - initiator - query - - result type: object services.agentlogs: properties: @@ -1334,6 +1418,15 @@ definitions: total: type: integer type: object + services.tokens: + properties: + tokens: + items: + $ref: '#/definitions/models.APIToken' + type: array + total: + type: integer + type: object services.users: properties: total: @@ -1418,8 +1511,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -1442,16 +1538,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -1488,6 +1585,8 @@ paths: description: internal error on getting agentlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve agentlogs list tags: - Agentlogs @@ -1496,8 +1595,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -1520,16 +1622,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -1566,6 +1669,8 @@ paths: description: internal error on getting assistantlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve assistantlogs list tags: - Assistantlogs @@ -1739,8 +1844,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -1763,16 +1871,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -1809,6 +1918,8 @@ paths: description: internal error on getting containers schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve containers list tags: - Containers @@ -1817,8 +1928,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. 
{"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -1841,16 +1955,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -1887,6 +2002,8 @@ paths: description: internal error on getting flows schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flows list tags: - Flows @@ -1924,6 +2041,8 @@ paths: description: internal error on creating flow schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Create new flow with custom functions tags: - Flows @@ -1958,6 +2077,8 @@ paths: description: internal error on deleting flow schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Delete flow by id tags: - Flows @@ -1993,6 +2114,8 @@ paths: description: internal error on getting flow schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow by id tags: - Flows @@ -2036,6 +2159,8 @@ paths: description: internal error on patching flow schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Patch flow tags: - Flows @@ -2050,8 +2175,11 @@ paths: 
type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2074,16 +2202,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2120,6 +2249,8 @@ paths: description: internal error on getting agentlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve agentlogs list by flow id tags: - Agentlogs @@ -2134,8 +2265,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2158,16 +2292,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2204,6 +2339,8 @@ paths: description: internal error on getting assistantlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve assistantlogs list by flow id tags: - Assistantlogs @@ -2218,8 +2355,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2242,16 +2382,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2288,6 +2429,8 @@ paths: description: internal error on getting assistants schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve assistants list tags: - Assistants @@ -2331,6 +2474,8 @@ paths: description: internal error on creating assistant schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Create new assistant with custom functions tags: - Assistants @@ -2371,6 +2516,8 @@ paths: description: internal error on deleting assistant schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Delete assistant by id tags: - Assistants @@ -2412,6 +2559,8 @@ paths: description: internal error on getting flow assistant schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow assistant by id tags: - Assistants @@ -2461,6 +2610,8 @@ paths: description: internal error on patching assistant schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Patch assistant tags: - Assistants @@ -2475,8 +2626,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2499,16 +2653,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2545,6 +2700,8 @@ paths: description: internal error on getting containers schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve containers list by flow id tags: - Containers @@ -2587,6 +2744,8 @@ paths: description: internal error on getting container schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve container info by id and flow id tags: - Containers @@ -2623,6 +2782,8 @@ paths: description: internal error on getting flow graph schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow graph by id tags: - Flows @@ -2637,8 +2798,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2661,16 +2825,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2707,6 +2872,8 @@ paths: description: internal error on getting msglogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve msglogs list by flow id tags: - Msglogs @@ -2721,8 +2888,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2745,16 +2915,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2791,6 +2962,8 @@ paths: description: internal error on getting screenshots schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve screenshots list by flow id tags: - Screenshots @@ -2833,6 +3006,8 @@ paths: description: internal error on getting screenshot schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve screenshot info by id and flow id tags: - Screenshots @@ -2867,6 +3042,8 @@ paths: description: internal error on getting screenshot schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve screenshot file by id and flow id tags: - Screenshots @@ -2881,8 +3058,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2905,16 +3085,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -2951,6 +3132,8 @@ paths: description: internal error on getting searchlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve searchlogs list by flow id tags: - Searchlogs @@ -2965,8 +3148,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -2989,16 +3175,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3035,6 +3222,8 @@ paths: description: internal error on getting flow subtasks schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow subtasks list tags: - Subtasks @@ -3049,8 +3238,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. 
{"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3073,16 +3265,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3119,6 +3312,8 @@ paths: description: internal error on getting flow tasks schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow tasks list tags: - Tasks @@ -3161,6 +3356,8 @@ paths: description: internal error on getting flow task schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow task by id tags: - Tasks @@ -3203,6 +3400,8 @@ paths: description: internal error on getting flow task graph schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow task graph by id tags: - Tasks @@ -3223,8 +3422,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3247,16 +3449,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3293,6 +3496,8 @@ paths: description: internal error on getting flow subtasks schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow task subtasks list tags: - Subtasks @@ -3341,6 +3546,8 @@ paths: description: internal error on getting flow task subtask schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve flow task subtask by id tags: - Subtasks @@ -3355,8 +3562,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3379,16 +3589,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3425,6 +3636,8 @@ paths: description: internal error on getting termlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve termlogs list by flow id tags: - Termlogs @@ -3466,6 +3679,8 @@ paths: description: internal error on getting flow analytics schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve analytics for specific flow tags: - Flows @@ -3481,8 +3696,11 @@ paths: type: integer - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3505,16 +3723,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3551,6 +3770,8 @@ paths: description: internal error on getting vecstorelogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve vecstorelogs list by flow id tags: - Vecstorelogs @@ -3584,6 +3805,8 @@ paths: description: internal error on graphql request schema: $ref: '#/definitions/graphql.Response' + security: + - BearerAuth: [] summary: Perform graphql requests tags: - GraphQL @@ -3618,6 +3841,8 @@ paths: description: internal error on getting information about system and config schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve current user and system settings tags: - Public @@ -3626,8 +3851,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3650,16 +3878,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3696,6 +3925,8 @@ paths: description: internal error on getting msglogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve msglogs list tags: - Msglogs @@ -3704,8 +3935,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3728,16 +3962,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -3774,10 +4009,47 @@ paths: description: internal error on getting prompts schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve prompts list tags: - Prompts /prompts/{promptType}: + delete: + parameters: + - description: prompt type + in: path + name: promptType + required: true + type: string + produces: + - application/json + responses: + "200": + description: prompt deleted successful + schema: + $ref: '#/definitions/SuccessResponse' + "400": + description: invalid prompt request data + schema: + $ref: '#/definitions/ErrorResponse' + "403": + description: deleting prompt not permitted + schema: + $ref: '#/definitions/ErrorResponse' + "404": + description: prompt not found + schema: + $ref: '#/definitions/ErrorResponse' + "500": + description: internal error on deleting prompt + schema: + $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] + summary: Delete prompt by type + tags: + - Prompts get: parameters: - description: prompt type @@ -3813,6 +4085,8 @@ paths: description: internal error on getting prompt schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve prompt by type tags: - Prompts @@ -3843,6 +4117,15 @@ paths: data: $ref: '#/definitions/models.Prompt' type: object + "201": + description: prompt created successful + schema: + allOf: + - $ref: '#/definitions/SuccessResponse' + - properties: + data: + $ref: '#/definitions/models.Prompt' + type: object "400": description: invalid prompt request data schema: @@ -3859,6 +4142,8 @@ paths: description: internal error on updating prompt schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Update prompt tags: - 
Prompts @@ -3884,6 +4169,15 @@ paths: data: $ref: '#/definitions/models.Prompt' type: object + "201": + description: prompt created with default value successful + schema: + allOf: + - $ref: '#/definitions/SuccessResponse' + - properties: + data: + $ref: '#/definitions/models.Prompt' + type: object "400": description: invalid prompt request data schema: @@ -3900,6 +4194,8 @@ paths: description: internal error on resetting prompt schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Reset prompt by type to default value tags: - Prompts @@ -3921,6 +4217,8 @@ paths: description: getting providers not permitted schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve providers list tags: - Providers @@ -3929,8 +4227,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -3953,16 +4254,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4042,8 +4344,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -4066,16 +4371,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4112,6 +4418,8 @@ paths: description: internal error on getting screenshots schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve screenshots list tags: - Screenshots @@ -4120,8 +4428,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. 
{"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -4144,16 +4455,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4190,6 +4502,8 @@ paths: description: internal error on getting searchlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve searchlogs list tags: - Searchlogs @@ -4198,8 +4512,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -4222,16 +4539,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. 
{"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4268,20 +4586,46 @@ paths: description: internal error on getting termlogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve termlogs list tags: - Termlogs - /token: + /tokens: + get: + produces: + - application/json + responses: + "200": + description: tokens retrieved successful + schema: + allOf: + - $ref: '#/definitions/SuccessResponse' + - properties: + data: + $ref: '#/definitions/services.tokens' + type: object + "403": + description: listing tokens not permitted + schema: + $ref: '#/definitions/ErrorResponse' + "500": + description: internal error on listing tokens + schema: + $ref: '#/definitions/ErrorResponse' + summary: List API tokens + tags: + - Tokens post: consumes: - application/json parameters: - - description: Proto auth token request JSON data + - description: Token creation request in: body name: json required: true schema: - $ref: '#/definitions/models.ProtoAuthTokenRequest' + $ref: '#/definitions/models.CreateAPITokenRequest' produces: - application/json responses: @@ -4292,10 +4636,10 @@ paths: - $ref: '#/definitions/SuccessResponse' - properties: data: - $ref: '#/definitions/models.ProtoAuthToken' + $ref: '#/definitions/models.APITokenWithSecret' type: object "400": - description: invalid requested token info + description: invalid token request or default salt schema: $ref: '#/definitions/ErrorResponse' "403": @@ -4306,9 +4650,119 @@ paths: description: internal error on creating token schema: $ref: '#/definitions/ErrorResponse' - summary: Create new JWT token to use it into automation connections + summary: Create new API token for automation + tags: + - Tokens + /tokens/{tokenID}: + delete: + parameters: + - 
description: Token ID + in: path + name: tokenID + required: true + type: string + produces: + - application/json + responses: + "200": + description: token deleted successful + schema: + $ref: '#/definitions/SuccessResponse' + "403": + description: deleting token not permitted + schema: + $ref: '#/definitions/ErrorResponse' + "404": + description: token not found + schema: + $ref: '#/definitions/ErrorResponse' + "500": + description: internal error on deleting token + schema: + $ref: '#/definitions/ErrorResponse' + summary: Delete API token tags: - - Proto + - Tokens + get: + parameters: + - description: Token ID + in: path + name: tokenID + required: true + type: string + produces: + - application/json + responses: + "200": + description: token retrieved successful + schema: + allOf: + - $ref: '#/definitions/SuccessResponse' + - properties: + data: + $ref: '#/definitions/models.APIToken' + type: object + "403": + description: accessing token not permitted + schema: + $ref: '#/definitions/ErrorResponse' + "404": + description: token not found + schema: + $ref: '#/definitions/ErrorResponse' + "500": + description: internal error on getting token + schema: + $ref: '#/definitions/ErrorResponse' + summary: Get API token details + tags: + - Tokens + put: + consumes: + - application/json + parameters: + - description: Token ID + in: path + name: tokenID + required: true + type: string + - description: Token update request + in: body + name: json + required: true + schema: + $ref: '#/definitions/models.UpdateAPITokenRequest' + produces: + - application/json + responses: + "200": + description: token updated successful + schema: + allOf: + - $ref: '#/definitions/SuccessResponse' + - properties: + data: + $ref: '#/definitions/models.APIToken' + type: object + "400": + description: invalid update request + schema: + $ref: '#/definitions/ErrorResponse' + "403": + description: updating token not permitted + schema: + $ref: '#/definitions/ErrorResponse' + "404": + description: 
token not found + schema: + $ref: '#/definitions/ErrorResponse' + "500": + description: internal error on updating token + schema: + $ref: '#/definitions/ErrorResponse' + summary: Update API token + tags: + - Tokens /usage: get: description: Get comprehensive analytics for all user's flows including usage, @@ -4333,6 +4787,8 @@ paths: description: internal error on getting analytics schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve system-wide analytics tags: - Usage @@ -4373,6 +4829,8 @@ paths: description: internal error on getting analytics schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve analytics for specific time period tags: - Usage @@ -4447,8 +4905,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. {"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -4471,16 +4932,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4678,8 +5140,11 @@ paths: parameters: - collectionFormat: multi description: |- - Filtering result on server e.g. 
{"value":[...],"field":"..."} - field value should be integer or string or array type + Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + field is the unique identifier of the table column, different for each endpoint + value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + operator value should be one of <,<=,>=,>,=,!=,like,not like,in + default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' in: query items: type: string @@ -4702,16 +5167,17 @@ paths: maximum: 1000 minimum: -1 name: pageSize - required: true type: integer - - default: '{}' + - collectionFormat: multi description: |- Sorting result on server e.g. {"prop":"...","order":"..."} field order is "ascending" or "descending" value + order is required if prop is not empty in: query - name: sort - required: true - type: string + items: + type: string + name: sort[] + type: array - default: init description: Type of request enum: @@ -4748,7 +5214,15 @@ paths: description: internal error on getting vecstorelogs schema: $ref: '#/definitions/ErrorResponse' + security: + - BearerAuth: [] summary: Retrieve vecstorelogs list tags: - Vecstorelogs +securityDefinitions: + BearerAuth: + description: Type "Bearer" followed by a space and JWT token. 
+ in: header + name: Authorization + type: apiKey swagger: "2.0" diff --git a/backend/pkg/server/models/agentlogs.go b/backend/pkg/server/models/agentlogs.go index 155a346a..97c0c006 100644 --- a/backend/pkg/server/models/agentlogs.go +++ b/backend/pkg/server/models/agentlogs.go @@ -14,9 +14,9 @@ type Agentlog struct { Executor MsgchainType `json:"executor" validate:"valid,required" gorm:"type:MSGCHAIN_TYPE;NOT NULL"` Task string `json:"task" validate:"required" gorm:"type:TEXT;NOT NULL"` Result string `json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/api_tokens.go b/backend/pkg/server/models/api_tokens.go new file mode 100644 index 00000000..4fb97458 --- /dev/null +++ b/backend/pkg/server/models/api_tokens.go @@ -0,0 +1,133 @@ +package models + +import ( + "fmt" + "time" + + "github.com/golang-jwt/jwt/v5" + "github.com/jinzhu/gorm" +) + +// TokenStatus represents the status of an API token +type TokenStatus string + +const ( + TokenStatusActive TokenStatus = "active" + TokenStatusRevoked TokenStatus = "revoked" + TokenStatusExpired TokenStatus 
= "expired"
+)
+
+func (s TokenStatus) String() string {
+	return string(s)
+}
+
+// Valid is function to control input/output data
+func (s TokenStatus) Valid() error {
+	switch s {
+	case TokenStatusActive, TokenStatusRevoked, TokenStatusExpired:
+		return nil
+	default:
+		return fmt.Errorf("invalid TokenStatus: %s", s)
+	}
+}
+
+// Validate is function to use callback to control input/output data
+func (s TokenStatus) Validate(db *gorm.DB) {
+	if err := s.Valid(); err != nil {
+		db.AddError(err)
+	}
+}
+
+// APIToken is model to contain API token metadata
+// nolint:lll
+type APIToken struct {
+	ID        uint64      `form:"id" json:"id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL;PRIMARY_KEY;AUTO_INCREMENT"`
+	TokenID   string      `form:"token_id" json:"token_id" validate:"required,len=10" gorm:"type:TEXT;NOT NULL;UNIQUE_INDEX"`
+	UserID    uint64      `form:"user_id" json:"user_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"`
+	RoleID    uint64      `form:"role_id" json:"role_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"`
+	Name      *string     `form:"name,omitempty" json:"name,omitempty" validate:"omitempty,max=100" gorm:"type:TEXT"`
+	TTL       uint64      `form:"ttl" json:"ttl" validate:"required,min=60,max=94608000" gorm:"type:BIGINT;NOT NULL"`
+	Status    TokenStatus `form:"status" json:"status" validate:"valid,required" gorm:"type:TOKEN_STATUS;NOT NULL;default:'active'"`
+	CreatedAt time.Time   `form:"created_at" json:"created_at" validate:"required" gorm:"type:TIMESTAMPTZ;NOT NULL;default:CURRENT_TIMESTAMP"`
+	UpdatedAt time.Time   `form:"updated_at" json:"updated_at" validate:"required" gorm:"type:TIMESTAMPTZ;NOT NULL;default:CURRENT_TIMESTAMP"`
+	DeletedAt *time.Time  `form:"deleted_at,omitempty" json:"deleted_at,omitempty" validate:"omitempty" sql:"index" gorm:"type:TIMESTAMPTZ"`
+}
+
+// TableName returns the table name string to guarantee use of the correct table
+func (at *APIToken) TableName() string {
+	return "api_tokens"
+}
+
+// Valid is function to control input/output data
+func
(at APIToken) Valid() error { + if err := at.Status.Valid(); err != nil { + return err + } + return validate.Struct(at) +} + +// Validate is function to use callback to control input/output data +func (at APIToken) Validate(db *gorm.DB) { + if err := at.Valid(); err != nil { + db.AddError(err) + } +} + +// APITokenWithSecret is model to contain API token with the JWT token string (returned only on creation) +// nolint:lll +type APITokenWithSecret struct { + APIToken `form:"" json:""` + Token string `form:"token" json:"token" validate:"required,jwt" gorm:"-"` +} + +// Valid is function to control input/output data +func (ats APITokenWithSecret) Valid() error { + if err := ats.APIToken.Valid(); err != nil { + return err + } + return validate.Struct(ats) +} + +// CreateAPITokenRequest is model to contain request data for creating an API token +// nolint:lll +type CreateAPITokenRequest struct { + Name *string `form:"name,omitempty" json:"name,omitempty" validate:"omitempty,max=100"` + TTL uint64 `form:"ttl" json:"ttl" validate:"required,min=60,max=94608000"` // from 1 minute to 3 years +} + +// Valid is function to control input/output data +func (catr CreateAPITokenRequest) Valid() error { + return validate.Struct(catr) +} + +// UpdateAPITokenRequest is model to contain request data for updating an API token +// nolint:lll +type UpdateAPITokenRequest struct { + Name *string `form:"name,omitempty" json:"name,omitempty" validate:"omitempty,max=100"` + Status TokenStatus `form:"status,omitempty" json:"status,omitempty" validate:"omitempty,valid"` +} + +// Valid is function to control input/output data +func (uatr UpdateAPITokenRequest) Valid() error { + if uatr.Status != "" { + if err := uatr.Status.Valid(); err != nil { + return err + } + } + return validate.Struct(uatr) +} + +// APITokenClaims is model to contain JWT claims for API tokens +// nolint:lll +type APITokenClaims struct { + TokenID string `json:"tid" validate:"required,len=10"` + RID uint64 `json:"rid" 
validate:"min=0,max=10000"` + UID uint64 `json:"uid" validate:"min=0,max=10000"` + UHASH string `json:"uhash" validate:"required"` + jwt.RegisteredClaims +} + +// Valid is function to control input/output data +func (atc APITokenClaims) Valid() error { + return validate.Struct(atc) +} diff --git a/backend/pkg/server/models/assistantlogs.go b/backend/pkg/server/models/assistantlogs.go index e8d64637..58bc5b15 100644 --- a/backend/pkg/server/models/assistantlogs.go +++ b/backend/pkg/server/models/assistantlogs.go @@ -11,12 +11,12 @@ import ( type Assistantlog struct { ID uint64 `form:"id" json:"id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL;PRIMARY_KEY;AUTO_INCREMENT"` Type MsglogType `form:"type" json:"type" validate:"valid,required" gorm:"type:MSGLOG_TYPE;NOT NULL"` - Message string `form:"message" json:"message" validate:"required" gorm:"type:TEXT;NOT NULL"` + Message string `form:"message" json:"message" validate:"omitempty" gorm:"type:TEXT;NOT NULL"` Thinking string `form:"thinking" json:"thinking" validate:"omitempty" gorm:"type:TEXT;NULL"` Result string `form:"result" json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` ResultFormat MsglogResultFormat `form:"result_format" json:"result_format" validate:"valid,required" gorm:"type:MSGLOG_RESULT_FORMAT;NOT NULL;default:plain"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` - AssistantID uint64 `form:"assistant_id" json:"assistant_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` + AssistantID uint64 `form:"assistant_id" json:"assistant_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/assistants.go 
b/backend/pkg/server/models/assistants.go index 72925dd9..5b0f0411 100644 --- a/backend/pkg/server/models/assistants.go +++ b/backend/pkg/server/models/assistants.go @@ -54,7 +54,7 @@ type Assistant struct { ModelProviderType ProviderType `form:"model_provider_type" json:"model_provider_type" validate:"valid,required" gorm:"type:PROVIDER_TYPE;NOT NULL"` Language string `form:"language" json:"language" validate:"max=70,required" gorm:"type:TEXT;NOT NULL"` Functions *tools.Functions `form:"functions,omitempty" json:"functions,omitempty" validate:"omitempty,valid" gorm:"type:JSON;NOT NULL;default:'{}'"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` UpdatedAt time.Time `form:"updated_at,omitempty" json:"updated_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` DeletedAt *time.Time `form:"deleted_at,omitempty" json:"deleted_at,omitempty" validate:"omitempty" sql:"index" gorm:"type:TIMESTAMPTZ"` @@ -94,7 +94,7 @@ func (ca CreateAssistant) Valid() error { // PatchAssistant is model to contain assistant patching paylaod // nolint:lll type PatchAssistant struct { - Action string `form:"action" json:"action" validate:"required,oneof=stop,input" enums:"stop,input" default:"stop"` + Action string `form:"action" json:"action" validate:"required,oneof=stop input" enums:"stop,input" default:"stop"` Input *string `form:"input,omitempty" json:"input,omitempty" validate:"required_if=Action input" example:"user input for waiting assistant"` UseAgents bool `form:"use_agents" json:"use_agents" validate:"omitempty" example:"true"` } diff --git a/backend/pkg/server/models/containers.go b/backend/pkg/server/models/containers.go index 
b79b56c4..6868373d 100644 --- a/backend/pkg/server/models/containers.go +++ b/backend/pkg/server/models/containers.go @@ -80,7 +80,7 @@ type Container struct { Status ContainerStatus `form:"status" json:"status" validate:"valid,required" gorm:"type:CONTAINER_STATUS;NOT NULL;default:'starting'"` LocalID string `form:"local_id" json:"local_id" validate:"required" gorm:"type:TEXT;NOT NULL"` LocalDir string `form:"local_dir" json:"local_dir" validate:"required" gorm:"type:TEXT;NOT NULL"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` UpdatedAt time.Time `form:"updated_at,omitempty" json:"updated_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/flows.go b/backend/pkg/server/models/flows.go index 477c364b..668ca998 100644 --- a/backend/pkg/server/models/flows.go +++ b/backend/pkg/server/models/flows.go @@ -55,7 +55,7 @@ type Flow struct { ModelProviderType ProviderType `form:"model_provider_type" json:"model_provider_type" validate:"valid,required" gorm:"type:PROVIDER_TYPE;NOT NULL"` Language string `form:"language" json:"language" validate:"max=70,required" gorm:"type:TEXT;NOT NULL"` Functions *tools.Functions `form:"functions,omitempty" json:"functions,omitempty" validate:"omitempty,valid" gorm:"type:JSON;NOT NULL;default:'{}'"` - UserID uint64 `form:"user_id" json:"user_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + UserID uint64 `form:"user_id" json:"user_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` 
UpdatedAt time.Time `form:"updated_at,omitempty" json:"updated_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` DeletedAt *time.Time `form:"deleted_at,omitempty" json:"deleted_at,omitempty" validate:"omitempty" sql:"index" gorm:"type:TIMESTAMPTZ"` @@ -94,7 +94,7 @@ func (cf CreateFlow) Valid() error { // PatchFlow is model to contain flow patching paylaod // nolint:lll type PatchFlow struct { - Action string `form:"action" json:"action" validate:"required,oneof=stop,finish,input" enums:"stop,finish,input" default:"stop"` + Action string `form:"action" json:"action" validate:"required,oneof=stop finish input" enums:"stop,finish,input" default:"stop"` Input *string `form:"input,omitempty" json:"input,omitempty" validate:"required_if=Action input" example:"user input for waiting flow"` } diff --git a/backend/pkg/server/models/msglogs.go b/backend/pkg/server/models/msglogs.go index e73596c9..ef85902a 100644 --- a/backend/pkg/server/models/msglogs.go +++ b/backend/pkg/server/models/msglogs.go @@ -87,9 +87,9 @@ type Msglog struct { Thinking string `form:"thinking" json:"thinking" validate:"omitempty" gorm:"type:TEXT;NULL"` Result string `form:"result" json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` ResultFormat MsglogResultFormat `form:"result_format" json:"result_format" validate:"valid,required" gorm:"type:MSGLOG_RESULT_FORMAT;NOT NULL;default:plain"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" 
gorm:"type:BIGINT;NOT NULL"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/proto.go b/backend/pkg/server/models/proto.go deleted file mode 100644 index be570e2a..00000000 --- a/backend/pkg/server/models/proto.go +++ /dev/null @@ -1,48 +0,0 @@ -package models - -import ( - "time" - - "github.com/golang-jwt/jwt/v5" -) - -// ProtoAuthToken is model to contain information to authorize vxproto connections -// nolint:lll -type ProtoAuthToken struct { - Token string `form:"token" json:"token" validate:"jwt,required"` - TTL uint64 `form:"ttl" json:"ttl" validate:"min=1,max=94608000,required"` - CreatedDate time.Time `form:"created_date,omitempty" json:"created_date,omitempty" validate:"omitempty"` -} - -// Valid is function to control input/output data -func (pat ProtoAuthToken) Valid() error { - return validate.Struct(pat) -} - -// ProtoAuthToken is model to contain information to authorize vxproto connections -// nolint:lll -type ProtoAuthTokenRequest struct { - TTL uint64 `form:"ttl" json:"ttl" validate:"min=1,max=94608000,required" default:"31536000"` - Type string `form:"type" json:"type" validate:"oneof=automation,required" default:"automation"` -} - -// Valid is function to control input/output data -func (patr ProtoAuthTokenRequest) Valid() error { - return validate.Struct(patr) -} - -// ProtoAuthTokenClaims is model to contain token claims to authorize vxproto connections -// nolint:lll -type ProtoAuthTokenClaims struct { - RID uint64 `form:"rid" json:"rid" validate:"min=0,max=10000"` - UID uint64 `form:"uid" json:"uid" validate:"min=0,max=10000"` - TID string `form:"tid" json:"tid" validate:"required,oneof=local oauth"` - UHASH string `form:"uhash" json:"uhash"` - CPT string `form:"cpt" 
json:"cpt" validate:"required,oneof=automation"` - jwt.RegisteredClaims -} - -// Valid is function to control input/output data -func (patc ProtoAuthTokenClaims) Valid() error { - return validate.Struct(patc) -} diff --git a/backend/pkg/server/models/screenshots.go b/backend/pkg/server/models/screenshots.go index dd3881f1..10357018 100644 --- a/backend/pkg/server/models/screenshots.go +++ b/backend/pkg/server/models/screenshots.go @@ -13,8 +13,8 @@ type Screenshot struct { Name string `form:"name" json:"name" validate:"required" gorm:"type:TEXT;NOT NULL"` URL string `form:"url" json:"url" validate:"required" gorm:"type:TEXT;NOT NULL"` FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitempty,min=0,numeric" gorm:"type:BIGINT"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitempty,min=0,numeric" gorm:"type:BIGINT"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/searchlogs.go b/backend/pkg/server/models/searchlogs.go index cfd8d0e6..063e1af3 100644 --- a/backend/pkg/server/models/searchlogs.go +++ b/backend/pkg/server/models/searchlogs.go @@ -53,9 +53,9 @@ type Searchlog struct { Engine SearchEngineType `json:"engine" validate:"valid,required" gorm:"type:SEARCHENGINE_TYPE;NOT NULL"` Query string `json:"query" validate:"required" gorm:"type:TEXT;NOT NULL"` Result string `json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" 
gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/subtasks.go b/backend/pkg/server/models/subtasks.go index a7993464..5619dced 100644 --- a/backend/pkg/server/models/subtasks.go +++ b/backend/pkg/server/models/subtasks.go @@ -51,7 +51,7 @@ type Subtask struct { Description string `form:"description" json:"description" validate:"required" gorm:"type:TEXT;NOT NULL"` Context string `form:"context" json:"context" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` Result string `form:"result" json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` - TaskID uint64 `form:"task_id" json:"task_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + TaskID uint64 `form:"task_id" json:"task_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` UpdatedAt time.Time `form:"updated_at,omitempty" json:"updated_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/tasks.go b/backend/pkg/server/models/tasks.go index da1a2eb6..0297ba2c 100644 --- 
a/backend/pkg/server/models/tasks.go +++ b/backend/pkg/server/models/tasks.go @@ -50,7 +50,7 @@ type Task struct { Title string `form:"title" json:"title" validate:"required" gorm:"type:TEXT;NOT NULL;default:'untitled'"` Input string `form:"input" json:"input" validate:"required" gorm:"type:TEXT;NOT NULL"` Result string `form:"result" json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL;default:''"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` UpdatedAt time.Time `form:"updated_at,omitempty" json:"updated_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/termlogs.go b/backend/pkg/server/models/termlogs.go index 052cf22f..7ea0d289 100644 --- a/backend/pkg/server/models/termlogs.go +++ b/backend/pkg/server/models/termlogs.go @@ -44,8 +44,8 @@ type Termlog struct { Text string `form:"text" json:"text" validate:"required" gorm:"type:TEXT;NOT NULL"` ContainerID uint64 `form:"container_id" json:"container_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitempty,min=0,numeric" gorm:"type:BIGINT"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitempty,min=0,numeric" gorm:"type:BIGINT"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT"` CreatedAt time.Time 
`form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/models/users.go b/backend/pkg/server/models/users.go index a8a9aacb..a9ace4e8 100644 --- a/backend/pkg/server/models/users.go +++ b/backend/pkg/server/models/users.go @@ -43,6 +43,7 @@ type UserType string const ( UserTypeLocal UserType = "local" UserTypeOAuth UserType = "oauth" + UserTypeAPI UserType = "api" ) func (s UserType) String() string { @@ -52,7 +53,7 @@ func (s UserType) String() string { // Valid is function to control input/output data func (s UserType) Valid() error { switch s { - case UserTypeLocal, UserTypeOAuth: + case UserTypeLocal, UserTypeOAuth, UserTypeAPI: return nil default: return fmt.Errorf("invalid UserType: %s", s) @@ -75,10 +76,10 @@ type User struct { Mail string `form:"mail" json:"mail" validate:"max=50,vmail,required" gorm:"type:TEXT;NOT NULL;UNIQUE_INDEX"` Name string `form:"name" json:"name" validate:"max=70,omitempty" gorm:"type:TEXT;NOT NULL;default:''"` Status UserStatus `form:"status" json:"status" validate:"valid,required" gorm:"type:USER_STATUS;NOT NULL;default:'created'"` - RoleID uint64 `form:"role_id" json:"role_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL;default:2"` + RoleID uint64 `form:"role_id" json:"role_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL;default:2"` PasswordChangeRequired bool `form:"password_change_required" json:"password_change_required" gorm:"type:BOOL;NOT NULL;default:false"` Provider *string `form:"provider,omitempty" json:"provider,omitempty" validate:"omitempty" gorm:"type:TEXT"` - CreatedAt time.Time `form:"created_at" json:"created_at" validate:"required" gorm:"type:TIMESTAMPTZ;NOT NULL;default:CURRENT_TIMESTAMP"` + CreatedAt time.Time `form:"created_at" json:"created_at" validate:"omitempty" gorm:"type:TIMESTAMPTZ;NOT NULL;default:CURRENT_TIMESTAMP"` } // TableName returns the table name string to guaranty 
use correct table diff --git a/backend/pkg/server/models/vecstorelogs.go b/backend/pkg/server/models/vecstorelogs.go index 0d66903b..c91a3917 100644 --- a/backend/pkg/server/models/vecstorelogs.go +++ b/backend/pkg/server/models/vecstorelogs.go @@ -44,10 +44,10 @@ type Vecstorelog struct { Filter string `json:"filter" validate:"required" gorm:"type:JSON;NOT NULL"` Query string `json:"query" validate:"required" gorm:"type:TEXT;NOT NULL"` Action VecstoreActionType `json:"action" validate:"valid,required" gorm:"type:VECSTORE_ACTION_TYPE;NOT NULL"` - Result string `json:"result" validate:"required" gorm:"type:TEXT;NOT NULL"` - FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric" gorm:"type:BIGINT;NOT NULL"` - TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` - SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"numeric,omitempty" gorm:"type:BIGINT;NOT NULL"` + Result string `json:"result" validate:"omitempty" gorm:"type:TEXT;NOT NULL"` + FlowID uint64 `form:"flow_id" json:"flow_id" validate:"min=0,numeric,required" gorm:"type:BIGINT;NOT NULL"` + TaskID *uint64 `form:"task_id,omitempty" json:"task_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` + SubtaskID *uint64 `form:"subtask_id,omitempty" json:"subtask_id,omitempty" validate:"omitnil,min=0" gorm:"type:BIGINT;NOT NULL"` CreatedAt time.Time `form:"created_at,omitempty" json:"created_at,omitempty" validate:"omitempty" gorm:"type:TIMESTAMPTZ;default:CURRENT_TIMESTAMP"` } diff --git a/backend/pkg/server/rdb/table.go b/backend/pkg/server/rdb/table.go index 817af7ae..430b826d 100644 --- a/backend/pkg/server/rdb/table.go +++ b/backend/pkg/server/rdb/table.go @@ -4,7 +4,7 @@ import ( "crypto/md5" //nolint:gosec "encoding/hex" "errors" - "regexp" + "slices" "strconv" "strings" "time" @@ -18,15 +18,15 @@ import ( // //nolint:lll type TableFilter struct { - Value interface{} 
`form:"value" json:"value,omitempty" binding:"required" swaggertype:"object"` - Field string `form:"field" json:"field,omitempty" binding:"required"` - Operator string `form:"operator" json:"operator,omitempty" binding:"oneof=<,<=,>=,>,=,!=,like,not like,in,omitempty" default:"like" enums:"<,<=,>=,>,=,!=,like,not like,in"` + Value any `form:"value" json:"value,omitempty" binding:"required" swaggertype:"object"` + Field string `form:"field" json:"field,omitempty" binding:"required"` + Operator string `form:"operator" json:"operator,omitempty" binding:"oneof='<' '<=' '>=' '>' '=' '!=' 'like' 'not like' 'in',omitempty" default:"like" enums:"<,<=,>=,>,=,!=,like,not like,in"` } // TableSort is auxiliary struct to contain method of sorting type TableSort struct { Prop string `form:"prop" json:"prop,omitempty" binding:"omitempty"` - Order string `form:"order" json:"order,omitempty" binding:"omitempty"` + Order string `form:"order" json:"order,omitempty" binding:"oneof=ascending descending,required_with=Prop,omitempty" enums:"ascending,descending"` } // TableQuery is main struct to contain input params @@ -36,35 +36,39 @@ type TableQuery struct { // Number of page (since 1) Page int `form:"page" json:"page" binding:"min=1,required" default:"1" minimum:"1"` // Amount items per page (min -1, max 1000, -1 means unlimited) - Size int `form:"pageSize" json:"pageSize" binding:"min=-1,max=1000,required" default:"5" minimum:"-1" maximum:"1000"` + Size int `form:"pageSize" json:"pageSize" binding:"min=-1,max=1000" default:"5" minimum:"-1" maximum:"1000"` // Type of request Type string `form:"type" json:"type" binding:"oneof=sort filter init page size,required" default:"init" enums:"sort,filter,init,page,size"` // Sorting result on server e.g. {"prop":"...","order":"..."} // field order is "ascending" or "descending" value - Sort TableSort `form:"sort" json:"sort" binding:"required" swaggertype:"string" default:"{}"` - // Filtering result on server e.g. 
{"value":[...],"field":"..."} - // field value should be integer or string or array type - Filters []TableFilter `form:"filters[]" json:"filters[],omitempty" binding:"omitempty" swaggertype:"array,string"` + // order is required if prop is not empty + Sort []TableSort `form:"sort[]" json:"sort[],omitempty" binding:"omitempty,dive" swaggertype:"array,string"` + // Filtering result on server e.g. {"value":[...],"field":"...","operator":"..."} + // field is the unique identifier of the table column, different for each endpoint + // value should be integer or string or array type, "value":123 or "value":"string" or "value":[123,456] + // operator value should be one of <,<=,>=,>,=,!=,like,not like,in + // default operator value is 'like' or '=' if field is 'id' or '*_id' or '*_at' + Filters []TableFilter `form:"filters[]" json:"filters[],omitempty" binding:"omitempty,dive" swaggertype:"array,string"` // Field to group results by Group string `form:"group" json:"group,omitempty" binding:"omitempty" swaggertype:"string"` // non input arguments - table string `form:"-" json:"-"` - groupField string `form:"-" json:"-"` - sqlMappers map[string]interface{} `form:"-" json:"-"` - sqlFind func(out interface{}) func(*gorm.DB) *gorm.DB `form:"-" json:"-"` - sqlFilters []func(*gorm.DB) *gorm.DB `form:"-" json:"-"` - sqlOrders []func(*gorm.DB) *gorm.DB `form:"-" json:"-"` + table string `form:"-" json:"-"` + groupField string `form:"-" json:"-"` + sqlMappers map[string]any `form:"-" json:"-"` + sqlFind func(out any) func(*gorm.DB) *gorm.DB `form:"-" json:"-"` + sqlFilters []func(*gorm.DB) *gorm.DB `form:"-" json:"-"` + sqlOrders []func(*gorm.DB) *gorm.DB `form:"-" json:"-"` } // Init is function to set table name and sql mapping to data columns -func (q *TableQuery) Init(table string, sqlMappers map[string]interface{}) error { +func (q *TableQuery) Init(table string, sqlMappers map[string]any) error { q.table = table - q.sqlFind = func(out interface{}) func(db *gorm.DB) *gorm.DB { 
+ q.sqlFind = func(out any) func(db *gorm.DB) *gorm.DB { return func(db *gorm.DB) *gorm.DB { return db.Find(out) } } - q.sqlMappers = make(map[string]interface{}) + q.sqlMappers = make(map[string]any) q.sqlOrders = append(q.sqlOrders, func(db *gorm.DB) *gorm.DB { return db.Order("id DESC") }) @@ -72,12 +76,12 @@ func (q *TableQuery) Init(table string, sqlMappers map[string]interface{}) error switch t := v.(type) { case string: t = q.DoConditionFormat(t) - if strings.HasSuffix(t, "id") || strings.HasSuffix(t, "date") || k == "ngroups" || strings.HasPrefix(t, "count(") { + if isNumbericField(k) { q.sqlMappers[k] = t } else { - q.sqlMappers[k] = "LOWER(" + t + ")" + q.sqlMappers[k] = "LOWER(" + t + "::text)" } - case func(q *TableQuery, db *gorm.DB, value interface{}) *gorm.DB: + case func(q *TableQuery, db *gorm.DB, value any) *gorm.DB: q.sqlMappers[k] = t default: continue @@ -108,7 +112,7 @@ func (q *TableQuery) SetFilters(sqlFilters []func(*gorm.DB) *gorm.DB) { } // SetFind is function to set custom find function to build target SQL query -func (q *TableQuery) SetFind(find func(out interface{}) func(*gorm.DB) *gorm.DB) { +func (q *TableQuery) SetFind(find func(out any) func(*gorm.DB) *gorm.DB) { q.sqlFind = find } @@ -118,12 +122,12 @@ func (q *TableQuery) SetOrders(sqlOrders []func(*gorm.DB) *gorm.DB) { } // Mappers is getter for private field (SQL find funcction to use it in custom query) -func (q *TableQuery) Find(out interface{}) func(*gorm.DB) *gorm.DB { +func (q *TableQuery) Find(out any) func(*gorm.DB) *gorm.DB { return q.sqlFind(out) } // Mappers is getter for private field (SQL mappers fields to table ones) -func (q *TableQuery) Mappers() map[string]interface{} { +func (q *TableQuery) Mappers() map[string]any { return q.sqlMappers } @@ -134,22 +138,34 @@ func (q *TableQuery) Table() string { // Ordering is function to get order of data rows according with input params func (q *TableQuery) Ordering() func(db *gorm.DB) *gorm.DB { - field := "" - arrow := "" 
- switch q.Sort.Order { - case "ascending": - arrow = "ASC" - case "descending": - arrow = "DESC" - } - if v, ok := q.sqlMappers[q.Sort.Prop]; ok { - if s, ok := v.(string); ok { - field = s + var sortItems []TableSort + + for _, sort := range q.Sort { + var t TableSort + + switch sort.Order { + case "ascending": + t.Order = "ASC" + case "descending": + t.Order = "DESC" + } + + if v, ok := q.sqlMappers[sort.Prop]; ok { + if s, ok := v.(string); ok { + t.Prop = s + } + } + + if t.Prop != "" && t.Order != "" { + sortItems = append(sortItems, t) } } + return func(db *gorm.DB) *gorm.DB { - if field != "" && arrow != "" { - db = db.Order(field + " " + arrow) + for _, sort := range sortItems { + // sort.Prop comes from server-side whitelist (q.sqlMappers) + // sort.Order is validated to be only "ASC" or "DESC" + db = db.Order(sort.Prop + " " + sort.Order) } for _, order := range q.sqlOrders { db = order(db) @@ -161,9 +177,9 @@ func (q *TableQuery) Ordering() func(db *gorm.DB) *gorm.DB { // Paginate is function to navigate between pages according with input params func (q *TableQuery) Paginate() func(db *gorm.DB) *gorm.DB { return func(db *gorm.DB) *gorm.DB { - if q.Page <= 0 && q.Size > 0 { + if q.Page <= 0 && q.Size >= 0 { return db.Limit(q.Size) - } else if q.Page > 0 && q.Size > 0 { + } else if q.Page > 0 && q.Size >= 0 { offset := (q.Page - 1) * q.Size return db.Offset(offset).Limit(q.Size) } @@ -172,7 +188,7 @@ func (q *TableQuery) Paginate() func(db *gorm.DB) *gorm.DB { } // GroupBy is function to group results by some field -func (q *TableQuery) GroupBy(total *uint64, result interface{}) func(db *gorm.DB) *gorm.DB { +func (q *TableQuery) GroupBy(total *uint64, result any) func(db *gorm.DB) *gorm.DB { return func(db *gorm.DB) *gorm.DB { return db.Group(q.groupField).Where(q.groupField+" IS NOT NULL").Count(total).Pluck(q.groupField, result) } @@ -182,11 +198,11 @@ func (q *TableQuery) GroupBy(total *uint64, result interface{}) func(db *gorm.DB func (q *TableQuery) 
DataFilter() func(db *gorm.DB) *gorm.DB { type item struct { op string - v interface{} + v any } fl := make(map[string][]item) - setFilter := func(field, operator string, value interface{}) { + setFilter := func(field, operator string, value any) { if operator == "" { operator = "like" // nolint:goconst } @@ -197,16 +213,20 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { switch tvalue := value.(type) { case string, float64, bool: fl[field] = append(fvalue, item{operator, tvalue}) - case []interface{}: + case []any: fl[field] = append(fvalue, item{operator, tvalue}) } } patchOperator := func(f *TableFilter) { switch f.Operator { - case "<", "<=", ">=", ">", "=", "!=", "like", "not like", "in": + case "<", "<=", ">=", ">", "=", "!=", "in": + case "not like": + if isNumbericField(f.Field) { + f.Operator = "!=" + } default: f.Operator = "like" - if strings.HasSuffix(f.Field, "id") || f.Field == "ngroups" { + if isNumbericField(f.Field) { f.Operator = "=" } } @@ -219,7 +239,7 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { if _, ok := q.sqlMappers[f.Field]; ok { if v, ok := f.Value.(string); ok && v != "" { vs := v - if StringInSlice(f.Operator, []string{"like", "not like"}) { + if slices.Contains([]string{"like", "not like"}, f.Operator) { vs = "%" + strings.ToLower(vs) + "%" } setFilter(f.Field, f.Operator, vs) @@ -230,8 +250,8 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { if v, ok := f.Value.(bool); ok { setFilter(f.Field, f.Operator, v) } - if v, ok := f.Value.([]interface{}); ok && len(v) != 0 { - var vi []interface{} + if v, ok := f.Value.([]any); ok && len(v) != 0 { + var vi []any for _, ti := range v { if ts, ok := ti.(string); ok { vi = append(vi, strings.ToLower(ts)) @@ -251,11 +271,11 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { } return func(db *gorm.DB) *gorm.DB { - doFilter := func(db *gorm.DB, k, s string, v interface{}) *gorm.DB { + doFilter := func(db *gorm.DB, k, s string, v 
any) *gorm.DB { switch t := q.sqlMappers[k].(type) { case string: return db.Where(t+s, v) - case func(q *TableQuery, db *gorm.DB, value interface{}) *gorm.DB: + case func(q *TableQuery, db *gorm.DB, value any) *gorm.DB: return t(q, db, v) default: return db @@ -263,7 +283,7 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { } for k, f := range fl { for _, it := range f { - if _, ok := it.v.([]interface{}); ok { + if _, ok := it.v.([]any); ok { db = doFilter(db, k, " "+it.op+" (?)", it.v) } else { db = doFilter(db, k, " "+it.op+" ?", it.v) @@ -278,7 +298,7 @@ func (q *TableQuery) DataFilter() func(db *gorm.DB) *gorm.DB { } // Query is function to retrieve table data according with input params -func (q *TableQuery) Query(db *gorm.DB, result interface{}, +func (q *TableQuery) Query(db *gorm.DB, result any, funcs ...func(*gorm.DB) *gorm.DB) (uint64, error) { var total uint64 err := ApplyToChainDB( @@ -291,8 +311,12 @@ func (q *TableQuery) Query(db *gorm.DB, result interface{}, } // QueryGrouped is function to retrieve grouped data according with input params -func (q *TableQuery) QueryGrouped(db *gorm.DB, result interface{}, +func (q *TableQuery) QueryGrouped(db *gorm.DB, result any, funcs ...func(*gorm.DB) *gorm.DB) (uint64, error) { + if _, ok := q.sqlMappers[q.Group]; !ok { + return 0, errors.New("group field not found") + } + var total uint64 err := ApplyToChainDB( ApplyToChainDB(db.Table(q.Table()), funcs...).Scopes(q.DataFilter()), @@ -315,92 +339,6 @@ func EncryptPassword(password string) (hpass []byte, err error) { return } -// TagsMapper is function to make tags field mapper for common tables -func TagsMapper(q *TableQuery, db *gorm.DB, value interface{}) *gorm.DB { - return JsonArrayMapper(q, db, value, "{{table}}.info", "$.tags[*]") -} - -// JsonDictMapper is function to make json dict field mapper for common tables -func JsonDictMapper(q *TableQuery, db *gorm.DB, value interface{}, sre, scond string) *gorm.DB { - vs := make([]interface{}, 0) 
- re := regexp.MustCompile(sre) - cond := q.DoConditionFormat(scond) - buildArray := func() *gorm.DB { - var conds []string - for i := 0; i < len(vs)/2; i++ { - conds = append(conds, cond) - } - return db.Where(strings.Join(conds, " OR "), vs...) - } - parseOSArch := func(v string) []interface{} { - list := re.FindStringSubmatch(v) - if len(list) == 3 { - return []interface{}{list[1], list[2]} - } - return []interface{}{} - } - - switch v := value.(type) { - case string: - vs = append(vs, parseOSArch(v)...) - return buildArray() - case []interface{}: - for _, t := range v { - if ts, ok := t.(string); ok { - vs = append(vs, parseOSArch(ts)...) - } - } - return buildArray() - default: - return db - } -} - -// JsonArrayMapper is function to make json array field mapper for common tables -func JsonArrayMapper(q *TableQuery, db *gorm.DB, value interface{}, column, path string) *gorm.DB { - cond := q.DoConditionFormat( - "(jsonb_path_query_array(lower(" + column + "::text)::jsonb, '" + path + "')::jsonb) $$$ lower(?)", - ) - buildArray := func(vs []interface{}) *gorm.DB { - var conds []string - for i := 0; i < len(vs); i++ { - conds = append(conds, cond) - } - return db.Where(strings.Join(conds, " OR "), vs...) - } - - switch v := value.(type) { - case string, float64: - return db.Where(cond, v) - case []interface{}: - return buildArray(v) - default: - return db - } -} - -func JsonArrayKeysMapper(q *TableQuery, db *gorm.DB, value interface{}, column, path string) *gorm.DB { - cond := q.DoConditionFormat( - "(jsonb_path_query_first(lower(" + column + "::text)::jsonb, '" + path + "')::jsonb) $$$ lower(?)", - ) - buildArray := func(vs []interface{}) *gorm.DB { - var conds []string - for i := 0; i < len(vs); i++ { - conds = append(conds, cond) - } - return db.Where(strings.Join(conds, " OR "), vs...) 
- } - - switch v := value.(type) { - case string, float64: - return db.Where(cond, v) - case []interface{}: - return buildArray(v) - default: - return db - } -} - // MakeMD5Hash is function to generate common hash by value func MakeMD5Hash(value, salt string) string { currentTime := time.Now().Format("2006-01-02 15:04:05.000000000") @@ -427,11 +365,6 @@ func MakeUuidStrFromHash(hash string) (string, error) { return userIdUuid.String(), nil } -func StringInSlice(a string, list []string) bool { - for _, b := range list { - if b == a { - return true - } - } - return false +func isNumbericField(field string) bool { + return strings.HasSuffix(field, "_id") || strings.HasSuffix(field, "_at") || field == "id" } diff --git a/backend/pkg/server/response/errors.go b/backend/pkg/server/response/errors.go index bd638735..b3161a1f 100644 --- a/backend/pkg/server/response/errors.go +++ b/backend/pkg/server/response/errors.go @@ -36,12 +36,6 @@ var ErrInfoUserNotFound = NewHttpError(404, "Info.UserNotFound", "user not found var ErrInfoInvalidUserData = NewHttpError(500, "Info.InvalidUserData", "invalid user data") var ErrInfoInvalidServiceData = NewHttpError(500, "Info.InvalidServiceData", "invalid service data") -// proto - -var ErrProtoInvalidRequest = NewHttpError(400, "Proto.InvalidRequest", "failed to validate auth token request") -var ErrProtoCreateTokenFail = NewHttpError(400, "Proto.CreateTokenFail", "failed to make auth token") -var ErrProtoInvalidToken = NewHttpError(400, "Proto.InvalidToken", "failed to valid auth token") - // users var ErrUsersNotFound = NewHttpError(404, "Users.NotFound", "user not found") @@ -132,3 +126,11 @@ var ErrSubtasksInvalidData = NewHttpError(500, "Subtasks.InvalidData", "invalid var ErrAssistantsInvalidRequest = NewHttpError(400, "Assistants.InvalidRequest", "invalid assistant request data") var ErrAssistantsNotFound = NewHttpError(404, "Assistants.NotFound", "assistant not found") var ErrAssistantsInvalidData = NewHttpError(500, 
"Assistants.InvalidData", "invalid assistant data") + +// tokens + +var ErrTokenCreationDisabled = NewHttpError(400, "Token.CreationDisabled", "token creation is disabled with default configuration") +var ErrTokenNotFound = NewHttpError(404, "Token.NotFound", "token not found") +var ErrTokenUnauthorized = NewHttpError(403, "Token.Unauthorized", "not authorized to manage this token") +var ErrTokenInvalidRequest = NewHttpError(400, "Token.InvalidRequest", "invalid token request data") +var ErrTokenInvalidData = NewHttpError(500, "Token.InvalidData", "invalid token data") diff --git a/backend/pkg/server/router.go b/backend/pkg/server/router.go index 28a2f565..9d674714 100644 --- a/backend/pkg/server/router.go +++ b/backend/pkg/server/router.go @@ -62,6 +62,11 @@ var frontendRoutes = []string{ // @query.collection.format multi +// @securityDefinitions.apikey BearerAuth +// @in header +// @name Authorization +// @description Type "Bearer" followed by a space and JWT token. + // @BasePath /api/v1 func NewRouter( db *database.Queries, @@ -78,7 +83,9 @@ func NewRouter( gob.Register([]string{}) - authMiddleware := auth.NewAuthMiddleware(baseURL, cfg.CookieSigningSalt) + tokenCache := auth.NewTokenCache(orm) + userCache := auth.NewUserCache(orm) + authMiddleware := auth.NewAuthMiddleware(baseURL, cfg.CookieSigningSalt, tokenCache, userCache) oauthClients := make(map[string]oauth.OAuthClient) oauthLoginCallbackURL := "/auth/login-callback" @@ -115,7 +122,7 @@ func NewRouter( orm, oauthClients, ) - userService := services.NewUserService(orm) + userService := services.NewUserService(orm, userCache) roleService := services.NewRoleService(orm) providerService := services.NewProviderService(providers) flowService := services.NewFlowService(orm, providers, controller) @@ -132,8 +139,9 @@ func NewRouter( screenshotService := services.NewScreenshotService(orm, cfg.DataDir) promptService := services.NewPromptService(orm) analyticsService := services.NewAnalyticsService(orm) + 
tokenService := services.NewTokenService(orm, cfg.CookieSigningSalt, tokenCache) graphqlService := services.NewGraphqlService( - db, cfg, baseURL, cfg.CorsOrigins, providers, controller, subscriptions, + db, cfg, baseURL, cfg.CorsOrigins, tokenCache, providers, controller, subscriptions, ) router := gin.Default() @@ -165,7 +173,7 @@ func NewRouter( // Special case for local user own password change changePasswordGroup := api.Group("/user") - changePasswordGroup.Use(authMiddleware.AuthRequired) + changePasswordGroup.Use(authMiddleware.AuthUserRequired) changePasswordGroup.Use(localUserRequired()) changePasswordGroup.PUT("/password", userService.ChangePasswordCurrentUser) @@ -192,13 +200,10 @@ func NewRouter( } privateGroup := api.Group("/") - privateGroup.Use(authMiddleware.AuthRequired) + privateGroup.Use(authMiddleware.AuthTokenRequired) { setGraphqlGroup(privateGroup, graphqlService) - setRolesGroup(privateGroup, roleService) - setUsersGroup(privateGroup, userService) - setProvidersGroup(privateGroup, providerService) setFlowsGroup(privateGroup, flowService) setTasksGroup(privateGroup, taskService) @@ -216,6 +221,14 @@ func NewRouter( setAnalyticsGroup(privateGroup, analyticsService) } + privateUserGroup := api.Group("/") + privateUserGroup.Use(authMiddleware.AuthUserRequired) + { + setRolesGroup(privateUserGroup, roleService) + setUsersGroup(privateUserGroup, userService) + setTokensGroup(privateUserGroup, tokenService) + } + if cfg.StaticURL != nil && cfg.StaticURL.Scheme != "" && cfg.StaticURL.Host != "" { router.NoRoute(func() gin.HandlerFunc { return func(c *gin.Context) { @@ -294,7 +307,7 @@ func setGraphqlGroup(parent *gin.RouterGroup, svc *services.GraphqlService) { func setSubtasksGroup(parent *gin.RouterGroup, svc *services.SubtaskService) { flowSubtasksViewGroup := parent.Group("/flows/:flowID/subtasks") { - flowSubtasksViewGroup.GET("/", svc.GetFlowSubtasks) } flowTaskSubtasksViewGroup :=
parent.Group("/flows/:flowID/tasks/:taskID/subtasks") @@ -470,6 +483,7 @@ func setPromptsGroup(parent *gin.RouterGroup, svc *services.PromptService) { { promptsEditGroup.PUT("/:promptType", svc.PatchPrompt) promptsEditGroup.POST("/:promptType/default", svc.ResetPrompt) + promptsEditGroup.DELETE("/:promptType", svc.DeletePrompt) } } @@ -523,3 +537,14 @@ func setAnalyticsGroup(parent *gin.RouterGroup, svc *services.AnalyticsService) flowUsageViewGroup.GET("/", svc.GetFlowUsage) } } + +func setTokensGroup(parent *gin.RouterGroup, svc *services.TokenService) { + tokensGroup := parent.Group("/tokens") + { + tokensGroup.POST("/", svc.CreateToken) + tokensGroup.GET("/", svc.ListTokens) + tokensGroup.GET("/:tokenID", svc.GetToken) + tokensGroup.PUT("/:tokenID", svc.UpdateToken) + tokensGroup.DELETE("/:tokenID", svc.DeleteToken) + } +} diff --git a/backend/pkg/server/services/agentlogs.go b/backend/pkg/server/services/agentlogs.go index 17e963cf..c427297b 100644 --- a/backend/pkg/server/services/agentlogs.go +++ b/backend/pkg/server/services/agentlogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,7 +25,7 @@ type agentlogsGrouped struct { Total uint64 `json:"total"` } -var agentlogsSQLMappers = map[string]interface{}{ +var agentlogsSQLMappers = map[string]any{ "id": "{{table}}.id", "initiator": "{{table}}.initiator", "executor": "{{table}}.executor", @@ -33,6 +34,7 @@ var agentlogsSQLMappers = map[string]interface{}{ "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", + "created_at": "{{table}}.created_at", "data": "({{table}}.task || ' ' || {{table}}.result)", } @@ -50,6 +52,7 @@ func NewAgentlogService(db *gorm.DB) *AgentlogService { // @Summary Retrieve agentlogs list // @Tags Agentlogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=agentlogs} "agentlogs list received 
successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -92,6 +95,12 @@ func (s *AgentlogService) GetAgentlogs(c *gin.Context) { query.Init("agentlogs", agentlogsSQLMappers) if query.Group != "" { + if _, ok := agentlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding agentlogs grouped: group field not found") + response.Error(c, response.ErrAgentlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped agentlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding agentlogs grouped") @@ -124,6 +133,7 @@ func (s *AgentlogService) GetAgentlogs(c *gin.Context) { // @Summary Retrieve agentlogs list by flow id // @Tags Agentlogs // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=agentlogs} "agentlogs list received successful" @@ -175,6 +185,12 @@ func (s *AgentlogService) GetFlowAgentlogs(c *gin.Context) { query.Init("agentlogs", agentlogsSQLMappers) if query.Group != "" { + if _, ok := agentlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding agentlogs grouped: group field not found") + response.Error(c, response.ErrAgentlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped agentlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding agentlogs grouped") diff --git a/backend/pkg/server/services/analytics.go b/backend/pkg/server/services/analytics.go index ff9da71b..76d5483c 100644 --- a/backend/pkg/server/services/analytics.go +++ b/backend/pkg/server/services/analytics.go @@ -31,6 +31,7 @@ func NewAnalyticsService(db *gorm.DB) *AnalyticsService { 
// @Description Get comprehensive analytics for all user's flows including usage, toolcalls, and structural stats // @Tags Usage // @Produce json +// @Security BearerAuth // @Success 200 {object} response.successResp{data=models.SystemUsageResponse} "analytics received successful" // @Failure 403 {object} response.errorResp "getting analytics not permitted" // @Failure 500 {object} response.errorResp "internal error on getting analytics" @@ -359,6 +360,7 @@ func (s *AnalyticsService) GetSystemUsage(c *gin.Context) { // @Description Get time-series analytics data for week, month, or quarter // @Tags Usage // @Produce json +// @Security BearerAuth // @Param period path string true "period" Enums(week, month, quarter) // @Success 200 {object} response.successResp{data=models.PeriodUsageResponse} "period analytics received successful" // @Failure 400 {object} response.errorResp "invalid period parameter" @@ -774,6 +776,7 @@ func (s *AnalyticsService) GetPeriodUsage(c *gin.Context) { // @Description Get comprehensive analytics for a single flow including all breakdowns // @Tags Flows, Usage // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Success 200 {object} response.successResp{data=models.FlowUsageResponse} "flow analytics received successful" // @Failure 400 {object} response.errorResp "invalid flow id" diff --git a/backend/pkg/server/services/api_tokens.go b/backend/pkg/server/services/api_tokens.go new file mode 100644 index 00000000..42e15031 --- /dev/null +++ b/backend/pkg/server/services/api_tokens.go @@ -0,0 +1,382 @@ +package services + +import ( + "errors" + "net/http" + "time" + + "pentagi/pkg/server/auth" + "pentagi/pkg/server/logger" + "pentagi/pkg/server/models" + "pentagi/pkg/server/response" + + "github.com/gin-gonic/gin" + "github.com/jinzhu/gorm" +) + +type tokens struct { + Tokens []models.APIToken `json:"tokens"` + Total uint64 `json:"total"` +} + +// TokenService handles API token management +type 
TokenService struct { + db *gorm.DB + globalSalt string + tokenCache *auth.TokenCache +} + +// NewTokenService creates a new TokenService instance +func NewTokenService(db *gorm.DB, globalSalt string, tokenCache *auth.TokenCache) *TokenService { + return &TokenService{ + db: db, + globalSalt: globalSalt, + tokenCache: tokenCache, + } +} + +// CreateToken creates a new API token +// @Summary Create new API token for automation +// @Tags Tokens +// @Accept json +// @Produce json +// @Param json body models.CreateAPITokenRequest true "Token creation request" +// @Success 201 {object} response.successResp{data=models.APITokenWithSecret} "token created successful" +// @Failure 400 {object} response.errorResp "invalid token request or default salt" +// @Failure 403 {object} response.errorResp "creating token not permitted" +// @Failure 500 {object} response.errorResp "internal error on creating token" +// @Router /tokens [post] +func (s *TokenService) CreateToken(c *gin.Context) { + // check for default salt + if s.globalSalt == "" || s.globalSalt == "salt" { + logger.FromContext(c).Errorf("token creation attempted with default salt") + response.Error(c, response.ErrTokenCreationDisabled, errors.New("token creation is disabled with default salt")) + return + } + + uid := c.GetUint64("uid") + rid := c.GetUint64("rid") + uhash := c.GetString("uhash") + + var req models.CreateAPITokenRequest + if err := c.ShouldBindJSON(&req); err != nil { + logger.FromContext(c).WithError(err).Errorf("error binding JSON") + response.Error(c, response.ErrTokenInvalidRequest, err) + return + } + if err := req.Valid(); err != nil { + logger.FromContext(c).WithError(err).Errorf("error validating JSON") + response.Error(c, response.ErrTokenInvalidRequest, err) + return + } + + // check if name is unique for this user (if provided) + if req.Name != nil && *req.Name != "" { + var existing models.APIToken + err := s.db. + Where("user_id = ? AND name = ? AND deleted_at IS NULL", uid, *req.Name). 
+ First(&existing). + Error + if err == nil { + logger.FromContext(c).Errorf("token with name '%s' already exists for user %d", *req.Name, uid) + response.Error(c, response.ErrTokenInvalidRequest, errors.New("token with this name already exists")) + return + } + } + + // generate token_id + tokenID, err := auth.GenerateTokenID() + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error generating token ID") + response.Error(c, response.ErrInternal, err) + return + } + + // create JWT claims + claims := auth.MakeAPITokenClaims(tokenID, uhash, uid, rid, req.TTL) + + // sign token + token, err := auth.MakeAPIToken(s.globalSalt, claims) + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error signing token") + response.Error(c, response.ErrInternal, err) + return + } + + // save to database + apiToken := models.APIToken{ + TokenID: tokenID, + UserID: uid, + RoleID: rid, + Name: req.Name, + TTL: req.TTL, + Status: models.TokenStatusActive, + } + + if err := s.db.Create(&apiToken).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error creating token in database") + response.Error(c, response.ErrInternal, err) + return + } + + result := models.APITokenWithSecret{ + APIToken: apiToken, + Token: token, + } + + // invalidate cache for negative caching results + s.tokenCache.Invalidate(apiToken.TokenID) + s.tokenCache.InvalidateUser(apiToken.UserID) + + response.Success(c, http.StatusCreated, result) +} + +// ListTokens returns a list of tokens (user sees only their own, admin sees all) +// @Summary List API tokens +// @Tags Tokens +// @Produce json +// @Success 200 {object} response.successResp{data=tokens} "tokens retrieved successful" +// @Failure 403 {object} response.errorResp "listing tokens not permitted" +// @Failure 500 {object} response.errorResp "internal error on listing tokens" +// @Router /tokens [get] +func (s *TokenService) ListTokens(c *gin.Context) { + uid := c.GetUint64("uid") + prms := c.GetStringSlice("prm") + 
+	query := s.db.Where("deleted_at IS NULL")
+
+	// check if user has admin privilege
+	hasAdmin := auth.LookupPerm(prms, "settings.tokens.admin")
+	if !hasAdmin {
+		// regular user sees only their own tokens
+		query = query.Where("user_id = ?", uid)
+	}
+
+	var tokenList []models.APIToken
+
+	if err := query.Order("created_at DESC").Find(&tokenList).Error; err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error finding tokens")
+		response.Error(c, response.ErrInternal, err)
+		return
+	}
+	// no pagination is applied, so the total always equals the returned slice length
+	total := uint64(len(tokenList))
+
+	for i := range tokenList {
+		token := &tokenList[i]
+		isExpired := token.CreatedAt.Add(time.Duration(token.TTL) * time.Second).Before(time.Now())
+		if token.Status == models.TokenStatusActive && isExpired {
+			token.Status = models.TokenStatusExpired
+		}
+	}
+
+	result := tokens{
+		Tokens: tokenList,
+		Total:  total,
+	}
+
+	response.Success(c, http.StatusOK, result)
+}
+
+// GetToken returns information about a specific token
+// @Summary Get API token details
+// @Tags Tokens
+// @Produce json
+// @Param tokenID path string true "Token ID"
+// @Success 200 {object} response.successResp{data=models.APIToken} "token retrieved successfully"
+// @Failure 403 {object} response.errorResp "accessing token not permitted"
+// @Failure 404 {object} response.errorResp "token not found"
+// @Failure 500 {object} response.errorResp "internal error on getting token"
+// @Router /tokens/{tokenID} [get]
+func (s *TokenService) GetToken(c *gin.Context) {
+	uid := c.GetUint64("uid")
+	prms := c.GetStringSlice("prm")
+	tokenID := c.Param("tokenID")
+
+	var token models.APIToken
+	if err := s.db.Where("token_id = ? AND deleted_at IS NULL", tokenID).First(&token).Error; err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error finding token")
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			response.Error(c, response.ErrTokenNotFound, err)
+		} else {
+			response.Error(c, response.ErrInternal, err)
+		}
+		return
+	}
+
+	// check authorization
+	hasAdmin := auth.LookupPerm(prms, "settings.tokens.admin")
+	if !hasAdmin && token.UserID != uid {
+		logger.FromContext(c).Errorf("user %d attempted to access token of user %d", uid, token.UserID)
+		response.Error(c, response.ErrTokenUnauthorized, errors.New("not authorized to access this token"))
+		return
+	}
+
+	isExpired := token.CreatedAt.Add(time.Duration(token.TTL) * time.Second).Before(time.Now())
+	if token.Status == models.TokenStatusActive && isExpired {
+		token.Status = models.TokenStatusExpired
+	}
+
+	if err := token.Valid(); err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error validating token data")
+		response.Error(c, response.ErrTokenInvalidData, err)
+		return
+	}
+
+	response.Success(c, http.StatusOK, token)
+}
+
+// UpdateToken updates name and/or status of a token
+// @Summary Update API token
+// @Tags Tokens
+// @Accept json
+// @Produce json
+// @Param tokenID path string true "Token ID"
+// @Param json body models.UpdateAPITokenRequest true "Token update request"
+// @Success 200 {object} response.successResp{data=models.APIToken} "token updated successfully"
+// @Failure 400 {object} response.errorResp "invalid update request"
+// @Failure 403 {object} response.errorResp "updating token not permitted"
+// @Failure 404 {object} response.errorResp "token not found"
+// @Failure 500 {object} response.errorResp "internal error on updating token"
+// @Router /tokens/{tokenID} [put]
+func (s *TokenService) UpdateToken(c *gin.Context) {
+	uid := c.GetUint64("uid")
+	prms := c.GetStringSlice("prm")
+	tokenID := c.Param("tokenID")
+
+	var req models.UpdateAPITokenRequest
+	if err := c.ShouldBindJSON(&req); err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error binding JSON")
+		response.Error(c, response.ErrTokenInvalidRequest, err)
+		return
+	}
+	if err := req.Valid(); err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error validating JSON")
+		response.Error(c, response.ErrTokenInvalidRequest, err)
+		return
+	}
+
+	var token models.APIToken
+	if err := s.db.Where("token_id = ? AND deleted_at IS NULL", tokenID).First(&token).Error; err != nil {
+		logger.FromContext(c).WithError(err).Errorf("error finding token")
+		if errors.Is(err, gorm.ErrRecordNotFound) {
+			response.Error(c, response.ErrTokenNotFound, err)
+		} else {
+			response.Error(c, response.ErrInternal, err)
+		}
+		return
+	}
+
+	// check authorization
+	hasAdmin := auth.LookupPerm(prms, "settings.tokens.admin")
+	if !hasAdmin && token.UserID != uid {
+		logger.FromContext(c).Errorf("user %d attempted to update token of user %d", uid, token.UserID)
+		response.Error(c, response.ErrTokenUnauthorized, errors.New("not authorized to update this token"))
+		return
+	}
+
+	// update fields
+	updates := make(map[string]any)
+	if req.Name != nil {
+		// check uniqueness if name is changing
+		if token.Name == nil || *token.Name != *req.Name {
+			if *req.Name != "" {
+				var existing models.APIToken
+				err := s.db.
+					Where("user_id = ? AND name = ? AND token_id != ? AND deleted_at IS NULL", token.UserID, *req.Name, tokenID).
+					First(&existing).
+ Error + if err == nil { + logger.FromContext(c).Errorf("token with name '%s' already exists for user %d", *req.Name, token.UserID) + response.Error(c, response.ErrTokenInvalidRequest, errors.New("token with this name already exists")) + return + } + } + } + updates["name"] = req.Name + } + switch req.Status { + case models.TokenStatusActive: + updates["status"] = models.TokenStatusActive + case models.TokenStatusRevoked: + updates["status"] = models.TokenStatusRevoked + case models.TokenStatusExpired: + updates["status"] = models.TokenStatusRevoked + } + + if len(updates) > 0 { + if err := s.db.Model(&token).Updates(updates).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error updating token") + response.Error(c, response.ErrInternal, err) + return + } + + // invalidate cache if status changed + if req.Status != "" { + s.tokenCache.Invalidate(tokenID) + // also invalidate all tokens for this user (in case of role change or security event) + s.tokenCache.InvalidateUser(token.UserID) + } + + // reload token + if err := s.db.Where("token_id = ?", tokenID).First(&token).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error reloading token") + response.Error(c, response.ErrInternal, err) + return + } + } + + isExpired := token.CreatedAt.Add(time.Duration(token.TTL) * time.Second).Before(time.Now()) + if token.Status == models.TokenStatusActive && isExpired { + token.Status = models.TokenStatusExpired + } + + response.Success(c, http.StatusOK, token) +} + +// DeleteToken performs soft delete of a token +// @Summary Delete API token +// @Tags Tokens +// @Produce json +// @Param tokenID path string true "Token ID" +// @Success 200 {object} response.successResp "token deleted successful" +// @Failure 403 {object} response.errorResp "deleting token not permitted" +// @Failure 404 {object} response.errorResp "token not found" +// @Failure 500 {object} response.errorResp "internal error on deleting token" +// @Router /tokens/{tokenID} 
[delete] +func (s *TokenService) DeleteToken(c *gin.Context) { + uid := c.GetUint64("uid") + prms := c.GetStringSlice("prm") + tokenID := c.Param("tokenID") + + var token models.APIToken + if err := s.db.Where("token_id = ? AND deleted_at IS NULL", tokenID).First(&token).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding token") + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrTokenNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + // check authorization + hasAdmin := auth.LookupPerm(prms, "settings.tokens.admin") + if !hasAdmin && token.UserID != uid { + logger.FromContext(c).Errorf("user %d attempted to delete token of user %d", uid, token.UserID) + response.Error(c, response.ErrTokenUnauthorized, errors.New("not authorized to delete this token")) + return + } + + // soft delete + if err := s.db.Delete(&token).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error deleting token") + response.Error(c, response.ErrInternal, err) + return + } + + // invalidate cache for this token and all user's tokens + s.tokenCache.Invalidate(tokenID) + s.tokenCache.InvalidateUser(token.UserID) + + response.Success(c, http.StatusOK, gin.H{"message": "token deleted successfully"}) +} diff --git a/backend/pkg/server/services/api_tokens_test.go b/backend/pkg/server/services/api_tokens_test.go new file mode 100644 index 00000000..f2e0d673 --- /dev/null +++ b/backend/pkg/server/services/api_tokens_test.go @@ -0,0 +1,1079 @@ +package services + +import ( + "bytes" + "encoding/json" + "fmt" + "net/http" + "net/http/httptest" + "testing" + "time" + + "pentagi/pkg/server/auth" + "pentagi/pkg/server/models" + + "github.com/gin-gonic/gin" + "github.com/golang-jwt/jwt/v5" + "github.com/jinzhu/gorm" + _ "github.com/jinzhu/gorm/dialects/sqlite" + "github.com/stretchr/testify/assert" + "github.com/stretchr/testify/require" +) + +func setupTestDB(t *testing.T) *gorm.DB { + 
t.Helper() + db, err := gorm.Open("sqlite3", ":memory:") + require.NoError(t, err) + + // Create roles table + db.Exec(` + CREATE TABLE roles ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + name TEXT NOT NULL UNIQUE + ) + `) + + // Create privileges table + db.Exec(` + CREATE TABLE privileges ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + role_id INTEGER NOT NULL, + name TEXT NOT NULL, + UNIQUE(role_id, name) + ) + `) + + // Create api_tokens table + db.Exec(` + CREATE TABLE api_tokens ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + token_id TEXT NOT NULL UNIQUE, + user_id INTEGER NOT NULL, + role_id INTEGER NOT NULL, + name TEXT, + ttl INTEGER NOT NULL, + status TEXT NOT NULL DEFAULT 'active', + created_at DATETIME DEFAULT CURRENT_TIMESTAMP, + updated_at DATETIME DEFAULT CURRENT_TIMESTAMP, + deleted_at DATETIME + ) + `) + + // Insert test roles + db.Exec("INSERT INTO roles (id, name) VALUES (1, 'Admin'), (2, 'User')") + + // Insert test privileges for Admin role + db.Exec(`INSERT INTO privileges (role_id, name) VALUES + (1, 'users.create'), + (1, 'users.delete'), + (1, 'users.edit'), + (1, 'users.view'), + (1, 'roles.view'), + (1, 'flows.admin'), + (1, 'flows.create'), + (1, 'flows.delete'), + (1, 'flows.edit'), + (1, 'flows.view'), + (1, 'settings.tokens.create'), + (1, 'settings.tokens.view'), + (1, 'settings.tokens.edit'), + (1, 'settings.tokens.delete'), + (1, 'settings.tokens.admin')`) + + // Insert test privileges for User role + db.Exec(`INSERT INTO privileges (role_id, name) VALUES + (2, 'roles.view'), + (2, 'flows.create'), + (2, 'flows.delete'), + (2, 'flows.edit'), + (2, 'flows.view'), + (2, 'settings.tokens.create'), + (2, 'settings.tokens.view'), + (2, 'settings.tokens.edit'), + (2, 'settings.tokens.delete')`) + + // Create users table + db.Exec(` + CREATE TABLE users ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + hash TEXT NOT NULL UNIQUE, + type TEXT NOT NULL DEFAULT 'local', + mail TEXT NOT NULL UNIQUE, + name TEXT NOT NULL DEFAULT '', + status TEXT NOT NULL 
DEFAULT 'active', + role_id INTEGER NOT NULL DEFAULT 2, + password TEXT, + password_change_required BOOLEAN NOT NULL DEFAULT false, + provider TEXT, + created_at DATETIME DEFAULT CURRENT_TIMESTAMP, + deleted_at DATETIME + ) + `) + + // Insert test users + db.Exec("INSERT INTO users (id, hash, mail, name, status, role_id) VALUES (1, 'testhash1', 'user1@test.com', 'User 1', 'active', 2)") + db.Exec("INSERT INTO users (id, hash, mail, name, status, role_id) VALUES (2, 'testhash2', 'user2@test.com', 'User 2', 'active', 2)") + + return db +} + +func setupTestContext(uid, rid uint64, uhash string, permissions []string) (*gin.Context, *httptest.ResponseRecorder) { + gin.SetMode(gin.TestMode) + w := httptest.NewRecorder() + c, _ := gin.CreateTestContext(w) + + c.Set("uid", uid) + c.Set("rid", rid) + c.Set("uhash", uhash) + c.Set("prm", permissions) + + return c, w +} + +func TestTokenService_CreateToken(t *testing.T) { + testCases := []struct { + name string + globalSalt string + requestBody string + uid uint64 + rid uint64 + uhash string + expectedCode int + expectToken bool + errorContains string + }{ + { + name: "valid token creation", + globalSalt: "custom_salt", + requestBody: `{"ttl": 3600, "name": "Test Token"}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusCreated, + expectToken: true, + }, + { + name: "default salt protection", + globalSalt: "salt", + requestBody: `{"ttl": 3600}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusBadRequest, + expectToken: false, + errorContains: "disabled", + }, + { + name: "empty salt protection", + globalSalt: "", + requestBody: `{"ttl": 3600}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusBadRequest, + expectToken: false, + errorContains: "disabled", + }, + { + name: "invalid TTL (too short)", + globalSalt: "custom_salt", + requestBody: `{"ttl": 30}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusBadRequest, + expectToken: false, + 
errorContains: "", + }, + { + name: "invalid TTL (too long)", + globalSalt: "custom_salt", + requestBody: `{"ttl": 100000000}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusBadRequest, + expectToken: false, + errorContains: "", + }, + { + name: "token without name", + globalSalt: "custom_salt", + requestBody: `{"ttl": 7200}`, + uid: 1, + rid: 2, + uhash: "testhash", + expectedCode: http.StatusCreated, + expectToken: true, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, tc.globalSalt, tokenCache) + c, w := setupTestContext(tc.uid, tc.rid, tc.uhash, []string{"settings.tokens.create"}) + + c.Request = httptest.NewRequest(http.MethodPost, "/tokens", bytes.NewBufferString(tc.requestBody)) + c.Request.Header.Set("Content-Type", "application/json") + + service.CreateToken(c) + + assert.Equal(t, tc.expectedCode, w.Code) + + if tc.expectToken { + var response struct { + Status string `json:"status"` + Data struct { + Token string `json:"token"` + TokenID string `json:"token_id"` + } `json:"data"` + } + err := json.Unmarshal(w.Body.Bytes(), &response) + require.NoError(t, err) + assert.Equal(t, "success", response.Status) + assert.NotEmpty(t, response.Data.Token) + assert.NotEmpty(t, response.Data.TokenID) + assert.Len(t, response.Data.TokenID, 10) + } + + if tc.errorContains != "" { + assert.Contains(t, w.Body.String(), tc.errorContains) + } + }) + } +} + +func TestTokenService_CreateToken_NameUniqueness(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create first token + c1, w1 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.create"}) + c1.Request = httptest.NewRequest(http.MethodPost, "/tokens", + bytes.NewBufferString(`{"ttl": 3600, "name": "Duplicate Name"}`)) + 
c1.Request.Header.Set("Content-Type", "application/json") + + service.CreateToken(c1) + assert.Equal(t, http.StatusCreated, w1.Code) + + // Try to create second token with same name for same user + c2, w2 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.create"}) + c2.Request = httptest.NewRequest(http.MethodPost, "/tokens", + bytes.NewBufferString(`{"ttl": 3600, "name": "Duplicate Name"}`)) + c2.Request.Header.Set("Content-Type", "application/json") + + service.CreateToken(c2) + assert.Equal(t, http.StatusBadRequest, w2.Code) + assert.Contains(t, w2.Body.String(), "already exists") + + // Create token with same name for different user (should succeed) + c3, w3 := setupTestContext(2, 2, "hash2", []string{"settings.tokens.create"}) + c3.Request = httptest.NewRequest(http.MethodPost, "/tokens", + bytes.NewBufferString(`{"ttl": 3600, "name": "Duplicate Name"}`)) + c3.Request.Header.Set("Content-Type", "application/json") + + service.CreateToken(c3) + assert.Equal(t, http.StatusCreated, w3.Code) +} + +func TestTokenService_ListTokens(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + // Create tokens for different users + tokens := []models.APIToken{ + {TokenID: "token1", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive}, + {TokenID: "token2", UserID: 1, RoleID: 2, TTL: 7200, Status: models.TokenStatusActive}, + {TokenID: "token3", UserID: 2, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive}, + {TokenID: "token4", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusRevoked}, + } + + for _, token := range tokens { + err := db.Create(&token).Error + require.NoError(t, err) + } + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + testCases := []struct { + name string + uid uint64 + permissions []string + expectedCount int + }{ + { + name: "regular user sees own tokens", + uid: 1, + permissions: []string{"settings.tokens.view"}, + expectedCount: 3, // token1, token2, 
token4 (including revoked) + }, + { + name: "admin sees all tokens", + uid: 1, + permissions: []string{"settings.tokens.view", "settings.tokens.admin"}, + expectedCount: 4, // all tokens + }, + { + name: "user 2 sees only own token", + uid: 2, + permissions: []string{"settings.tokens.view"}, + expectedCount: 1, // token3 + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + c, w := setupTestContext(tc.uid, 2, fmt.Sprintf("hash%d", tc.uid), tc.permissions) + c.Request = httptest.NewRequest(http.MethodGet, "/tokens", nil) + + service.ListTokens(c) + + assert.Equal(t, http.StatusOK, w.Code) + + var response struct { + Status string `json:"status"` + Data struct { + Tokens []models.APIToken `json:"tokens"` + Total uint64 `json:"total"` + } `json:"data"` + } + err := json.Unmarshal(w.Body.Bytes(), &response) + require.NoError(t, err) + assert.Equal(t, "success", response.Status) + assert.Equal(t, tc.expectedCount, len(response.Data.Tokens)) + assert.Equal(t, uint64(tc.expectedCount), response.Data.Total) + }) + } +} + +func TestTokenService_GetToken(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + // Create tokens + token1 := models.APIToken{TokenID: "usertoken1", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + token2 := models.APIToken{TokenID: "usertoken2", UserID: 2, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + + db.Create(&token1) + db.Create(&token2) + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + testCases := []struct { + name string + tokenID string + uid uint64 + permissions []string + expectedCode int + }{ + { + name: "user gets own token", + tokenID: "usertoken1", + uid: 1, + permissions: []string{"settings.tokens.view"}, + expectedCode: http.StatusOK, + }, + { + name: "user cannot get other user's token", + tokenID: "usertoken2", + uid: 1, + permissions: []string{"settings.tokens.view"}, + expectedCode: http.StatusForbidden, + 
}, + { + name: "admin can get any token", + tokenID: "usertoken2", + uid: 1, + permissions: []string{"settings.tokens.view", "settings.tokens.admin"}, + expectedCode: http.StatusOK, + }, + { + name: "nonexistent token", + tokenID: "nonexistent", + uid: 1, + permissions: []string{"settings.tokens.view", "settings.tokens.admin"}, + expectedCode: http.StatusNotFound, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + c, w := setupTestContext(tc.uid, 2, "testhash", tc.permissions) + c.Request = httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tokens/%s", tc.tokenID), nil) + c.Params = gin.Params{{Key: "tokenID", Value: tc.tokenID}} + + service.GetToken(c) + + assert.Equal(t, tc.expectedCode, w.Code) + + if tc.expectedCode == http.StatusOK { + var response struct { + Status string `json:"status"` + Data models.APIToken `json:"data"` + } + err := json.Unmarshal(w.Body.Bytes(), &response) + require.NoError(t, err) + assert.Equal(t, "success", response.Status) + assert.Equal(t, tc.tokenID, response.Data.TokenID) + } + }) + } +} + +func TestTokenService_UpdateToken(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create initial token + initialToken := models.APIToken{ + TokenID: "updatetest1", + UserID: 1, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&initialToken).Error + require.NoError(t, err) + + testCases := []struct { + name string + tokenID string + uid uint64 + permissions []string + requestBody string + expectedCode int + checkResult func(t *testing.T, db *gorm.DB) + }{ + { + name: "update name", + tokenID: "updatetest1", + uid: 1, + permissions: []string{"settings.tokens.edit"}, + requestBody: `{"name": "Updated Name"}`, + expectedCode: http.StatusOK, + checkResult: func(t *testing.T, db *gorm.DB) { + var token models.APIToken + db.Where("token_id = ?", 
"updatetest1").First(&token) + assert.NotNil(t, token.Name) + assert.Equal(t, "Updated Name", *token.Name) + }, + }, + { + name: "revoke token", + tokenID: "updatetest1", + uid: 1, + permissions: []string{"settings.tokens.edit"}, + requestBody: `{"status": "revoked"}`, + expectedCode: http.StatusOK, + checkResult: func(t *testing.T, db *gorm.DB) { + var token models.APIToken + db.Where("token_id = ?", "updatetest1").First(&token) + assert.Equal(t, models.TokenStatusRevoked, token.Status) + }, + }, + { + name: "reactivate token", + tokenID: "updatetest1", + uid: 1, + permissions: []string{"settings.tokens.edit"}, + requestBody: `{"status": "active"}`, + expectedCode: http.StatusOK, + checkResult: func(t *testing.T, db *gorm.DB) { + var token models.APIToken + db.Where("token_id = ?", "updatetest1").First(&token) + assert.Equal(t, models.TokenStatusActive, token.Status) + }, + }, + { + name: "unauthorized update (different user)", + tokenID: "updatetest1", + uid: 2, + permissions: []string{"settings.tokens.edit"}, + requestBody: `{"name": "Hacked"}`, + expectedCode: http.StatusForbidden, + checkResult: nil, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + c, w := setupTestContext(tc.uid, 2, "testhash", tc.permissions) + c.Request = httptest.NewRequest(http.MethodPut, + fmt.Sprintf("/tokens/%s", tc.tokenID), + bytes.NewBufferString(tc.requestBody)) + c.Request.Header.Set("Content-Type", "application/json") + c.Params = gin.Params{{Key: "tokenID", Value: tc.tokenID}} + + service.UpdateToken(c) + + assert.Equal(t, tc.expectedCode, w.Code) + + if tc.checkResult != nil { + tc.checkResult(t, db) + } + }) + } +} + +func TestTokenService_UpdateToken_NameUniqueness(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create two tokens + token1 := models.APIToken{TokenID: "token1", UserID: 1, RoleID: 2, TTL: 3600, Status: 
models.TokenStatusActive} + token2 := models.APIToken{TokenID: "token2", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token1) + db.Create(&token2) + + // Update token1 name + name1 := "First Token" + db.Model(&token1).Update("name", name1) + + // Try to update token2 with same name (should fail) + c, w := setupTestContext(1, 2, "hash1", []string{"settings.tokens.edit"}) + c.Request = httptest.NewRequest(http.MethodPut, "/tokens/token2", + bytes.NewBufferString(`{"name": "First Token"}`)) + c.Request.Header.Set("Content-Type", "application/json") + c.Params = gin.Params{{Key: "tokenID", Value: "token2"}} + + service.UpdateToken(c) + + assert.Equal(t, http.StatusBadRequest, w.Code) + assert.Contains(t, w.Body.String(), "already exists") +} + +func TestTokenService_DeleteToken(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + testCases := []struct { + name string + setupTokens func() string + tokenID string + uid uint64 + permissions []string + expectedCode int + }{ + { + name: "user deletes own token", + setupTokens: func() string { + token := models.APIToken{TokenID: "deltest1", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token) + return "deltest1" + }, + uid: 1, + permissions: []string{"settings.tokens.delete"}, + expectedCode: http.StatusOK, + }, + { + name: "user cannot delete other user's token", + setupTokens: func() string { + token := models.APIToken{TokenID: "deltest2", UserID: 2, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token) + return "deltest2" + }, + uid: 1, + permissions: []string{"settings.tokens.delete"}, + expectedCode: http.StatusForbidden, + }, + { + name: "admin can delete any token", + setupTokens: func() string { + token := models.APIToken{TokenID: "deltest3", UserID: 2, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + 
db.Create(&token) + return "deltest3" + }, + uid: 1, + permissions: []string{"settings.tokens.delete", "settings.tokens.admin"}, + expectedCode: http.StatusOK, + }, + { + name: "delete nonexistent token", + setupTokens: func() string { + return "nonexistent" + }, + uid: 1, + permissions: []string{"settings.tokens.delete", "settings.tokens.admin"}, + expectedCode: http.StatusNotFound, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + tokenID := tc.setupTokens() + + c, w := setupTestContext(tc.uid, 2, "testhash", tc.permissions) + c.Request = httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/tokens/%s", tokenID), nil) + c.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + + service.DeleteToken(c) + + assert.Equal(t, tc.expectedCode, w.Code) + + // Verify soft delete + if tc.expectedCode == http.StatusOK { + var deletedToken models.APIToken + err := db.Unscoped().Where("token_id = ?", tokenID).First(&deletedToken).Error + require.NoError(t, err) + assert.NotNil(t, deletedToken.DeletedAt) + } + }) + } +} + +func TestTokenService_DeleteToken_InvalidatesCache(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create token + tokenID := "cachetest1" + token := models.APIToken{TokenID: tokenID, UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token) + + // Populate cache + _, _, err := service.tokenCache.GetStatus(tokenID) + require.NoError(t, err) + + // Delete token + c, w := setupTestContext(1, 2, "hash1", []string{"settings.tokens.delete"}) + c.Request = httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/tokens/%s", tokenID), nil) + c.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + + service.DeleteToken(c) + assert.Equal(t, http.StatusOK, w.Code) + + // Cache should be invalidated (GetStatus should return error for deleted token) + _, _, err = 
service.tokenCache.GetStatus(tokenID) + assert.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) +} + +func TestTokenService_UpdateToken_InvalidatesCache(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create token + tokenID := "cachetest2" + token := models.APIToken{TokenID: tokenID, UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token) + + // Populate cache with active status + status, privileges, err := service.tokenCache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + assert.Contains(t, privileges, auth.PrivilegeAutomation) + + // Update status to revoked + c, w := setupTestContext(1, 2, "hash1", []string{"settings.tokens.edit"}) + c.Request = httptest.NewRequest(http.MethodPut, fmt.Sprintf("/tokens/%s", tokenID), + bytes.NewBufferString(`{"status": "revoked"}`)) + c.Request.Header.Set("Content-Type", "application/json") + c.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + + service.UpdateToken(c) + assert.Equal(t, http.StatusOK, w.Code) + + // Cache should be updated (should return revoked status) + status, privileges, err = service.tokenCache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusRevoked, status) + assert.NotEmpty(t, privileges) + assert.Contains(t, privileges, auth.PrivilegeAutomation) +} + +func TestTokenService_FullLifecycle(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Step 1: Create token + c1, w1 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.create"}) + c1.Request = httptest.NewRequest(http.MethodPost, "/tokens", + bytes.NewBufferString(`{"ttl": 3600, "name": "Lifecycle Test"}`)) + c1.Request.Header.Set("Content-Type", 
"application/json") + + service.CreateToken(c1) + assert.Equal(t, http.StatusCreated, w1.Code) + + var createResp struct { + Status string `json:"status"` + Data struct { + TokenID string `json:"token_id"` + Token string `json:"token"` + Name string `json:"name"` + } `json:"data"` + } + json.Unmarshal(w1.Body.Bytes(), &createResp) + tokenID := createResp.Data.TokenID + tokenString := createResp.Data.Token + + // Step 2: Validate token works + claims, err := auth.ValidateAPIToken(tokenString, "custom_salt") + require.NoError(t, err) + assert.Equal(t, tokenID, claims.TokenID) + + // Step 3: List tokens (should see it) + c2, w2 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.view"}) + c2.Request = httptest.NewRequest(http.MethodGet, "/tokens", nil) + service.ListTokens(c2) + + var listResp struct { + Status string `json:"status"` + Data struct { + Tokens []models.APIToken `json:"tokens"` + } `json:"data"` + } + json.Unmarshal(w2.Body.Bytes(), &listResp) + assert.True(t, len(listResp.Data.Tokens) > 0) + + // Step 4: Update token name + c3, w3 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.edit"}) + c3.Request = httptest.NewRequest(http.MethodPut, fmt.Sprintf("/tokens/%s", tokenID), + bytes.NewBufferString(`{"name": "Updated Lifecycle"}`)) + c3.Request.Header.Set("Content-Type", "application/json") + c3.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + service.UpdateToken(c3) + assert.Equal(t, http.StatusOK, w3.Code) + + // Step 5: Revoke token + c4, w4 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.edit"}) + c4.Request = httptest.NewRequest(http.MethodPut, fmt.Sprintf("/tokens/%s", tokenID), + bytes.NewBufferString(`{"status": "revoked"}`)) + c4.Request.Header.Set("Content-Type", "application/json") + c4.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + service.UpdateToken(c4) + assert.Equal(t, http.StatusOK, w4.Code) + + // Step 6: Verify revoked status in cache + status, privileges, err := 
service.tokenCache.GetStatus(tokenID) + require.NoError(t, err) + assert.Equal(t, models.TokenStatusRevoked, status) + assert.NotEmpty(t, privileges) + assert.Contains(t, privileges, auth.PrivilegeAutomation) + + // Step 7: Delete token + c5, w5 := setupTestContext(1, 2, "hash1", []string{"settings.tokens.delete"}) + c5.Request = httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/tokens/%s", tokenID), nil) + c5.Params = gin.Params{{Key: "tokenID", Value: tokenID}} + service.DeleteToken(c5) + assert.Equal(t, http.StatusOK, w5.Code) + + // Step 8: Verify soft delete + var deletedToken models.APIToken + err = db.Unscoped().Where("token_id = ?", tokenID).First(&deletedToken).Error + require.NoError(t, err) + assert.NotNil(t, deletedToken.DeletedAt) + + // Step 9: Token should not be found after deletion + _, _, err = service.tokenCache.GetStatus(tokenID) + assert.Error(t, err) + assert.Equal(t, gorm.ErrRecordNotFound, err) +} + +func TestTokenService_AdminPermissions(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + // Create tokens for different users + token1 := models.APIToken{TokenID: "admintest1", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + token2 := models.APIToken{TokenID: "admintest2", UserID: 2, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token1) + db.Create(&token2) + + adminUID := uint64(3) + + testCases := []struct { + name string + operation string + tokenID string + expectedCode int + }{ + { + name: "admin views user 1 token", + operation: "get", + tokenID: "admintest1", + expectedCode: http.StatusOK, + }, + { + name: "admin views user 2 token", + operation: "get", + tokenID: "admintest2", + expectedCode: http.StatusOK, + }, + { + name: "admin updates user 2 token", + operation: "update", + tokenID: "admintest2", + expectedCode: http.StatusOK, + }, + { + name: "admin deletes user 1 token", + 
operation: "delete", + tokenID: "admintest1", + expectedCode: http.StatusOK, + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + c, w := setupTestContext(adminUID, 1, "adminhash", []string{ + "settings.tokens.admin", + "settings.tokens.view", + "settings.tokens.edit", + "settings.tokens.delete", + }) + + switch tc.operation { + case "get": + c.Request = httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tokens/%s", tc.tokenID), nil) + c.Params = gin.Params{{Key: "tokenID", Value: tc.tokenID}} + service.GetToken(c) + case "update": + c.Request = httptest.NewRequest(http.MethodPut, fmt.Sprintf("/tokens/%s", tc.tokenID), + bytes.NewBufferString(`{"status": "revoked"}`)) + c.Request.Header.Set("Content-Type", "application/json") + c.Params = gin.Params{{Key: "tokenID", Value: tc.tokenID}} + service.UpdateToken(c) + case "delete": + c.Request = httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/tokens/%s", tc.tokenID), nil) + c.Params = gin.Params{{Key: "tokenID", Value: tc.tokenID}} + service.DeleteToken(c) + } + + assert.Equal(t, tc.expectedCode, w.Code) + }) + } +} + +func TestTokenService_TokenPrivileges(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + + // Create admin token (role_id = 1) + adminToken := models.APIToken{ + TokenID: "admin_priv_test", + UserID: 1, + RoleID: 1, + TTL: 3600, + Status: models.TokenStatusActive, + } + err := db.Create(&adminToken).Error + require.NoError(t, err) + + // Create user token (role_id = 2) + userToken := models.APIToken{ + TokenID: "user_priv_test", + UserID: 2, + RoleID: 2, + TTL: 3600, + Status: models.TokenStatusActive, + } + err = db.Create(&userToken).Error + require.NoError(t, err) + + // Test admin privileges + t.Run("admin token has admin privileges", func(t *testing.T) { + status, privileges, err := tokenCache.GetStatus("admin_priv_test") + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + 
assert.NotEmpty(t, privileges) + + // Should have automation privilege + assert.Contains(t, privileges, auth.PrivilegeAutomation) + + // Should have admin-specific privileges + assert.Contains(t, privileges, "users.create") + assert.Contains(t, privileges, "users.delete") + assert.Contains(t, privileges, "settings.tokens.admin") + assert.Contains(t, privileges, "flows.admin") + }) + + // Test user privileges + t.Run("user token has limited privileges", func(t *testing.T) { + status, privileges, err := tokenCache.GetStatus("user_priv_test") + require.NoError(t, err) + assert.Equal(t, models.TokenStatusActive, status) + assert.NotEmpty(t, privileges) + + // Should have automation privilege + assert.Contains(t, privileges, auth.PrivilegeAutomation) + + // Should have user-level privileges + assert.Contains(t, privileges, "flows.create") + assert.Contains(t, privileges, "settings.tokens.view") + + // Should NOT have admin-specific privileges + assert.NotContains(t, privileges, "users.create") + assert.NotContains(t, privileges, "users.delete") + assert.NotContains(t, privileges, "settings.tokens.admin") + assert.NotContains(t, privileges, "flows.admin") + }) + + // Test privilege caching + t.Run("privileges are cached", func(t *testing.T) { + // First call - loads from DB + _, privileges1, err := tokenCache.GetStatus("admin_priv_test") + require.NoError(t, err) + assert.NotEmpty(t, privileges1) + + // Second call - loads from cache + _, privileges2, err := tokenCache.GetStatus("admin_priv_test") + require.NoError(t, err) + assert.Equal(t, privileges1, privileges2) + }) + + // Test cache invalidation updates privileges + t.Run("cache invalidation reloads privileges", func(t *testing.T) { + // Get initial privileges + _, initialPrivs, err := tokenCache.GetStatus("user_priv_test") + require.NoError(t, err) + assert.NotEmpty(t, initialPrivs) + + // Update user's role to admin in DB + db.Model(&userToken).Update("role_id", 1) + + // Privileges should still be cached (old 
privileges) + _, cachedPrivs, err := tokenCache.GetStatus("user_priv_test") + require.NoError(t, err) + assert.NotContains(t, cachedPrivs, "users.create") // still user privileges + + // Invalidate cache + tokenCache.Invalidate("user_priv_test") + + // Should now have admin privileges + _, newPrivs, err := tokenCache.GetStatus("user_priv_test") + require.NoError(t, err) + assert.Contains(t, newPrivs, "users.create") // now has admin privileges + assert.Contains(t, newPrivs, "settings.tokens.admin") + }) +} + +func TestTokenService_SecurityChecks(t *testing.T) { + t.Run("token secret not stored in database", func(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenCache := auth.NewTokenCache(db) + service := NewTokenService(db, "custom_salt", tokenCache) + + c, w := setupTestContext(1, 2, "hash1", []string{"settings.tokens.create"}) + c.Request = httptest.NewRequest(http.MethodPost, "/tokens", + bytes.NewBufferString(`{"ttl": 3600}`)) + c.Request.Header.Set("Content-Type", "application/json") + + service.CreateToken(c) + assert.Equal(t, http.StatusCreated, w.Code) + + var response struct { + Status string `json:"status"` + Data struct { + Token string `json:"token"` + TokenID string `json:"token_id"` + } `json:"data"` + } + json.Unmarshal(w.Body.Bytes(), &response) + + // Verify token is returned in response + assert.NotEmpty(t, response.Data.Token) + + // Verify token is NOT in database + var dbToken models.APIToken + db.Where("token_id = ?", response.Data.TokenID).First(&dbToken) + + // Database should only have metadata, no token field + assert.Equal(t, response.Data.TokenID, dbToken.TokenID) + // Note: our model doesn't have Token field in APIToken, only in APITokenWithSecret for response + }) + + t.Run("token claims trusted from JWT", func(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + tokenID, err := auth.GenerateTokenID() + require.NoError(t, err) + + // Create token in DB with role_id = 2 + apiToken := models.APIToken{TokenID: 
tokenID, UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + err = db.Create(&apiToken).Error + require.NoError(t, err) + + // Create JWT with role_id = 1 (admin, different from DB) + claims := models.APITokenClaims{ + TokenID: tokenID, + RID: 1, // admin role in JWT + UID: 1, + UHASH: "testhash", + RegisteredClaims: jwt.RegisteredClaims{ + ExpiresAt: jwt.NewNumericDate(time.Now().Add(1 * time.Hour)), + IssuedAt: jwt.NewNumericDate(time.Now()), + Subject: "api_token", + }, + } + token := jwt.NewWithClaims(jwt.SigningMethodHS256, claims) + tokenString, err := token.SignedString(auth.MakeJWTSigningKey("test")) + require.NoError(t, err) + + // Validate token + validated, err := auth.ValidateAPIToken(tokenString, "test") + require.NoError(t, err) + + // We should trust JWT claims, not DB values + assert.Equal(t, uint64(1), validated.RID, "Should use role_id from JWT claims") + assert.NotEqual(t, apiToken.RoleID, validated.RID, "Should not use role_id from database") + }) + + t.Run("updated_at auto-updates", func(t *testing.T) { + db := setupTestDB(t) + defer db.Close() + + // Create token + token := models.APIToken{TokenID: "updatetime1", UserID: 1, RoleID: 2, TTL: 3600, Status: models.TokenStatusActive} + db.Create(&token) + + _ = token.UpdatedAt // record original time (trigger would update in real PostgreSQL) + time.Sleep(10 * time.Millisecond) + + // Update token + db.Model(&token).Update("status", models.TokenStatusRevoked) + + // Reload + var updated models.APIToken + db.Where("token_id = ?", "updatetime1").First(&updated) + + // updated_at should have changed + // Note: SQLite may not have trigger support in memory, but this demonstrates intent + // In real PostgreSQL, the trigger would update this automatically + }) +} diff --git a/backend/pkg/server/services/assistantlogs.go b/backend/pkg/server/services/assistantlogs.go index aad813ab..1b7f907a 100644 --- a/backend/pkg/server/services/assistantlogs.go +++ 
b/backend/pkg/server/services/assistantlogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,7 +25,7 @@ type assistantlogsGrouped struct { Total uint64 `json:"total"` } -var assistantlogsSQLMappers = map[string]interface{}{ +var assistantlogsSQLMappers = map[string]any{ "id": "{{table}}.id", "type": "{{table}}.type", "message": "{{table}}.message", @@ -32,6 +33,7 @@ var assistantlogsSQLMappers = map[string]interface{}{ "result_format": "{{table}}.result_format", "flow_id": "{{table}}.flow_id", "assistant_id": "{{table}}.assistant_id", + "created_at": "{{table}}.created_at", "data": "({{table}}.type || ' ' || {{table}}.message || ' ' || {{table}}.result)", } @@ -49,6 +51,7 @@ func NewAssistantlogService(db *gorm.DB) *AssistantlogService { // @Summary Retrieve assistantlogs list // @Tags Assistantlogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=assistantlogs} "assistantlogs list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -91,6 +94,12 @@ func (s *AssistantlogService) GetAssistantlogs(c *gin.Context) { query.Init("assistantlogs", assistantlogsSQLMappers) if query.Group != "" { + if _, ok := assistantlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding assistantlogs grouped: group field not found") + response.Error(c, response.ErrAssistantlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped assistantlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding assistantlogs grouped") @@ -123,6 +132,7 @@ func (s *AssistantlogService) GetAssistantlogs(c *gin.Context) { // @Summary Retrieve assistantlogs list by flow id // @Tags Assistantlogs // @Produce json +// @Security BearerAuth // @Param flowID 
path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=assistantlogs} "assistantlogs list received successful" @@ -174,6 +184,12 @@ func (s *AssistantlogService) GetFlowAssistantlogs(c *gin.Context) { query.Init("assistantlogs", assistantlogsSQLMappers) if query.Group != "" { + if _, ok := assistantlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding assistantlogs grouped: group field not found") + response.Error(c, response.ErrAssistantlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped assistantlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding assistantlogs grouped") diff --git a/backend/pkg/server/services/assistants.go b/backend/pkg/server/services/assistants.go index c9403e61..b5ef2cdd 100644 --- a/backend/pkg/server/services/assistants.go +++ b/backend/pkg/server/services/assistants.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -27,13 +28,19 @@ type assistantsGrouped struct { Total uint64 `json:"total"` } -var assistantsSQLMappers = map[string]interface{}{ - "id": "{{table}}.id", - "status": "{{table}}.status", - "title": "{{table}}.title", - "flow_id": "{{table}}.flow_id", - "msgchain_id": "{{table}}.msgchain_id", - "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.flow_id)", +var assistantsSQLMappers = map[string]any{ + "id": "{{table}}.id", + "status": "{{table}}.status", + "title": "{{table}}.title", + "model": "{{table}}.model", + "model_provider_name": "{{table}}.model_provider_name", + "model_provider_type": "{{table}}.model_provider_type", + "language": "{{table}}.language", + "flow_id": "{{table}}.flow_id", + "msgchain_id": "{{table}}.msgchain_id", + "created_at": "{{table}}.created_at", + "updated_at": 
"{{table}}.updated_at", + "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.flow_id)", } type AssistantService struct { @@ -54,6 +61,7 @@ func NewAssistantService(db *gorm.DB, pc providers.ProviderController, fc contro // @Summary Retrieve assistants list // @Tags Assistants // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=assistants} "assistants list received successful" @@ -87,13 +95,13 @@ func (s *AssistantService) GetFlowAssistants(c *gin.Context) { if slices.Contains(privs, "assistants.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = assistants.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "assistants.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = assistants.flow_id"). Where("f.id = ? 
AND f.user_id = ?", flowID, uid) } } else { @@ -105,6 +113,12 @@ func (s *AssistantService) GetFlowAssistants(c *gin.Context) { query.Init("assistants", assistantsSQLMappers) if query.Group != "" { + if _, ok := assistantsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding assistants grouped: group field not found") + response.Error(c, response.ErrAssistantsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped assistantsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding assistants grouped") @@ -137,6 +151,7 @@ func (s *AssistantService) GetFlowAssistants(c *gin.Context) { // @Summary Retrieve flow assistant by id // @Tags Assistants // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param assistantID path int true "assistant id" minimum(0) // @Success 200 {object} response.successResp{data=models.Assistant} "flow assistant received successful" @@ -170,13 +185,13 @@ func (s *AssistantService) GetFlowAssistant(c *gin.Context) { if slices.Contains(privs, "assistants.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = assistants.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "assistants.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = assistants.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -187,7 +202,7 @@ func (s *AssistantService) GetFlowAssistant(c *gin.Context) { err = s.db.Model(&resp). Scopes(scope). - Where("id = ?", assistantID). + Where("assistants.id = ?", assistantID). 
Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting flow assistant by id") @@ -207,6 +222,7 @@ func (s *AssistantService) GetFlowAssistant(c *gin.Context) { // @Tags Assistants // @Accept json // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param json body models.CreateAssistant true "assistant model to create" // @Success 201 {object} response.successResp{data=models.AssistantFlow} "assistant created successful" @@ -289,6 +305,7 @@ func (s *AssistantService) CreateFlowAssistant(c *gin.Context) { // @Tags Assistants // @Accept json // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param assistantID path int true "assistant id" minimum(0) // @Param json body models.PatchAssistant true "assistant model to patch" @@ -325,6 +342,13 @@ func (s *AssistantService) PatchAssistant(c *gin.Context) { return } + assistantID, err = strconv.ParseUint(c.Param("assistantID"), 10, 64) + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error parsing assistant id") + response.Error(c, response.ErrAssistantsInvalidRequest, err) + return + } + uid := c.GetUint64("uid") privs := c.GetStringSlice("prm") var scope func(db *gorm.DB) *gorm.DB @@ -409,6 +433,7 @@ func (s *AssistantService) PatchAssistant(c *gin.Context) { // DeleteAssistant is a function to delete assistant by id // @Summary Delete assistant by id // @Tags Assistants +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param assistantID path int true "assistant id" minimum(0) // @Success 200 {object} response.successResp{data=models.AssistantFlow} "assistant deleted successful" diff --git a/backend/pkg/server/services/auth.go b/backend/pkg/server/services/auth.go index 53a27d50..679bbd74 100644 --- a/backend/pkg/server/services/auth.go +++ b/backend/pkg/server/services/auth.go @@ -12,6 +12,7 @@ import ( "net/http" "net/url" "path" + "slices" 
"strconv" "strings" "time" @@ -713,6 +714,7 @@ type info struct { // @Summary Retrieve current user and system settings // @Tags Public // @Produce json +// @Security BearerAuth // @Param refresh_cookie query boolean false "boolean arg to refresh current cookie, use explicit false" // @Success 200 {object} response.successResp{data=info} "info received successful" // @Failure 403 {object} response.errorResp "getting info not permitted" @@ -729,6 +731,7 @@ func (s *AuthService) Info(c *gin.Context) { tid := c.GetString("tid") exp := c.GetInt64("exp") gtm := c.GetInt64("gtm") + cpt := c.GetString("cpt") privs := c.GetStringSlice("prm") resp.Privs = privs @@ -741,7 +744,16 @@ func (s *AuthService) Info(c *gin.Context) { } logger.FromContext(c).WithFields(logrus.Fields( - map[string]interface{}{"exp": exp, "gtm": gtm, "uhash": uhash, "now": now})).Trace("AuthService.Info") + map[string]any{ + "exp": exp, + "gtm": gtm, + "uhash": uhash, + "now": now, + "cpt": cpt, + "uid": uid, + "tid": tid, + }, + )).Trace("AuthService.Info") if uhash == "" || exp == 0 || gtm == 0 || now > exp { resp.Type = "guest" @@ -750,8 +762,6 @@ func (s *AuthService) Info(c *gin.Context) { return } - resp.Type = "user" - err := s.db.Take(&resp.User, "id = ?", uid).Related(&resp.Role).Error if err != nil { response.Error(c, response.ErrInfoUserNotFound, err) @@ -767,6 +777,21 @@ func (s *AuthService) Info(c *gin.Context) { return } + if cpt == "automation" { + resp.Type = models.UserTypeAPI.String() + // filter out privileges that are not supported for API tokens + privs = slices.DeleteFunc(privs, func(priv string) bool { + return strings.HasPrefix(priv, "users.") || + strings.HasPrefix(priv, "roles.") || + strings.HasPrefix(priv, "settings.tokens.") + }) + resp.Privs = privs + response.Success(c, http.StatusOK, resp) + return + } + + resp.Type = "user" + // check 5 minutes timeout to refresh current token var fiveMins int64 = 5 * 60 if now >= gtm+fiveMins && c.Query("refresh_cookie") != "false" { 
diff --git a/backend/pkg/server/services/containers.go b/backend/pkg/server/services/containers.go index e30c14f6..a804f60b 100644 --- a/backend/pkg/server/services/containers.go +++ b/backend/pkg/server/services/containers.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -20,20 +21,22 @@ type containers struct { } type containersGrouped struct { - Grouped []containers `json:"grouped"` - Total uint64 `json:"total"` + Grouped []string `json:"grouped"` + Total uint64 `json:"total"` } -var containersSQLMappers = map[string]interface{}{ - "id": "{{table}}.id", - "type": "{{table}}.type", - "name": "{{table}}.name", - "image": "{{table}}.image", - "status": "{{table}}.status", - "local_id": "{{table}}.local_id", - "local_dir": "{{table}}.local_dir", - "flow_id": "{{table}}.flow_id", - "data": "({{table}}.type || ' ' || {{table}}.name || ' ' || {{table}}.status || ' ' || {{table}}.local_id || ' ' || {{table}}.local_dir)", +var containersSQLMappers = map[string]any{ + "id": "{{table}}.id", + "type": "{{table}}.type", + "name": "{{table}}.name", + "image": "{{table}}.image", + "status": "{{table}}.status", + "local_id": "{{table}}.local_id", + "local_dir": "{{table}}.local_dir", + "flow_id": "{{table}}.flow_id", + "created_at": "{{table}}.created_at", + "updated_at": "{{table}}.updated_at", + "data": "({{table}}.type || ' ' || {{table}}.name || ' ' || {{table}}.status || ' ' || {{table}}.local_id || ' ' || {{table}}.local_dir)", } type ContainerService struct { @@ -50,6 +53,7 @@ func NewContainerService(db *gorm.DB) *ContainerService { // @Summary Retrieve containers list // @Tags Containers // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=containers} "containers list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -75,12 +79,12 @@ func (s *ContainerService) GetContainers(c 
*gin.Context) { if slices.Contains(privs, "containers.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id") + Joins("INNER JOIN flows f ON f.id = containers.flow_id") } } else if slices.Contains(privs, "containers.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = containers.flow_id"). Where("f.user_id = ?", uid) } } else { @@ -92,6 +96,12 @@ func (s *ContainerService) GetContainers(c *gin.Context) { query.Init("containers", containersSQLMappers) if query.Group != "" { + if _, ok := containersSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding containers grouped: group field not found") + response.Error(c, response.ErrContainersInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped containersGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding containers grouped") @@ -124,6 +134,7 @@ func (s *ContainerService) GetContainers(c *gin.Context) { // @Summary Retrieve containers list by flow id // @Tags Containers // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=containers} "containers list received successful" @@ -157,13 +168,13 @@ func (s *ContainerService) GetFlowContainers(c *gin.Context) { if slices.Contains(privs, "containers.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = containers.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "containers.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). 
+ Joins("INNER JOIN flows f ON f.id = containers.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -175,6 +186,12 @@ func (s *ContainerService) GetFlowContainers(c *gin.Context) { query.Init("containers", containersSQLMappers) if query.Group != "" { + if _, ok := containersSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding containers grouped: group field not found") + response.Error(c, response.ErrContainersInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped containersGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding containers grouped") @@ -207,6 +224,7 @@ func (s *ContainerService) GetFlowContainers(c *gin.Context) { // @Summary Retrieve container info by id and flow id // @Tags Containers // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param containerID path int true "container id" minimum(0) // @Success 200 {object} response.successResp{data=models.Container} "container info received successful" @@ -253,7 +271,7 @@ func (s *ContainerService) GetFlowContainer(c *gin.Context) { err = s.db.Model(&resp). Joins("INNER JOIN flows f ON f.id = flow_id"). Scopes(scope). - Where("id = ?", containerID). + Where("containers.id = ?", containerID). 
Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting container by id") diff --git a/backend/pkg/server/services/flows.go b/backend/pkg/server/services/flows.go index 6f1213bb..1267c0d3 100644 --- a/backend/pkg/server/services/flows.go +++ b/backend/pkg/server/services/flows.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -27,15 +28,17 @@ type flowsGrouped struct { Total uint64 `json:"total"` } -var flowsSQLMappers = map[string]interface{}{ - "id": "{{table}}.id", - "status": "{{table}}.status", - "title": "{{table}}.title", - "model": "{{table}}.model", - "model_provider": "{{table}}.model_provider", - "language": "{{table}}.language", - "user_id": "{{table}}.user_id", - "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.model || ' ' || {{table}}.model_provider || ' ' || {{table}}.language)", +var flowsSQLMappers = map[string]any{ + "id": "{{table}}.id", + "status": "{{table}}.status", + "title": "{{table}}.title", + "model": "{{table}}.model", + "model_provider_name": "{{table}}.model_provider_name", + "model_provider_type": "{{table}}.model_provider_type", + "language": "{{table}}.language", + "created_at": "{{table}}.created_at", + "updated_at": "{{table}}.updated_at", + "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.model || ' ' || {{table}}.model_provider || ' ' || {{table}}.language)", } type FlowService struct { @@ -56,6 +59,7 @@ func NewFlowService(db *gorm.DB, pc providers.ProviderController, fc controller. 
// @Summary Retrieve flows list // @Tags Flows // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=flows} "flows list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -95,6 +99,12 @@ func (s *FlowService) GetFlows(c *gin.Context) { query.Init("flows", flowsSQLMappers) if query.Group != "" { + if _, ok := flowsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding flows grouped: group field not found") + response.Error(c, response.ErrFlowsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped flowsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding flows grouped") @@ -127,6 +137,7 @@ func (s *FlowService) GetFlows(c *gin.Context) { // @Summary Retrieve flow by id // @Tags Flows // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Success 200 {object} response.successResp{data=models.Flow} "flow received successful" // @Failure 403 {object} response.errorResp "getting flow not permitted" @@ -180,6 +191,7 @@ func (s *FlowService) GetFlow(c *gin.Context) { // @Summary Retrieve flow graph by id // @Tags Flows // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Success 200 {object} response.successResp{data=models.FlowTasksSubtasks} "flow graph received successful" // @Failure 403 {object} response.errorResp "getting flow graph not permitted" @@ -291,6 +303,7 @@ func (s *FlowService) GetFlowGraph(c *gin.Context) { // @Tags Flows // @Accept json // @Produce json +// @Security BearerAuth // @Param json body models.CreateFlow true "flow model to create" // @Success 201 {object} response.successResp{data=models.Flow} "flow created successful" // @Failure 400 {object} 
response.errorResp "invalid flow request data" @@ -356,6 +369,7 @@ func (s *FlowService) CreateFlow(c *gin.Context) { // @Tags Flows // @Accept json // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param json body models.PatchFlow true "flow model to patch" // @Success 200 {object} response.successResp{data=models.Flow} "flow patched successful" @@ -471,6 +485,7 @@ func (s *FlowService) PatchFlow(c *gin.Context) { // DeleteFlow is a function to delete flow by id // @Summary Delete flow by id // @Tags Flows +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Success 200 {object} response.successResp{data=models.Flow} "flow deleted successful" // @Failure 403 {object} response.errorResp "deleting flow not permitted" diff --git a/backend/pkg/server/services/graphql.go b/backend/pkg/server/services/graphql.go index 9e071040..fb6882b4 100644 --- a/backend/pkg/server/services/graphql.go +++ b/backend/pkg/server/services/graphql.go @@ -14,6 +14,7 @@ import ( "pentagi/pkg/graph" "pentagi/pkg/graph/subscriptions" "pentagi/pkg/providers" + "pentagi/pkg/server/auth" "pentagi/pkg/server/logger" "pentagi/pkg/templates" @@ -51,6 +52,7 @@ func NewGraphqlService( cfg *config.Config, baseURL string, origins []string, + tokenCache *auth.TokenCache, providers providers.ProviderController, controller controller.FlowController, subscriptions subscriptions.SubscriptionsController, @@ -59,6 +61,7 @@ func NewGraphqlService( DB: db, Config: cfg, Logger: logrus.StandardLogger().WithField("component", "pentagi-gql-bl"), + TokenCache: tokenCache, DefaultPrompter: templates.NewDefaultPrompter(), ProvidersCtrl: providers, Controller: controller, @@ -117,6 +120,7 @@ func NewGraphqlService( // @Tags GraphQL // @Accept json // @Produce json +// @Security BearerAuth // @Param json body graphql.RawParams true "graphql request" // @Success 200 {object} graphql.Response "graphql response" // @Failure 400 {object} 
graphql.Response "invalid graphql request data" @@ -125,6 +129,7 @@ func NewGraphqlService( // @Router /graphql [post] func (s *GraphqlService) ServeGraphql(c *gin.Context) { uid := c.GetUint64("uid") + tid := c.GetString("tid") privs := c.GetStringSlice("prm") savedCtx := c.Request.Context() @@ -134,6 +139,7 @@ func (s *GraphqlService) ServeGraphql(c *gin.Context) { ctx := savedCtx ctx = graph.SetUserID(ctx, uid) + ctx = graph.SetUserType(ctx, tid) ctx = graph.SetUserPermissions(ctx, privs) c.Request = c.Request.WithContext(ctx) diff --git a/backend/pkg/server/services/msglogs.go b/backend/pkg/server/services/msglogs.go index c5937c45..bcaa0ba7 100644 --- a/backend/pkg/server/services/msglogs.go +++ b/backend/pkg/server/services/msglogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,16 +25,18 @@ type msglogsGrouped struct { Total uint64 `json:"total"` } -var msglogsSQLMappers = map[string]interface{}{ +var msglogsSQLMappers = map[string]any{ "id": "{{table}}.id", "type": "{{table}}.type", "message": "{{table}}.message", + "thinking": "{{table}}.thinking", "result": "{{table}}.result", "result_format": "{{table}}.result_format", "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", - "data": "({{table}}.type || ' ' || {{table}}.message || ' ' || {{table}}.result)", + "created_at": "{{table}}.created_at", + "data": "({{table}}.type || ' ' || {{table}}.message || ' ' || {{table}}.thinking || ' ' || {{table}}.result)", } type MsglogService struct { @@ -50,6 +53,7 @@ func NewMsglogService(db *gorm.DB) *MsglogService { // @Summary Retrieve msglogs list // @Tags Msglogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=msglogs} "msglogs list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -75,12 +79,12 @@ func (s 
*MsglogService) GetMsglogs(c *gin.Context) { if slices.Contains(privs, "msglogs.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id") + Joins("INNER JOIN flows f ON f.id = msglogs.flow_id") } } else if slices.Contains(privs, "msglogs.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = msglogs.flow_id"). Where("f.user_id = ?", uid) } } else { @@ -92,6 +96,12 @@ func (s *MsglogService) GetMsglogs(c *gin.Context) { query.Init("msglogs", msglogsSQLMappers) if query.Group != "" { + if _, ok := msglogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding msglogs grouped: group field not found") + response.Error(c, response.ErrMsglogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped msglogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding msglogs grouped") @@ -124,6 +134,7 @@ func (s *MsglogService) GetMsglogs(c *gin.Context) { // @Summary Retrieve msglogs list by flow id // @Tags Msglogs // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=msglogs} "msglogs list received successful" @@ -157,13 +168,13 @@ func (s *MsglogService) GetFlowMsglogs(c *gin.Context) { if slices.Contains(privs, "msglogs.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = msglogs.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "msglogs.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = msglogs.flow_id"). Where("f.id = ? 
AND f.user_id = ?", flowID, uid) } } else { @@ -175,6 +186,12 @@ func (s *MsglogService) GetFlowMsglogs(c *gin.Context) { query.Init("msglogs", msglogsSQLMappers) if query.Group != "" { + if _, ok := msglogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding msglogs grouped: group field not found") + response.Error(c, response.ErrMsglogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped msglogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding msglogs grouped") diff --git a/backend/pkg/server/services/prompts.go b/backend/pkg/server/services/prompts.go index f7ab4684..c3d0db39 100644 --- a/backend/pkg/server/services/prompts.go +++ b/backend/pkg/server/services/prompts.go @@ -9,6 +9,7 @@ import ( "pentagi/pkg/server/models" "pentagi/pkg/server/rdb" "pentagi/pkg/server/response" + "pentagi/pkg/templates" "github.com/gin-gonic/gin" "github.com/jinzhu/gorm" @@ -19,19 +20,28 @@ type prompts struct { Total uint64 `json:"total"` } -var promptsSQLMappers = map[string]interface{}{ - "type": "{{table}}.type", - "prompt": "{{table}}.prompt", - "data": "({{table}}.type || ' ' || {{table}}.prompt)", +type promptsGrouped struct { + Grouped []string `json:"grouped"` + Total uint64 `json:"total"` +} + +var promptsSQLMappers = map[string]any{ + "type": "{{table}}.type", + "prompt": "{{table}}.prompt", + "created_at": "{{table}}.created_at", + "updated_at": "{{table}}.updated_at", + "data": "({{table}}.type || ' ' || {{table}}.prompt)", } type PromptService struct { - db *gorm.DB + db *gorm.DB + prompter templates.Prompter } func NewPromptService(db *gorm.DB) *PromptService { return &PromptService{ - db: db, + db: db, + prompter: templates.NewDefaultPrompter(), } } @@ -39,6 +49,7 @@ func NewPromptService(db *gorm.DB) *PromptService { // @Summary Retrieve prompts list // @Tags Prompts // @Produce json +// @Security 
BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=prompts} "prompts list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -59,7 +70,7 @@ func (s *PromptService) GetPrompts(c *gin.Context) { } privs := c.GetStringSlice("prm") - if !slices.Contains(privs, "prompts.view") { + if !slices.Contains(privs, "settings.prompts.view") { logger.FromContext(c).Errorf("error filtering user role permissions: permission not found") response.Error(c, response.ErrNotPermitted, nil) return @@ -73,8 +84,20 @@ func (s *PromptService) GetPrompts(c *gin.Context) { query.Init("prompts", promptsSQLMappers) if query.Group != "" { - logger.FromContext(c).Errorf("error grouping prompts: not allowed") - response.Error(c, response.ErrNotPermitted, nil) + if _, ok := promptsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding prompts grouped: group field not found") + response.Error(c, response.ErrPromptsInvalidRequest, errors.New("group field not found")) + return + } + + var respGrouped promptsGrouped + if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding prompts grouped") + response.Error(c, response.ErrInternal, err) + return + } + + response.Success(c, http.StatusOK, respGrouped) return } @@ -99,6 +122,7 @@ func (s *PromptService) GetPrompts(c *gin.Context) { // @Summary Retrieve prompt by type // @Tags Prompts // @Produce json +// @Security BearerAuth // @Param promptType path string true "prompt type" // @Success 200 {object} response.successResp{data=models.Prompt} "prompt received successful" // @Failure 400 {object} response.errorResp "invalid prompt request data" @@ -109,12 +133,18 @@ func (s *PromptService) GetPrompts(c *gin.Context) { func (s *PromptService) GetPrompt(c *gin.Context) { var ( err error - promptType string = 
c.Param("promptType") + promptType models.PromptType = models.PromptType(c.Param("promptType")) resp models.Prompt ) + if err = promptType.Valid(); err != nil { + logger.FromContext(c).WithError(err).Errorf("error validating prompt type '%s'", promptType) + response.Error(c, response.ErrPromptsInvalidRequest, err) + return + } + privs := c.GetStringSlice("prm") - if !slices.Contains(privs, "prompts.view") { + if !slices.Contains(privs, "settings.prompts.view") { logger.FromContext(c).Errorf("error filtering user role permissions: permission not found") response.Error(c, response.ErrNotPermitted, nil) return @@ -148,9 +178,11 @@ func (s *PromptService) GetPrompt(c *gin.Context) { // @Tags Prompts // @Accept json // @Produce json +// @Security BearerAuth // @Param promptType path string true "prompt type" // @Param json body models.PatchPrompt true "prompt model to update" // @Success 200 {object} response.successResp{data=models.Prompt} "prompt updated successful" +// @Success 201 {object} response.successResp{data=models.Prompt} "prompt created successful" // @Failure 400 {object} response.errorResp "invalid prompt request data" // @Failure 403 {object} response.errorResp "updating prompt not permitted" // @Failure 404 {object} response.errorResp "prompt not found" @@ -160,7 +192,7 @@ func (s *PromptService) PatchPrompt(c *gin.Context) { var ( err error prompt models.PatchPrompt - promptType string = c.Param("promptType") + promptType models.PromptType = models.PromptType(c.Param("promptType")) resp models.Prompt ) @@ -172,10 +204,14 @@ func (s *PromptService) PatchPrompt(c *gin.Context) { logger.FromContext(c).WithError(err).Errorf("error validating prompt JSON") response.Error(c, response.ErrPromptsInvalidRequest, err) return + } else if err = promptType.Valid(); err != nil { + logger.FromContext(c).WithError(err).Errorf("error validating prompt type '%s'", promptType) + response.Error(c, response.ErrPromptsInvalidRequest, err) + return + } privs 
:= c.GetStringSlice("prm") - if !slices.Contains(privs, "prompts.edit") { + if !slices.Contains(privs, "settings.prompts.edit") { logger.FromContext(c).Errorf("error filtering user role permissions: permission not found") response.Error(c, response.ErrNotPermitted, nil) return @@ -186,20 +222,33 @@ func (s *PromptService) PatchPrompt(c *gin.Context) { return db.Where("type = ? AND user_id = ?", promptType, uid) } - err = s.db.Model(&resp).Scopes(scope).UpdateColumn("prompt", prompt.Prompt).Error + err = s.db.Scopes(scope).Take(&resp).Error if err != nil && errors.Is(err, gorm.ErrRecordNotFound) { - logger.FromContext(c).Errorf("error updating prompt by type '%s', prompt not found", promptType) - response.Error(c, response.ErrPromptsNotFound, err) + resp = models.Prompt{ + Type: promptType, + UserID: uid, + Prompt: prompt.Prompt, + } + if err = s.db.Create(&resp).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error creating prompt by type '%s'", promptType) + response.Error(c, response.ErrInternal, err) + return + } + + response.Success(c, http.StatusCreated, resp) return } else if err != nil { - logger.FromContext(c).WithError(err).Errorf("error updating prompt by type '%s'", promptType) + logger.FromContext(c).WithError(err).Errorf("error finding updated prompt by type '%s'", promptType) response.Error(c, response.ErrInternal, err) return } - if err = s.db.Scopes(scope).Take(&resp).Error; err != nil { - logger.FromContext(c).Errorf("error finding updated prompt by type '%s'", promptType) - response.Error(c, response.ErrPromptsNotFound, err) + resp.Prompt = prompt.Prompt + + err = s.db.Scopes(scope).Save(&resp).Error + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error updating prompt by type '%s'", promptType) + response.Error(c, response.ErrInternal, err) return } @@ -211,8 +260,10 @@ func (s *PromptService) PatchPrompt(c *gin.Context) { // @Tags Prompts // @Accept json // @Produce json +// @Security BearerAuth // @Param 
promptType path string true "prompt type" // @Success 200 {object} response.successResp{data=models.Prompt} "prompt reset successful" +// @Success 201 {object} response.successResp{data=models.Prompt} "prompt created with default value successful" // @Failure 400 {object} response.errorResp "invalid prompt request data" // @Failure 403 {object} response.errorResp "updating prompt not permitted" // @Failure 404 {object} response.errorResp "prompt not found" @@ -221,12 +272,18 @@ func (s *PromptService) PatchPrompt(c *gin.Context) { func (s *PromptService) ResetPrompt(c *gin.Context) { var ( err error - promptType string = c.Param("promptType") + promptType models.PromptType = models.PromptType(c.Param("promptType")) resp models.Prompt ) + if err = promptType.Valid(); err != nil { + logger.FromContext(c).WithError(err).Errorf("error validating prompt type '%s'", promptType) + response.Error(c, response.ErrPromptsInvalidRequest, err) + return + } + privs := c.GetStringSlice("prm") - if !slices.Contains(privs, "prompts.edit") { + if !slices.Contains(privs, "settings.prompts.edit") { logger.FromContext(c).Errorf("error filtering user role permissions: permission not found") response.Error(c, response.ErrNotPermitted, nil) return @@ -237,23 +294,98 @@ func (s *PromptService) ResetPrompt(c *gin.Context) { return db.Where("type = ? 
AND user_id = ?", promptType, uid) } - // TODO: use templates.GetTemplate - err = s.db.Model(&resp).Scopes(scope).UpdateColumn("prompt", "").Error + template, err := s.prompter.GetTemplate(templates.PromptType(promptType)) + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error getting template '%s'", promptType) + response.Error(c, response.ErrPromptsInvalidRequest, err) + return + } + + err = s.db.Scopes(scope).Take(&resp).Error if err != nil && errors.Is(err, gorm.ErrRecordNotFound) { - logger.FromContext(c).Errorf("error updating prompt by type '%s', prompt not found", promptType) - response.Error(c, response.ErrPromptsNotFound, err) + resp = models.Prompt{ + Type: promptType, + UserID: uid, + Prompt: template, + } + err = s.db.Create(&resp).Error + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error creating default prompt by type '%s'", promptType) + response.Error(c, response.ErrInternal, err) + return + } + + response.Success(c, http.StatusCreated, resp) + return } else if err != nil { - logger.FromContext(c).WithError(err).Errorf("error updating prompt by type '%s'", promptType) + logger.FromContext(c).WithError(err).Errorf("error finding updated prompt by type '%s'", promptType) response.Error(c, response.ErrInternal, err) return } - if err = s.db.Scopes(scope).Take(&resp).Error; err != nil { - logger.FromContext(c).Errorf("error finding updated prompt by type '%s'", promptType) - response.Error(c, response.ErrPromptsNotFound, err) + resp.Prompt = template + + err = s.db.Scopes(scope).Save(&resp).Error + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error resetting prompt by type '%s'", promptType) + response.Error(c, response.ErrInternal, err) return } response.Success(c, http.StatusOK, resp) } + +// DeletePrompt is a function to delete prompt by type +// @Summary Delete prompt by type +// @Tags Prompts +// @Produce json +// @Security BearerAuth +// @Param promptType path string true "prompt type" +// @Success 200 
{object} response.successResp "prompt deleted successful" +// @Failure 400 {object} response.errorResp "invalid prompt request data" +// @Failure 403 {object} response.errorResp "deleting prompt not permitted" +// @Failure 404 {object} response.errorResp "prompt not found" +// @Failure 500 {object} response.errorResp "internal error on deleting prompt" +// @Router /prompts/{promptType} [delete] +func (s *PromptService) DeletePrompt(c *gin.Context) { + var ( + err error + promptType models.PromptType = models.PromptType(c.Param("promptType")) + resp models.Prompt + ) + + if err = promptType.Valid(); err != nil { + logger.FromContext(c).WithError(err).Errorf("error validating prompt type '%s'", promptType) + response.Error(c, response.ErrPromptsInvalidRequest, err) + return + } + + privs := c.GetStringSlice("prm") + if !slices.Contains(privs, "settings.prompts.edit") { + logger.FromContext(c).Errorf("error filtering user role permissions: permission not found") + response.Error(c, response.ErrNotPermitted, nil) + return + } + + uid := c.GetUint64("uid") + scope := func(db *gorm.DB) *gorm.DB { + return db.Where("type = ? 
AND user_id = ?", promptType, uid) + } + + if err = s.db.Scopes(scope).Take(&resp).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding prompt by type '%s'", promptType) + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrPromptsNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + if err = s.db.Scopes(scope).Delete(&resp).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error deleting prompt by type '%s'", promptType) + response.Error(c, response.ErrInternal, err) + return + } + + response.Success(c, http.StatusOK, nil) +} diff --git a/backend/pkg/server/services/providers.go b/backend/pkg/server/services/providers.go index 6e10fb32..bc4224f0 100644 --- a/backend/pkg/server/services/providers.go +++ b/backend/pkg/server/services/providers.go @@ -26,6 +26,7 @@ func NewProviderService(providers providers.ProviderController) *ProviderService // @Summary Retrieve providers list // @Tags Providers // @Produce json +// @Security BearerAuth // @Success 200 {object} response.successResp{data=models.ProviderInfo} "providers list received successful" // @Failure 403 {object} response.errorResp "getting providers not permitted" // @Router /providers/ [get] diff --git a/backend/pkg/server/services/roles.go b/backend/pkg/server/services/roles.go index 36745248..9106d89d 100644 --- a/backend/pkg/server/services/roles.go +++ b/backend/pkg/server/services/roles.go @@ -20,7 +20,7 @@ type roles struct { Total uint64 `json:"total"` } -var rolesSQLMappers = map[string]interface{}{ +var rolesSQLMappers = map[string]any{ "id": "{{table}}.id", "name": "{{table}}.name", "data": "{{table}}.name", diff --git a/backend/pkg/server/services/searchlogs.go b/backend/pkg/server/services/searchlogs.go index 6bd274e4..42986ac9 100644 --- a/backend/pkg/server/services/searchlogs.go +++ b/backend/pkg/server/services/searchlogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" 
"net/http" "slices" "strconv" @@ -24,16 +25,18 @@ type searchlogsGrouped struct { Total uint64 `json:"total"` } -var searchlogsSQLMappers = map[string]interface{}{ +var searchlogsSQLMappers = map[string]any{ "id": "{{table}}.id", "initiator": "{{table}}.initiator", "executor": "{{table}}.executor", - "task": "{{table}}.task", + "engine": "{{table}}.engine", + "query": "{{table}}.query", "result": "{{table}}.result", "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", - "data": "({{table}}.task || ' ' || {{table}}.result)", + "created_at": "{{table}}.created_at", + "data": "({{table}}.query || ' ' || {{table}}.result)", } type SearchlogService struct { @@ -50,6 +53,7 @@ func NewSearchlogService(db *gorm.DB) *SearchlogService { // @Summary Retrieve searchlogs list // @Tags Searchlogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=searchlogs} "searchlogs list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -92,6 +96,12 @@ func (s *SearchlogService) GetSearchlogs(c *gin.Context) { query.Init("searchlogs", searchlogsSQLMappers) if query.Group != "" { + if _, ok := searchlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding searchlogs grouped: group field not found") + response.Error(c, response.ErrSearchlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped searchlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding searchlogs grouped") @@ -124,6 +134,7 @@ func (s *SearchlogService) GetSearchlogs(c *gin.Context) { // @Summary Retrieve searchlogs list by flow id // @Tags Searchlogs // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request 
query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=searchlogs} "searchlogs list received successful" @@ -175,6 +186,12 @@ func (s *SearchlogService) GetFlowSearchlogs(c *gin.Context) { query.Init("searchlogs", searchlogsSQLMappers) if query.Group != "" { + if _, ok := searchlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding searchlogs grouped: group field not found") + response.Error(c, response.ErrSearchlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped searchlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding searchlogs grouped") diff --git a/backend/pkg/server/services/sreenshots.go b/backend/pkg/server/services/sreenshots.go index d581fde8..4c5428e7 100644 --- a/backend/pkg/server/services/sreenshots.go +++ b/backend/pkg/server/services/sreenshots.go @@ -1,6 +1,7 @@ package services import ( + "errors" "fmt" "net/http" "path/filepath" @@ -26,13 +27,14 @@ type screenshotsGrouped struct { Total uint64 `json:"total"` } -var screenshotsSQLMappers = map[string]interface{}{ +var screenshotsSQLMappers = map[string]any{ "id": "{{table}}.id", "name": "{{table}}.name", "url": "{{table}}.url", "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", + "created_at": "{{table}}.created_at", "data": "({{table}}.name || ' ' || {{table}}.url)", } @@ -52,6 +54,7 @@ func NewScreenshotService(db *gorm.DB, dataDir string) *ScreenshotService { // @Summary Retrieve screenshots list // @Tags Screenshots // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=screenshots} "screenshots list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -77,12 +80,12 @@ func (s 
*ScreenshotService) GetScreenshots(c *gin.Context) { if slices.Contains(privs, "screenshots.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id") + Joins("INNER JOIN flows f ON f.id = screenshots.flow_id") } } else if slices.Contains(privs, "screenshots.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = screenshots.flow_id"). Where("f.user_id = ?", uid) } } else { @@ -94,6 +97,12 @@ func (s *ScreenshotService) GetScreenshots(c *gin.Context) { query.Init("screenshots", screenshotsSQLMappers) if query.Group != "" { + if _, ok := screenshotsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding screenshots grouped: group field not found") + response.Error(c, response.ErrScreenshotsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped screenshotsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding screenshots grouped") @@ -126,6 +135,7 @@ func (s *ScreenshotService) GetScreenshots(c *gin.Context) { // @Summary Retrieve screenshots list by flow id // @Tags Screenshots // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=screenshots} "screenshots list received successful" @@ -159,13 +169,13 @@ func (s *ScreenshotService) GetFlowScreenshots(c *gin.Context) { if slices.Contains(privs, "screenshots.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = screenshots.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "screenshots.view") { scope = func(db *gorm.DB) *gorm.DB { return db. 
- Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = screenshots.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -177,6 +187,12 @@ func (s *ScreenshotService) GetFlowScreenshots(c *gin.Context) { query.Init("screenshots", screenshotsSQLMappers) if query.Group != "" { + if _, ok := screenshotsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding screenshots grouped: group field not found") + response.Error(c, response.ErrScreenshotsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped screenshotsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding screenshots grouped") @@ -209,6 +225,7 @@ func (s *ScreenshotService) GetFlowScreenshots(c *gin.Context) { // @Summary Retrieve screenshot info by id and flow id // @Tags Screenshots // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param screenshotID path int true "screenshot id" minimum(0) // @Success 200 {object} response.successResp{data=models.Screenshot} "screenshot info received successful" @@ -255,7 +272,7 @@ func (s *ScreenshotService) GetFlowScreenshot(c *gin.Context) { err = s.db.Model(&resp). Joins("INNER JOIN flows f ON f.id = flow_id"). Scopes(scope). - Where("id = ?", screenshotID). + Where("screenshots.id = ?", screenshotID). 
Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting screenshot by id") @@ -274,6 +291,7 @@ func (s *ScreenshotService) GetFlowScreenshot(c *gin.Context) { // @Summary Retrieve screenshot file by id and flow id // @Tags Screenshots // @Produce png,json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param screenshotID path int true "screenshot id" minimum(0) // @Success 200 {file} file "screenshot file" diff --git a/backend/pkg/server/services/subtasks.go b/backend/pkg/server/services/subtasks.go index ccd69c51..ea30771d 100644 --- a/backend/pkg/server/services/subtasks.go +++ b/backend/pkg/server/services/subtasks.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,14 +25,17 @@ type subtasksGrouped struct { Total uint64 `json:"total"` } -var subtasksSQLMappers = map[string]interface{}{ +var subtasksSQLMappers = map[string]any{ "id": "{{table}}.id", "status": "{{table}}.status", "title": "{{table}}.title", "description": "{{table}}.description", + "context": "{{table}}.context", "result": "{{table}}.result", "task_id": "{{table}}.task_id", - "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.description || ' ' || {{table}}.result)", + "created_at": "{{table}}.created_at", + "updated_at": "{{table}}.updated_at", + "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.description || ' ' || {{table}}.context || ' ' || {{table}}.result)", } type SubtaskService struct { @@ -48,6 +52,7 @@ func NewSubtaskService(db *gorm.DB) *SubtaskService { // @Summary Retrieve flow subtasks list // @Tags Subtasks // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=subtasks} "flow subtasks list received successful" @@ -81,14 +86,14 @@ func (s *SubtaskService) GetFlowSubtasks(c 
*gin.Context) { if slices.Contains(privs, "subtasks.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "subtasks.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } @@ -101,6 +106,12 @@ func (s *SubtaskService) GetFlowSubtasks(c *gin.Context) { query.Init("subtasks", subtasksSQLMappers) if query.Group != "" { + if _, ok := subtasksSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding subtasks grouped: group field not found") + response.Error(c, response.ErrSubtasksInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped subtasksGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding subtasks grouped") @@ -133,6 +144,7 @@ func (s *SubtaskService) GetFlowSubtasks(c *gin.Context) { // @Summary Retrieve flow task subtasks list // @Tags Subtasks // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param taskID path int true "task id" minimum(0) // @Param request query rdb.TableQuery true "query table params" @@ -174,14 +186,14 @@ func (s *SubtaskService) GetFlowTaskSubtasks(c *gin.Context) { if slices.Contains(privs, "subtasks.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ? 
AND t.id = ?", flowID, taskID) } } else if slices.Contains(privs, "subtasks.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ? AND f.user_id = ? AND t.id = ?", flowID, uid, taskID) } @@ -194,6 +206,12 @@ func (s *SubtaskService) GetFlowTaskSubtasks(c *gin.Context) { query.Init("subtasks", subtasksSQLMappers) if query.Group != "" { + if _, ok := subtasksSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding subtasks grouped: group field not found") + response.Error(c, response.ErrSubtasksInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped subtasksGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding subtasks grouped") @@ -226,6 +244,7 @@ func (s *SubtaskService) GetFlowTaskSubtasks(c *gin.Context) { // @Summary Retrieve flow task subtask by id // @Tags Subtasks // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param taskID path int true "task id" minimum(0) // @Param subtaskID path int true "subtask id" minimum(0) @@ -240,7 +259,7 @@ func (s *SubtaskService) GetFlowTaskSubtask(c *gin.Context) { flowID uint64 taskID uint64 subtaskID uint64 - resp models.Task + resp models.Subtask ) if flowID, err = strconv.ParseUint(c.Param("flowID"), 10, 64); err != nil { @@ -267,14 +286,14 @@ func (s *SubtaskService) GetFlowTaskSubtask(c *gin.Context) { if slices.Contains(privs, "subtasks.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ? 
AND t.id = ?", flowID, taskID) } } else if slices.Contains(privs, "subtasks.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN tasks t ON t.id = task_id"). + Joins("INNER JOIN tasks t ON t.id = subtasks.task_id"). Joins("INNER JOIN flows f ON f.id = t.flow_id"). Where("f.id = ? AND f.user_id = ? AND t.id = ?", flowID, uid, taskID) } @@ -286,7 +305,7 @@ func (s *SubtaskService) GetFlowTaskSubtask(c *gin.Context) { err = s.db.Model(&resp). Scopes(scope). - Where("id = ?", subtaskID). + Where("subtasks.id = ?", subtaskID). Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting flow task subtask by id") diff --git a/backend/pkg/server/services/tasks.go b/backend/pkg/server/services/tasks.go index 8f047967..d8472183 100644 --- a/backend/pkg/server/services/tasks.go +++ b/backend/pkg/server/services/tasks.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,14 +25,16 @@ type tasksGrouped struct { Total uint64 `json:"total"` } -var tasksSQLMappers = map[string]interface{}{ - "id": "{{table}}.id", - "status": "{{table}}.status", - "title": "{{table}}.title", - "input": "{{table}}.input", - "result": "{{table}}.result", - "flow_id": "{{table}}.flow_id", - "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.input || ' ' || {{table}}.result)", +var tasksSQLMappers = map[string]any{ + "id": "{{table}}.id", + "status": "{{table}}.status", + "title": "{{table}}.title", + "input": "{{table}}.input", + "result": "{{table}}.result", + "flow_id": "{{table}}.flow_id", + "created_at": "{{table}}.created_at", + "updated_at": "{{table}}.updated_at", + "data": "({{table}}.status || ' ' || {{table}}.title || ' ' || {{table}}.input || ' ' || {{table}}.result)", } type TaskService struct { @@ -48,6 +51,7 @@ func NewTaskService(db *gorm.DB) *TaskService { // @Summary Retrieve flow tasks list // @Tags Tasks // @Produce json +// @Security BearerAuth // @Param flowID 
path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=tasks} "flow tasks list received successful" @@ -81,13 +85,13 @@ func (s *TaskService) GetFlowTasks(c *gin.Context) { if slices.Contains(privs, "tasks.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = tasks.flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "tasks.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = tasks.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -99,6 +103,12 @@ func (s *TaskService) GetFlowTasks(c *gin.Context) { query.Init("tasks", tasksSQLMappers) if query.Group != "" { + if _, ok := tasksSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding tasks grouped: group field not found") + response.Error(c, response.ErrTasksInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped tasksGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding tasks grouped") @@ -131,6 +141,7 @@ func (s *TaskService) GetFlowTasks(c *gin.Context) { // @Summary Retrieve flow task by id // @Tags Tasks // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param taskID path int true "task id" minimum(0) // @Success 200 {object} response.successResp{data=models.Task} "flow task received successful" @@ -164,13 +175,13 @@ func (s *TaskService) GetFlowTask(c *gin.Context) { if slices.Contains(privs, "tasks.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = tasks.flow_id"). 
Where("f.id = ?", flowID) } } else if slices.Contains(privs, "tasks.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = tasks.flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -181,7 +192,7 @@ func (s *TaskService) GetFlowTask(c *gin.Context) { err = s.db.Model(&resp). Scopes(scope). - Where("id = ?", taskID). + Where("tasks.id = ?", taskID). Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting flow task by id") @@ -200,6 +211,7 @@ func (s *TaskService) GetFlowTask(c *gin.Context) { // @Summary Retrieve flow task graph by id // @Tags Tasks // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param taskID path int true "task id" minimum(0) // @Success 200 {object} response.successResp{data=models.FlowTasksSubtasks} "flow task graph received successful" @@ -246,10 +258,10 @@ func (s *TaskService) GetFlowTaskGraph(c *gin.Context) { } err = s.db.Model(&resp). - Joins("INNER JOIN flows f ON f.id = flow_id"). + Joins("INNER JOIN flows f ON f.id = tasks.flow_id"). Scopes(scope). - Where("id = ?", taskID). - Take(&resp).Related(&flow).Error + Where("tasks.id = ?", taskID). 
+ Take(&resp).Error if err != nil { logger.FromContext(c).WithError(err).Errorf("error on getting flow task by id") if gorm.IsRecordNotFoundError(err) { @@ -260,6 +272,17 @@ func (s *TaskService) GetFlowTaskGraph(c *gin.Context) { return } + err = s.db.Where("id = ?", resp.FlowID).Take(&flow).Error + if err != nil { + logger.FromContext(c).WithError(err).Errorf("error on getting flow by id") + if gorm.IsRecordNotFoundError(err) { + response.Error(c, response.ErrTasksNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + isSubtasksAdmin := slices.Contains(privs, "subtasks.admin") isSubtasksView := slices.Contains(privs, "subtasks.view") if !(flow.UserID == uid && isSubtasksView) && !(flow.UserID != uid && isSubtasksAdmin) { diff --git a/backend/pkg/server/services/termlogs.go b/backend/pkg/server/services/termlogs.go index a882f429..ff3b1095 100644 --- a/backend/pkg/server/services/termlogs.go +++ b/backend/pkg/server/services/termlogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,7 +25,7 @@ type termlogsGrouped struct { Total uint64 `json:"total"` } -var termlogsSQLMappers = map[string]interface{}{ +var termlogsSQLMappers = map[string]any{ "id": "{{table}}.id", "type": "{{table}}.type", "text": "{{table}}.text", @@ -32,6 +33,7 @@ var termlogsSQLMappers = map[string]interface{}{ "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", + "created_at": "{{table}}.created_at", "data": "({{table}}.type || ' ' || {{table}}.text)", } @@ -49,6 +51,7 @@ func NewTermlogService(db *gorm.DB) *TermlogService { // @Summary Retrieve termlogs list // @Tags Termlogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=termlogs} "termlogs list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -74,12 +77,12 
@@ func (s *TermlogService) GetTermlogs(c *gin.Context) { if slices.Contains(privs, "termlogs.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = {{table}}.flow_id") + Joins("INNER JOIN flows f ON f.id = flow_id") } } else if slices.Contains(privs, "termlogs.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = {{table}}.flow_id"). + Joins("INNER JOIN flows f ON f.id = flow_id"). Where("f.user_id = ?", uid) } } else { @@ -91,6 +94,12 @@ func (s *TermlogService) GetTermlogs(c *gin.Context) { query.Init("termlogs", termlogsSQLMappers) if query.Group != "" { + if _, ok := termlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding termlogs grouped: group field not found") + response.Error(c, response.ErrTermlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped termlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding termlogs grouped") @@ -123,6 +132,7 @@ func (s *TermlogService) GetTermlogs(c *gin.Context) { // @Summary Retrieve termlogs list by flow id // @Tags Termlogs // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=termlogs} "termlogs list received successful" @@ -156,13 +166,13 @@ func (s *TermlogService) GetFlowTermlogs(c *gin.Context) { if slices.Contains(privs, "termlogs.admin") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = {{table}}.flow_id"). + Joins("INNER JOIN flows f ON f.id = flow_id"). Where("f.id = ?", flowID) } } else if slices.Contains(privs, "termlogs.view") { scope = func(db *gorm.DB) *gorm.DB { return db. - Joins("INNER JOIN flows f ON f.id = {{table}}.flow_id"). 
+ Joins("INNER JOIN flows f ON f.id = flow_id"). Where("f.id = ? AND f.user_id = ?", flowID, uid) } } else { @@ -174,6 +184,12 @@ func (s *TermlogService) GetFlowTermlogs(c *gin.Context) { query.Init("termlogs", termlogsSQLMappers) if query.Group != "" { + if _, ok := termlogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding termlogs grouped: group field not found") + response.Error(c, response.ErrTermlogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped termlogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding termlogs grouped") diff --git a/backend/pkg/server/services/users.go b/backend/pkg/server/services/users.go index 625c5759..706cc3ef 100644 --- a/backend/pkg/server/services/users.go +++ b/backend/pkg/server/services/users.go @@ -5,6 +5,7 @@ import ( "net/http" "slices" + "pentagi/pkg/server/auth" "pentagi/pkg/server/logger" "pentagi/pkg/server/models" "pentagi/pkg/server/rdb" @@ -25,24 +26,27 @@ type usersGrouped struct { Total uint64 `json:"total"` } -var usersSQLMappers = map[string]interface{}{ - "id": "{{table}}.id", - "hash": "{{table}}.hash", - "type": "{{table}}.type", - "mail": "{{table}}.mail", - "name": "{{table}}.name", - "role_id": "{{table}}.role_id", - "status": "{{table}}.status", - "data": "({{table}}.hash || ' ' || {{table}}.mail || ' ' || {{table}}.name || ' ' || {{table}}.status)", +var usersSQLMappers = map[string]any{ + "id": "{{table}}.id", + "hash": "{{table}}.hash", + "type": "{{table}}.type", + "mail": "{{table}}.mail", + "name": "{{table}}.name", + "role_id": "{{table}}.role_id", + "status": "{{table}}.status", + "created_at": "{{table}}.created_at", + "data": "({{table}}.hash || ' ' || {{table}}.mail || ' ' || {{table}}.name || ' ' || {{table}}.status)", } type UserService struct { - db *gorm.DB + db *gorm.DB + userCache *auth.UserCache } -func 
NewUserService(db *gorm.DB) *UserService { +func NewUserService(db *gorm.DB, userCache *auth.UserCache) *UserService { return &UserService{ - db: db, + db: db, + userCache: userCache, } } @@ -63,9 +67,7 @@ func (s *UserService) GetCurrentUser(c *gin.Context) { uid := c.GetUint64("uid") - err = s.db.Take(&resp.User, "id = ?", uid). - Related(&resp.Role).Association("privileges").Find(&resp.Role.Privileges).Error - if err != nil { + if err = s.db.Take(&resp.User, "id = ?", uid).Error; err != nil { logger.FromContext(c).WithError(err).Errorf("error finding current user") if errors.Is(err, gorm.ErrRecordNotFound) { response.Error(c, response.ErrUsersNotFound, err) @@ -73,7 +75,29 @@ func (s *UserService) GetCurrentUser(c *gin.Context) { response.Error(c, response.ErrInternal, err) } return - } else if err = resp.Valid(); err != nil { + } + + if err = s.db.Take(&resp.Role, "id = ?", resp.User.RoleID).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding role by role id") + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrGetUserModelsNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + if err = s.db.Model(&resp.Role).Association("privileges").Find(&resp.Role.Privileges).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding privileges by role id") + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrGetUserModelsNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + if err = resp.Valid(); err != nil { logger.FromContext(c).WithError(err).Errorf("error validating user data '%s'", resp.Hash) response.Error(c, response.ErrUsersInvalidData, err) return @@ -144,7 +168,7 @@ func (s *UserService) ChangePasswordCurrentUser(c *gin.Context) { user.Password = string(encPass) user.PasswordChangeRequired = false - if err = s.db.Scopes(scope).Select("password", 
"password_change_required").Save(&user).Error; err != nil { + if err = s.db.Model(&user).Scopes(scope).Select("password", "password_change_required").Updates(&user).Error; err != nil { logger.FromContext(c).WithError(err).Errorf("error updating password for current user") response.Error(c, response.ErrInternal, err) return @@ -190,6 +214,12 @@ func (s *UserService) GetUsers(c *gin.Context) { query.Init("users", usersSQLMappers) if query.Group != "" { + if _, ok := usersSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding users grouped: group field not found") + response.Error(c, response.ErrUsersInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped usersGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding users grouped") @@ -272,9 +302,19 @@ func (s *UserService) GetUser(c *gin.Context) { } return } - err = s.db.Model(&resp.User).Related(&resp.Role).Association("privileges").Find(&resp.Role.Privileges).Error - if err != nil { - logger.FromContext(c).WithError(err).Errorf("error finding related models by user hash") + + if err = s.db.Take(&resp.Role, "id = ?", resp.User.RoleID).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding role by role id") + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrGetUserModelsNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + if err = s.db.Model(&resp.Role).Association("privileges").Find(&resp.Role.Privileges).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding privileges by role id") if errors.Is(err, gorm.ErrRecordNotFound) { response.Error(c, response.ErrGetUserModelsNotFound, err) } else { @@ -366,17 +406,26 @@ func (s *UserService) CreateUser(c *gin.Context) { return } - err = s.db.Take(&resp.User, "hash = ?", 
user.Hash).Related(&resp.Role).Error - if err != nil { + if err = s.db.Take(&resp.User, "hash = ?", user.Hash).Error; err != nil { logger.FromContext(c).WithError(err).Errorf("error finding user by hash") response.Error(c, response.ErrInternal, err) return - } else if err = resp.Valid(); err != nil { + } + + if err = s.db.Take(&resp.Role, "id = ?", resp.User.RoleID).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding role by role id") + response.Error(c, response.ErrInternal, err) + return + } + + if err = resp.Valid(); err != nil { logger.FromContext(c).WithError(err).Errorf("error validating user data '%s'", resp.Hash) response.Error(c, response.ErrUsersInvalidData, err) return } + s.userCache.Invalidate(resp.User.ID) + response.Success(c, http.StatusCreated, resp) } @@ -435,7 +484,19 @@ func (s *UserService) PatchUser(c *gin.Context) { return } - public_info := []interface{}{"name", "status"} + // Check if user exists before updating + var existingUser models.User + if err = s.db.Scopes(scope).Take(&existingUser).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding user by hash") + if errors.Is(err, gorm.ErrRecordNotFound) { + response.Error(c, response.ErrUsersNotFound, err) + } else { + response.Error(c, response.ErrInternal, err) + } + return + } + + public_info := []any{"name", "status"} if user.Password != "" { var encPassword []byte encPassword, err = rdb.EncryptPassword(user.Password) @@ -447,16 +508,12 @@ func (s *UserService) PatchUser(c *gin.Context) { user.Password = string(encPassword) user.PasswordChangeRequired = false public_info = append(public_info, "password", "password_change_required") - err = s.db.Scopes(scope).Select("", public_info...).Save(&user).Error + err = s.db.Model(&existingUser).Select("", public_info...).Updates(&user).Error } else { - err = s.db.Scopes(scope).Select("", public_info...).Save(&user.User).Error + err = s.db.Model(&existingUser).Select("", 
public_info...).Updates(&user.User).Error } - if err != nil && errors.Is(err, gorm.ErrRecordNotFound) { - logger.FromContext(c).Errorf("error updating user by hash '%s', user not found", hash) - response.Error(c, response.ErrUsersNotFound, err) - return - } else if err != nil { + if err != nil { logger.FromContext(c).WithError(err).Errorf("error updating user by hash '%s'", hash) response.Error(c, response.ErrInternal, err) return @@ -471,8 +528,9 @@ func (s *UserService) PatchUser(c *gin.Context) { } return } - if err = s.db.Model(&resp.User).Related(&resp.Role).Error; err != nil { - logger.FromContext(c).WithError(err).Errorf("error finding related models by user hash") + + if err = s.db.Take(&resp.Role, "id = ?", resp.User.RoleID).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding role by role id") if errors.Is(err, gorm.ErrRecordNotFound) { response.Error(c, response.ErrPatchUserModelsNotFound, err) } else { @@ -486,6 +544,8 @@ func (s *UserService) PatchUser(c *gin.Context) { return } + s.userCache.Invalidate(resp.User.ID) + response.Success(c, http.StatusOK, resp) } @@ -531,8 +591,9 @@ func (s *UserService) DeleteUser(c *gin.Context) { } return } - if err = s.db.Model(&user.User).Related(&user.Role).Error; err != nil { - logger.FromContext(c).WithError(err).Errorf("error finding related models by user hash") + + if err = s.db.Take(&user.Role, "id = ?", user.User.RoleID).Error; err != nil { + logger.FromContext(c).WithError(err).Errorf("error finding role by role id") if errors.Is(err, gorm.ErrRecordNotFound) { response.Error(c, response.ErrDeleteUserModelsNotFound, err) } else { @@ -552,6 +613,8 @@ func (s *UserService) DeleteUser(c *gin.Context) { return } + s.userCache.Invalidate(user.ID) + response.Success(c, http.StatusOK, struct{}{}) } diff --git a/backend/pkg/server/services/vecstorelogs.go b/backend/pkg/server/services/vecstorelogs.go index 4a56a9a5..6d0647a3 100644 --- a/backend/pkg/server/services/vecstorelogs.go +++ 
b/backend/pkg/server/services/vecstorelogs.go @@ -1,6 +1,7 @@ package services import ( + "errors" "net/http" "slices" "strconv" @@ -24,7 +25,7 @@ type vecstorelogsGrouped struct { Total uint64 `json:"total"` } -var vecstorelogsSQLMappers = map[string]interface{}{ +var vecstorelogsSQLMappers = map[string]any{ "id": "{{table}}.id", "initiator": "{{table}}.initiator", "executor": "{{table}}.executor", @@ -35,6 +36,7 @@ var vecstorelogsSQLMappers = map[string]interface{}{ "flow_id": "{{table}}.flow_id", "task_id": "{{table}}.task_id", "subtask_id": "{{table}}.subtask_id", + "created_at": "{{table}}.created_at", "data": "({{table}}.filter || ' ' || {{table}}.query || ' ' || {{table}}.result)", } @@ -52,6 +54,7 @@ func NewVecstorelogService(db *gorm.DB) *VecstorelogService { // @Summary Retrieve vecstorelogs list // @Tags Vecstorelogs // @Produce json +// @Security BearerAuth // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=vecstorelogs} "vecstorelogs list received successful" // @Failure 400 {object} response.errorResp "invalid query request data" @@ -94,6 +97,12 @@ func (s *VecstorelogService) GetVecstorelogs(c *gin.Context) { query.Init("vecstorelogs", vecstorelogsSQLMappers) if query.Group != "" { + if _, ok := vecstorelogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding vecstorelogs grouped: group field not found") + response.Error(c, response.ErrVecstorelogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped vecstorelogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding vecstorelogs grouped") @@ -126,6 +135,7 @@ func (s *VecstorelogService) GetVecstorelogs(c *gin.Context) { // @Summary Retrieve vecstorelogs list by flow id // @Tags Vecstorelogs // @Produce json +// @Security BearerAuth // @Param flowID path int true "flow id" 
minimum(0) // @Param request query rdb.TableQuery true "query table params" // @Success 200 {object} response.successResp{data=vecstorelogs} "vecstorelogs list received successful" @@ -177,6 +187,12 @@ func (s *VecstorelogService) GetFlowVecstorelogs(c *gin.Context) { query.Init("vecstorelogs", vecstorelogsSQLMappers) if query.Group != "" { + if _, ok := vecstorelogsSQLMappers[query.Group]; !ok { + logger.FromContext(c).Errorf("error finding vecstorelogs grouped: group field not found") + response.Error(c, response.ErrVecstorelogsInvalidRequest, errors.New("group field not found")) + return + } + var respGrouped vecstorelogsGrouped if respGrouped.Total, err = query.QueryGrouped(s.db, &respGrouped.Grouped, scope); err != nil { logger.FromContext(c).WithError(err).Errorf("error finding vecstorelogs grouped") diff --git a/backend/pkg/tools/tools.go b/backend/pkg/tools/tools.go index cd788cfe..ea33a4ed 100644 --- a/backend/pkg/tools/tools.go +++ b/backend/pkg/tools/tools.go @@ -27,6 +27,18 @@ type Functions struct { Function []ExternalFunction `form:"functions,omitempty" json:"functions,omitempty" validate:"omitempty,valid"` } +func (f *Functions) Scan(input any) error { + switch v := input.(type) { + case string: + return json.Unmarshal([]byte(v), f) + case []byte: + return json.Unmarshal(v, f) + case json.RawMessage: + return json.Unmarshal(v, f) + } + return fmt.Errorf("unsupported type of input value to scan") +} + type DisableFunction struct { Name string `form:"name" json:"name" validate:"required"` Context []string `form:"context,omitempty" json:"context,omitempty" validate:"omitempty,dive,oneof=agent adviser coder searcher generator memorist enricher reporter assistant,required"` diff --git a/backend/sqlc/models/api_tokens.sql b/backend/sqlc/models/api_tokens.sql new file mode 100644 index 00000000..4aa656cb --- /dev/null +++ b/backend/sqlc/models/api_tokens.sql @@ -0,0 +1,83 @@ +-- name: GetAPITokens :many +SELECT + t.* +FROM api_tokens t +WHERE t.deleted_at 
IS NULL +ORDER BY t.created_at DESC; + +-- name: GetAPIToken :one +SELECT + t.* +FROM api_tokens t +WHERE t.id = $1 AND t.deleted_at IS NULL; + +-- name: GetAPITokenByTokenID :one +SELECT + t.* +FROM api_tokens t +WHERE t.token_id = $1 AND t.deleted_at IS NULL; + +-- name: GetUserAPITokens :many +SELECT + t.* +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.user_id = $1 AND t.deleted_at IS NULL +ORDER BY t.created_at DESC; + +-- name: GetUserAPIToken :one +SELECT + t.* +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.id = $1 AND t.user_id = $2 AND t.deleted_at IS NULL; + +-- name: GetUserAPITokenByTokenID :one +SELECT + t.* +FROM api_tokens t +INNER JOIN users u ON t.user_id = u.id +WHERE t.token_id = $1 AND t.user_id = $2 AND t.deleted_at IS NULL; + +-- name: CreateAPIToken :one +INSERT INTO api_tokens ( + token_id, + user_id, + role_id, + name, + ttl, + status +) VALUES ( + $1, $2, $3, $4, $5, $6 +) +RETURNING *; + +-- name: UpdateAPIToken :one +UPDATE api_tokens +SET name = $2, status = $3 +WHERE id = $1 +RETURNING *; + +-- name: UpdateUserAPIToken :one +UPDATE api_tokens +SET name = $3, status = $4 +WHERE id = $1 AND user_id = $2 +RETURNING *; + +-- name: DeleteAPIToken :one +UPDATE api_tokens +SET deleted_at = CURRENT_TIMESTAMP +WHERE id = $1 +RETURNING *; + +-- name: DeleteUserAPIToken :one +UPDATE api_tokens +SET deleted_at = CURRENT_TIMESTAMP +WHERE id = $1 AND user_id = $2 +RETURNING *; + +-- name: DeleteUserAPITokenByTokenID :one +UPDATE api_tokens +SET deleted_at = CURRENT_TIMESTAMP +WHERE token_id = $1 AND user_id = $2 +RETURNING *; diff --git a/frontend/graphql-schema.graphql b/frontend/graphql-schema.graphql index 7ae65d2c..3f38b7d1 100644 --- a/frontend/graphql-schema.graphql +++ b/frontend/graphql-schema.graphql @@ -331,6 +331,31 @@ fragment promptValidationResultFragment on PromptValidationResult { details } +fragment apiTokenFragment on APIToken { + id + tokenId + userId + roleId + name + ttl + status + 
createdAt + updatedAt +} + +fragment apiTokenWithSecretFragment on APITokenWithSecret { + id + tokenId + userId + roleId + name + ttl + status + createdAt + updatedAt + token +} + fragment usageStatsFragment on UsageStats { totalUsageIn totalUsageOut @@ -815,6 +840,18 @@ query flowsExecutionStatsByPeriod($period: UsageStatsPeriod!) { } } +query apiTokens { + apiTokens { + ...apiTokenFragment + } +} + +query apiToken($tokenId: String!) { + apiToken(tokenId: $tokenId) { + ...apiTokenFragment + } +} + # ==================== Mutations ==================== mutation createFlow($modelProvider: String!, $input: String!) { @@ -914,6 +951,22 @@ mutation deletePrompt($promptId: ID!) { deletePrompt(promptId: $promptId) } +mutation createAPIToken($input: CreateAPITokenInput!) { + createAPIToken(input: $input) { + ...apiTokenWithSecretFragment + } +} + +mutation updateAPIToken($tokenId: String!, $input: UpdateAPITokenInput!) { + updateAPIToken(tokenId: $tokenId, input: $input) { + ...apiTokenFragment + } +} + +mutation deleteAPIToken($tokenId: String!) { + deleteAPIToken(tokenId: $tokenId) +} + # ==================== Subscriptions ==================== subscription terminalLogAdded($flowId: ID!) 
{ @@ -1059,3 +1112,21 @@ subscription providerDeleted { ...providerConfigFragment } } + +subscription apiTokenCreated { + apiTokenCreated { + ...apiTokenFragment + } +} + +subscription apiTokenUpdated { + apiTokenUpdated { + ...apiTokenFragment + } +} + +subscription apiTokenDeleted { + apiTokenDeleted { + ...apiTokenFragment + } +} diff --git a/frontend/package-lock.json b/frontend/package-lock.json index d6f4ec2b..a7caf52d 100644 --- a/frontend/package-lock.json +++ b/frontend/package-lock.json @@ -49,6 +49,7 @@ "lucide-react": "^0.553.0", "marked": "^17.0.3", "react": "^19.0.0", + "react-day-picker": "^9.13.2", "react-diff-viewer-continued": "^4.0.6", "react-dom": "^19.0.0", "react-hook-form": "^7.56.4", @@ -234,6 +235,7 @@ "integrity": "sha512-e7jT4DxYvIDLk1ZHmU/m/mB19rex9sv0c2ftBtjSBv+kVM/902eh0fINUzD7UwLLNR+jU585GxUJ8/EBfAM5fw==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "@babel/code-frame": "^7.27.1", "@babel/generator": "^7.28.5", @@ -1305,6 +1307,12 @@ "node": ">=v18" } }, + "node_modules/@date-fns/tz": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/@date-fns/tz/-/tz-1.4.1.tgz", + "integrity": "sha512-P5LUNhtbj6YfI3iJjw5EL9eUAG6OitD0W3fWQcpQjDRc/QIsL0tRNuO1PcDvPccWL1fSTXXdE1ds+l95DV/OFA==", + "license": "MIT" + }, "node_modules/@emotion/babel-plugin": { "version": "11.13.5", "resolved": "https://registry.npmjs.org/@emotion/babel-plugin/-/babel-plugin-11.13.5.tgz", @@ -6540,6 +6548,7 @@ "integrity": "sha512-xpr/lmLPQEj+TUnHmR+Ab91/glhJvsqcjB+yY0Ix9GO70H6Lb4FHH5GeqdOE5btAx7eIMwuHkp4H2MSkLcqWbA==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "undici-types": "~6.21.0" } @@ -6568,6 +6577,7 @@ "resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.2.tgz", "integrity": "sha512-6mDvHUFSjyT2B2yeNx2nUgMxh9LtOWvkhIU3uePn2I2oyNymUAX1NIsdgviM4CH+JSrp2D2hsMvJOkxY+0wNRA==", "license": "MIT", + "peer": true, "dependencies": { "csstype": "^3.0.2" } @@ -6578,6 +6588,7 @@ "integrity": 
"sha512-9KQPoO6mZCi7jcIStSnlOWn2nEF3mNmyr3rIAsGnAbQKYbRLyqmeSc39EVgtxXVia+LMT8j3knZLAZAh+xLmrw==", "devOptional": true, "license": "MIT", + "peer": true, "peerDependencies": { "@types/react": "^19.2.0" } @@ -6651,6 +6662,7 @@ "integrity": "sha512-tK3GPFWbirvNgsNKto+UmB/cRtn6TZfyw0D6IKrW55n6Vbs7KJoZtI//kpTKzE/DUmmnAFD8/Ca46s7Obs92/w==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "@typescript-eslint/scope-manager": "8.46.4", "@typescript-eslint/types": "8.46.4", @@ -7193,7 +7205,8 @@ "version": "5.5.0", "resolved": "https://registry.npmjs.org/@xterm/xterm/-/xterm-5.5.0.tgz", "integrity": "sha512-hqJHYaQb5OptNunnyAnkHyM8aCjZ1MEIDTQu1iIbbTD/xops91NB5yq1ZK/dC2JDbVWtF23zUtl9JE2NqwT87A==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/abs-svg-path": { "version": "0.1.1", @@ -7207,6 +7220,7 @@ "integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==", "dev": true, "license": "MIT", + "peer": true, "bin": { "acorn": "bin/acorn" }, @@ -7875,6 +7889,7 @@ } ], "license": "MIT", + "peer": true, "dependencies": { "baseline-browser-mapping": "^2.8.19", "caniuse-lite": "^1.0.30001751", @@ -8573,6 +8588,7 @@ "integrity": "sha512-itvL5h8RETACmOTFc4UfIyB2RfEHi71Ax6E/PivVxq9NseKbOWpeyHEOIbmAw1rs8Ak0VursQNww7lf7YtUwzg==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "env-paths": "^2.2.1", "import-fresh": "^3.3.0", @@ -8825,6 +8841,12 @@ "url": "https://github.com/sponsors/kossnocorp" } }, + "node_modules/date-fns-jalali": { + "version": "4.1.0-0", + "resolved": "https://registry.npmjs.org/date-fns-jalali/-/date-fns-jalali-4.1.0-0.tgz", + "integrity": "sha512-hTIP/z+t+qKwBDcmmsnmjWTduxCg+5KfdqWQvb2X/8C9+knYY6epN/pfxdDuyVlSVeFz0sM5eEfwIUQ70U4ckg==", + "license": "MIT" + }, "node_modules/debounce": { "version": "1.2.1", "resolved": "https://registry.npmjs.org/debounce/-/debounce-1.2.1.tgz", @@ -9529,6 +9551,7 @@ "integrity": 
"sha512-BhHmn2yNOFA9H9JmmIVKJmd288g9hrVRDkdoIgRCRuSySRUHH7r/DI6aAXW9T1WwUuY3DFgrcaqB+deURBLR5g==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "@eslint-community/eslint-utils": "^4.8.0", "@eslint-community/regexpp": "^4.12.1", @@ -10710,6 +10733,7 @@ "resolved": "https://registry.npmjs.org/graphql/-/graphql-16.12.0.tgz", "integrity": "sha512-DKKrynuQRne0PNpEbzuEdHlYOMksHSUI8Zc9Unei5gTsMNA2/vMpoMz/yKba50pejK56qj98qM0SjYxAKi13gQ==", "license": "MIT", + "peer": true, "engines": { "node": "^12.22.0 || ^14.16.0 || ^16.0.0 || >=17.0.0" } @@ -10843,6 +10867,7 @@ "resolved": "https://registry.npmjs.org/graphql-ws/-/graphql-ws-6.0.6.tgz", "integrity": "sha512-zgfER9s+ftkGKUZgc0xbx8T7/HMO4AV5/YuYiFc+AtgcO5T0v8AxYYNQ+ltzuzDZgNkYJaFspm5MMYLjQzrkmw==", "license": "MIT", + "peer": true, "engines": { "node": ">=20" }, @@ -15106,6 +15131,7 @@ "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", "dev": true, "license": "MIT", + "peer": true, "engines": { "node": ">=12" }, @@ -15201,6 +15227,7 @@ "integrity": "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ==", "dev": true, "license": "MIT", + "peer": true, "bin": { "prettier": "bin/prettier.cjs" }, @@ -15382,10 +15409,32 @@ "resolved": "https://registry.npmjs.org/react/-/react-19.2.0.tgz", "integrity": "sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ==", "license": "MIT", + "peer": true, "engines": { "node": ">=0.10.0" } }, + "node_modules/react-day-picker": { + "version": "9.13.2", + "resolved": "https://registry.npmjs.org/react-day-picker/-/react-day-picker-9.13.2.tgz", + "integrity": "sha512-IMPiXfXVIAuR5Yk58DDPBC8QKClrhdXV+Tr/alBrwrHUw0qDDYB1m5zPNuTnnPIr/gmJ4ChMxmtqPdxm8+R4Eg==", + "license": "MIT", + "dependencies": { + "@date-fns/tz": "^1.4.1", + "date-fns": "^4.1.0", + "date-fns-jalali": "^4.1.0-0" + }, + "engines": { + "node": ">=18" + }, + "funding": { + 
"type": "individual", + "url": "https://github.com/sponsors/gpbl" + }, + "peerDependencies": { + "react": ">=16.8.0" + } + }, "node_modules/react-diff-viewer-continued": { "version": "4.0.6", "resolved": "https://registry.npmjs.org/react-diff-viewer-continued/-/react-diff-viewer-continued-4.0.6.tgz", @@ -15411,6 +15460,7 @@ "resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.2.0.tgz", "integrity": "sha512-UlbRu4cAiGaIewkPyiRGJk0imDN2T3JjieT6spoL2UeSf5od4n5LB/mQ4ejmxhCFT1tYe8IvaFulzynWovsEFQ==", "license": "MIT", + "peer": true, "dependencies": { "scheduler": "^0.27.0" }, @@ -15423,6 +15473,7 @@ "resolved": "https://registry.npmjs.org/react-hook-form/-/react-hook-form-7.66.0.tgz", "integrity": "sha512-xXBqsWGKrY46ZqaHDo+ZUYiMUgi8suYu5kdrS20EG8KiL7VRQitEbNjm+UcrDYrNi1YLyfpmAeGjCZYXLT9YBw==", "license": "MIT", + "peer": true, "engines": { "node": ">=18.0.0" }, @@ -16920,7 +16971,8 @@ "version": "4.1.18", "resolved": "https://registry.npmjs.org/tailwindcss/-/tailwindcss-4.1.18.tgz", "integrity": "sha512-4+Z+0yiYyEtUVCScyfHCxOYP06L5Ne+JiHhY2IjR2KWMIWhJOYZKLSGZaP5HkZ8+bY0cxfzwDE5uOmzFXyIwxw==", - "license": "MIT" + "license": "MIT", + "peer": true }, "node_modules/tailwindcss-animate": { "version": "1.0.7", @@ -17175,6 +17227,7 @@ "integrity": "sha512-ytQKuwgmrrkDTFP4LjR0ToE2nqgy886GpvRSpU0JAnrdBYppuY5rLkRUYPU1yCryb24SsKBTL/hlDQAEFVwtZg==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "esbuild": "~0.25.0", "get-tsconfig": "^4.7.5" @@ -17299,6 +17352,7 @@ "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", "dev": true, "license": "Apache-2.0", + "peer": true, "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" @@ -17774,6 +17828,7 @@ "integrity": "sha512-BxAKBWmIbrDgrokdGZH1IgkIk/5mMHDreLDmCJ0qpyJaAteP8NvMhkwr/ZCQNqNH97bw/dANTE9PDzqwJghfMQ==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "esbuild": "^0.25.0", "fdir": "^6.5.0", @@ -17914,6 +17969,7 @@ 
"integrity": "sha512-urzu3NCEV0Qa0Y2PwvBtRgmNtxhj5t5ULw7cuKhIHh3OrkKTLlut0lnBOv9qe5OvbkMH2g38G7KPDCTpIytBVg==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "@vitest/expect": "4.0.8", "@vitest/mocker": "4.0.8", @@ -18218,6 +18274,7 @@ "integrity": "sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==", "devOptional": true, "license": "MIT", + "peer": true, "engines": { "node": ">=10.0.0" }, @@ -18339,6 +18396,7 @@ "resolved": "https://registry.npmjs.org/zod/-/zod-3.25.76.tgz", "integrity": "sha512-gzUt/qt81nXsFGKIFcC3YnfEAx5NkunCfnDlvuBSSFS02bcXu4Lmea0AFIUwbLWxWPx3d9p8S5QoaujKcNQxcQ==", "license": "MIT", + "peer": true, "funding": { "url": "https://github.com/sponsors/colinhacks" } diff --git a/frontend/package.json b/frontend/package.json index 7bca2b45..935880d2 100644 --- a/frontend/package.json +++ b/frontend/package.json @@ -59,6 +59,7 @@ "lucide-react": "^0.553.0", "marked": "^17.0.3", "react": "^19.0.0", + "react-day-picker": "^9.13.2", "react-diff-viewer-continued": "^4.0.6", "react-dom": "^19.0.0", "react-hook-form": "^7.56.4", diff --git a/frontend/src/app.tsx b/frontend/src/app.tsx index eec60a9a..630c875f 100644 --- a/frontend/src/app.tsx +++ b/frontend/src/app.tsx @@ -25,6 +25,7 @@ const Flows = lazy(() => import('@/pages/flows/flows')); const NewFlow = lazy(() => import('@/pages/flows/new-flow')); const Login = lazy(() => import('@/pages/login')); const OAuthResult = lazy(() => import('@/pages/oauth-result')); +const SettingsAPITokens = lazy(() => import('@/pages/settings/settings-api-tokens')); const SettingsPrompt = lazy(() => import('@/pages/settings/settings-prompt')); const SettingsPrompts = lazy(() => import('@/pages/settings/settings-prompts')); const SettingsProvider = lazy(() => import('@/pages/settings/settings-provider')); @@ -113,6 +114,10 @@ const App = () => { element={} path="prompts/:promptId" /> + } + path="api-tokens" + /> {/* } diff --git 
a/frontend/src/components/layouts/settings-layout.tsx b/frontend/src/components/layouts/settings-layout.tsx index 8000c8e2..11478576 100644 --- a/frontend/src/components/layouts/settings-layout.tsx +++ b/frontend/src/components/layouts/settings-layout.tsx @@ -1,4 +1,4 @@ -import { ArrowLeft, FileText, Plug, Settings as SettingsIcon } from 'lucide-react'; +import { ArrowLeft, FileText, Key, Plug, Settings as SettingsIcon } from 'lucide-react'; import { useMemo } from 'react'; import { NavLink, Outlet, useLocation, useParams } from 'react-router-dom'; @@ -45,6 +45,12 @@ const menuItems: readonly MenuItem[] = [ path: '/settings/prompts', title: 'Prompts', }, + { + icon: , + id: 'api-tokens', + path: '/settings/api-tokens', + title: 'PentAGI API', + }, // { // id: 'mcp-servers', // title: 'MCP Servers', @@ -108,6 +114,10 @@ const SettingsHeader = () => { return 'Edit Prompt'; } + if (path === '/settings/api-tokens') { + return 'PentAGI API'; + } + // Find matching main section const activeItem = menuItems.find((item) => path.startsWith(item.path)); diff --git a/frontend/src/components/ui/calendar.tsx b/frontend/src/components/ui/calendar.tsx new file mode 100644 index 00000000..c3d9d5f1 --- /dev/null +++ b/frontend/src/components/ui/calendar.tsx @@ -0,0 +1,60 @@ +import { ChevronLeft, ChevronRight } from 'lucide-react'; +import { DayPicker, type DayPickerProps } from 'react-day-picker'; + +import { buttonVariants } from '@/components/ui/button'; +import { cn } from '@/lib/utils'; + +export type CalendarProps = DayPickerProps; + +function Calendar({ className, classNames, showOutsideDays = true, ...props }: CalendarProps) { + return ( + + orientation === 'left' ? 
: , + }} + showOutsideDays={showOutsideDays} + {...props} + /> + ); +} + +Calendar.displayName = 'Calendar'; + +export { Calendar }; diff --git a/frontend/src/graphql/types.ts b/frontend/src/graphql/types.ts index 235fc6e1..50e004d9 100644 --- a/frontend/src/graphql/types.ts +++ b/frontend/src/graphql/types.ts @@ -18,6 +18,31 @@ export type Scalars = { Time: { input: any; output: any }; }; +export type ApiToken = { + createdAt: Scalars['Time']['output']; + id: Scalars['ID']['output']; + name?: Maybe; + roleId: Scalars['ID']['output']; + status: TokenStatus; + tokenId: Scalars['String']['output']; + ttl: Scalars['Int']['output']; + updatedAt: Scalars['Time']['output']; + userId: Scalars['ID']['output']; +}; + +export type ApiTokenWithSecret = { + createdAt: Scalars['Time']['output']; + id: Scalars['ID']['output']; + name?: Maybe; + roleId: Scalars['ID']['output']; + status: TokenStatus; + token: Scalars['String']['output']; + tokenId: Scalars['String']['output']; + ttl: Scalars['Int']['output']; + updatedAt: Scalars['Time']['output']; + userId: Scalars['ID']['output']; +}; + export type AgentConfig = { frequencyPenalty?: Maybe; maxLength?: Maybe; @@ -186,6 +211,11 @@ export type AssistantLog = { type: MessageLogType; }; +export type CreateApiTokenInput = { + name?: InputMaybe; + ttl: Scalars['Int']['input']; +}; + export type DailyFlowsStats = { date: Scalars['Time']['output']; stats: FlowsStats; @@ -323,10 +353,12 @@ export type ModelUsageStats = { export type Mutation = { callAssistant: ResultType; + createAPIToken: ApiTokenWithSecret; createAssistant: FlowAssistant; createFlow: Flow; createPrompt: UserPrompt; createProvider: ProviderConfig; + deleteAPIToken: Scalars['Boolean']['output']; deleteAssistant: ResultType; deleteFlow: ResultType; deletePrompt: ResultType; @@ -337,6 +369,7 @@ export type Mutation = { stopFlow: ResultType; testAgent: AgentTestResult; testProvider: ProviderTestResult; + updateAPIToken: ApiToken; updatePrompt: UserPrompt; updateProvider: 
ProviderConfig;
     validatePrompt: PromptValidationResult;
@@ -349,6 +382,10 @@ export type MutationCallAssistantArgs = {
     useAgents: Scalars['Boolean']['input'];
 };
 
+export type MutationCreateApiTokenArgs = {
+    input: CreateApiTokenInput;
+};
+
 export type MutationCreateAssistantArgs = {
     flowId: Scalars['ID']['input'];
     input: Scalars['String']['input'];
@@ -372,6 +409,10 @@ export type MutationCreateProviderArgs = {
     type: ProviderType;
 };
 
+export type MutationDeleteApiTokenArgs = {
+    tokenId: Scalars['String']['input'];
+};
+
 export type MutationDeleteAssistantArgs = {
     assistantId: Scalars['ID']['input'];
     flowId: Scalars['ID']['input'];
@@ -418,6 +459,11 @@ export type MutationTestProviderArgs = {
     type: ProviderType;
 };
 
+export type MutationUpdateApiTokenArgs = {
+    input: UpdateApiTokenInput;
+    tokenId: Scalars['String']['input'];
+};
+
 export type MutationUpdatePromptArgs = {
     promptId: Scalars['ID']['input'];
     template: Scalars['String']['input'];
@@ -566,6 +612,8 @@ export type ProvidersReadinessStatus = {
 export type Query = {
     agentLogs?: Maybe<Array<AgentLog>>;
+    apiToken?: Maybe<ApiToken>;
+    apiTokens: Array<ApiToken>;
     assistantLogs?: Maybe<Array<AssistantLog>>;
     assistants?: Maybe<Array<Assistant>>;
     flow: Flow;
@@ -602,6 +650,10 @@ export type QueryAgentLogsArgs = {
     flowId: Scalars['ID']['input'];
 };
 
+export type QueryApiTokenArgs = {
+    tokenId: Scalars['String']['input'];
+};
+
 export type QueryAssistantLogsArgs = {
     assistantId: Scalars['ID']['input'];
     flowId: Scalars['ID']['input'];
@@ -742,6 +794,9 @@ export enum StatusType {
 export type Subscription = {
     agentLogAdded: AgentLog;
+    apiTokenCreated: ApiToken;
+    apiTokenDeleted: ApiToken;
+    apiTokenUpdated: ApiToken;
     assistantCreated: Assistant;
     assistantDeleted: Assistant;
     assistantLogAdded: AssistantLog;
@@ -898,6 +953,12 @@ export type TestResult = {
     type: Scalars['String']['output'];
 };
 
+export enum TokenStatus {
+    Active = 'active',
+    Expired = 'expired',
+    Revoked = 'revoked',
+}
+
 export type ToolcallsStats = {
     totalCount: Scalars['Int']['output'];
     totalDurationSeconds: Scalars['Float']['output'];
@@ -915,6 +976,11 @@ export type ToolsPrompts = {
     getTaskDescription: DefaultPrompt;
 };
 
+export type UpdateApiTokenInput = {
+    name?: InputMaybe<Scalars['String']['input']>;
+    status?: InputMaybe<TokenStatus>;
+};
+
 export type UsageStats = {
     totalUsageCacheIn: Scalars['Int']['output'];
     totalUsageCacheOut: Scalars['Int']['output'];
@@ -1198,6 +1264,31 @@ export type PromptValidationResultFragmentFragment = {
     details?: string | null;
 };
 
+export type ApiTokenFragmentFragment = {
+    id: string;
+    tokenId: string;
+    userId: string;
+    roleId: string;
+    name?: string | null;
+    ttl: number;
+    status: TokenStatus;
+    createdAt: any;
+    updatedAt: any;
+};
+
+export type ApiTokenWithSecretFragmentFragment = {
+    id: string;
+    tokenId: string;
+    userId: string;
+    roleId: string;
+    name?: string | null;
+    ttl: number;
+    status: TokenStatus;
+    createdAt: any;
+    updatedAt: any;
+    token: string;
+};
+
 export type UsageStatsFragmentFragment = {
     totalUsageIn: number;
     totalUsageOut: number;
@@ -1476,6 +1567,16 @@ export type FlowsExecutionStatsByPeriodQuery = {
     flowsExecutionStatsByPeriod: Array;
 };
 
+export type ApiTokensQueryVariables = Exact<{ [key: string]: never }>;
+
+export type ApiTokensQuery = { apiTokens: Array<ApiTokenFragmentFragment> };
+
+export type ApiTokenQueryVariables = Exact<{
+    tokenId: Scalars['String']['input'];
+}>;
+
+export type ApiTokenQuery = { apiToken?: ApiTokenFragmentFragment | null };
+
 export type CreateFlowMutationVariables = Exact<{
     modelProvider: Scalars['String']['input'];
     input: Scalars['String']['input'];
@@ -1606,6 +1707,25 @@ export type DeletePromptMutationVariables = Exact<{
 
 export type DeletePromptMutation = { deletePrompt: ResultType };
 
+export type CreateApiTokenMutationVariables = Exact<{
+    input: CreateApiTokenInput;
+}>;
+
+export type CreateApiTokenMutation = { createAPIToken: ApiTokenWithSecretFragmentFragment };
+
+export type UpdateApiTokenMutationVariables = Exact<{
+    tokenId: Scalars['String']['input'];
+    input: UpdateApiTokenInput;
+}>;
+
+export type 
UpdateApiTokenMutation = { updateAPIToken: ApiTokenFragmentFragment }; + +export type DeleteApiTokenMutationVariables = Exact<{ + tokenId: Scalars['String']['input']; +}>; + +export type DeleteApiTokenMutation = { deleteAPIToken: boolean }; + export type TerminalLogAddedSubscriptionVariables = Exact<{ flowId: Scalars['ID']['input']; }>; @@ -1740,6 +1860,18 @@ export type ProviderDeletedSubscriptionVariables = Exact<{ [key: string]: never export type ProviderDeletedSubscription = { providerDeleted: ProviderConfigFragmentFragment }; +export type ApiTokenCreatedSubscriptionVariables = Exact<{ [key: string]: never }>; + +export type ApiTokenCreatedSubscription = { apiTokenCreated: ApiTokenFragmentFragment }; + +export type ApiTokenUpdatedSubscriptionVariables = Exact<{ [key: string]: never }>; + +export type ApiTokenUpdatedSubscription = { apiTokenUpdated: ApiTokenFragmentFragment }; + +export type ApiTokenDeletedSubscriptionVariables = Exact<{ [key: string]: never }>; + +export type ApiTokenDeletedSubscription = { apiTokenDeleted: ApiTokenFragmentFragment }; + export const FlowOverviewFragmentFragmentDoc = gql` fragment flowOverviewFragment on Flow { id @@ -2096,6 +2228,33 @@ export const PromptValidationResultFragmentFragmentDoc = gql` details } `; +export const ApiTokenFragmentFragmentDoc = gql` + fragment apiTokenFragment on APIToken { + id + tokenId + userId + roleId + name + ttl + status + createdAt + updatedAt + } +`; +export const ApiTokenWithSecretFragmentFragmentDoc = gql` + fragment apiTokenWithSecretFragment on APITokenWithSecret { + id + tokenId + userId + roleId + name + ttl + status + createdAt + updatedAt + token + } +`; export const UsageStatsFragmentFragmentDoc = gql` fragment usageStatsFragment on UsageStats { totalUsageIn @@ -3907,6 +4066,96 @@ export type FlowsExecutionStatsByPeriodQueryResult = Apollo.QueryResult< FlowsExecutionStatsByPeriodQuery, FlowsExecutionStatsByPeriodQueryVariables >; +export const ApiTokensDocument = gql` + query apiTokens 
{ + apiTokens { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; + +/** + * __useApiTokensQuery__ + * + * To run a query within a React component, call `useApiTokensQuery` and pass it any options that fit your needs. + * When your component renders, `useApiTokensQuery` returns an object from Apollo Client that contains loading, error, and data properties + * you can use to render your UI. + * + * @param baseOptions options that will be passed into the query, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options; + * + * @example + * const { data, loading, error } = useApiTokensQuery({ + * variables: { + * }, + * }); + */ +export function useApiTokensQuery(baseOptions?: Apollo.QueryHookOptions) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useQuery(ApiTokensDocument, options); +} +export function useApiTokensLazyQuery( + baseOptions?: Apollo.LazyQueryHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useLazyQuery(ApiTokensDocument, options); +} +export function useApiTokensSuspenseQuery( + baseOptions?: Apollo.SkipToken | Apollo.SuspenseQueryHookOptions, +) { + const options = baseOptions === Apollo.skipToken ? baseOptions : { ...defaultOptions, ...baseOptions }; + return Apollo.useSuspenseQuery(ApiTokensDocument, options); +} +export type ApiTokensQueryHookResult = ReturnType; +export type ApiTokensLazyQueryHookResult = ReturnType; +export type ApiTokensSuspenseQueryHookResult = ReturnType; +export type ApiTokensQueryResult = Apollo.QueryResult; +export const ApiTokenDocument = gql` + query apiToken($tokenId: String!) { + apiToken(tokenId: $tokenId) { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; + +/** + * __useApiTokenQuery__ + * + * To run a query within a React component, call `useApiTokenQuery` and pass it any options that fit your needs. 
+ * When your component renders, `useApiTokenQuery` returns an object from Apollo Client that contains loading, error, and data properties + * you can use to render your UI. + * + * @param baseOptions options that will be passed into the query, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options; + * + * @example + * const { data, loading, error } = useApiTokenQuery({ + * variables: { + * tokenId: // value for 'tokenId' + * }, + * }); + */ +export function useApiTokenQuery( + baseOptions: Apollo.QueryHookOptions & + ({ variables: ApiTokenQueryVariables; skip?: boolean } | { skip: boolean }), +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useQuery(ApiTokenDocument, options); +} +export function useApiTokenLazyQuery(baseOptions?: Apollo.LazyQueryHookOptions) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useLazyQuery(ApiTokenDocument, options); +} +export function useApiTokenSuspenseQuery( + baseOptions?: Apollo.SkipToken | Apollo.SuspenseQueryHookOptions, +) { + const options = baseOptions === Apollo.skipToken ? baseOptions : { ...defaultOptions, ...baseOptions }; + return Apollo.useSuspenseQuery(ApiTokenDocument, options); +} +export type ApiTokenQueryHookResult = ReturnType; +export type ApiTokenLazyQueryHookResult = ReturnType; +export type ApiTokenSuspenseQueryHookResult = ReturnType; +export type ApiTokenQueryResult = Apollo.QueryResult; export const CreateFlowDocument = gql` mutation createFlow($modelProvider: String!, $input: String!) { createFlow(modelProvider: $modelProvider, input: $input) { @@ -4620,6 +4869,121 @@ export type DeletePromptMutationOptions = Apollo.BaseMutationOptions< DeletePromptMutation, DeletePromptMutationVariables >; +export const CreateApiTokenDocument = gql` + mutation createAPIToken($input: CreateAPITokenInput!) 
{ + createAPIToken(input: $input) { + ...apiTokenWithSecretFragment + } + } + ${ApiTokenWithSecretFragmentFragmentDoc} +`; +export type CreateApiTokenMutationFn = Apollo.MutationFunction; + +/** + * __useCreateApiTokenMutation__ + * + * To run a mutation, you first call `useCreateApiTokenMutation` within a React component and pass it any options that fit your needs. + * When your component renders, `useCreateApiTokenMutation` returns a tuple that includes: + * - A mutate function that you can call at any time to execute the mutation + * - An object with fields that represent the current status of the mutation's execution + * + * @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2; + * + * @example + * const [createApiTokenMutation, { data, loading, error }] = useCreateApiTokenMutation({ + * variables: { + * input: // value for 'input' + * }, + * }); + */ +export function useCreateApiTokenMutation( + baseOptions?: Apollo.MutationHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useMutation(CreateApiTokenDocument, options); +} +export type CreateApiTokenMutationHookResult = ReturnType; +export type CreateApiTokenMutationResult = Apollo.MutationResult; +export type CreateApiTokenMutationOptions = Apollo.BaseMutationOptions< + CreateApiTokenMutation, + CreateApiTokenMutationVariables +>; +export const UpdateApiTokenDocument = gql` + mutation updateAPIToken($tokenId: String!, $input: UpdateAPITokenInput!) { + updateAPIToken(tokenId: $tokenId, input: $input) { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; +export type UpdateApiTokenMutationFn = Apollo.MutationFunction; + +/** + * __useUpdateApiTokenMutation__ + * + * To run a mutation, you first call `useUpdateApiTokenMutation` within a React component and pass it any options that fit your needs. 
+ * When your component renders, `useUpdateApiTokenMutation` returns a tuple that includes: + * - A mutate function that you can call at any time to execute the mutation + * - An object with fields that represent the current status of the mutation's execution + * + * @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2; + * + * @example + * const [updateApiTokenMutation, { data, loading, error }] = useUpdateApiTokenMutation({ + * variables: { + * tokenId: // value for 'tokenId' + * input: // value for 'input' + * }, + * }); + */ +export function useUpdateApiTokenMutation( + baseOptions?: Apollo.MutationHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useMutation(UpdateApiTokenDocument, options); +} +export type UpdateApiTokenMutationHookResult = ReturnType; +export type UpdateApiTokenMutationResult = Apollo.MutationResult; +export type UpdateApiTokenMutationOptions = Apollo.BaseMutationOptions< + UpdateApiTokenMutation, + UpdateApiTokenMutationVariables +>; +export const DeleteApiTokenDocument = gql` + mutation deleteAPIToken($tokenId: String!) { + deleteAPIToken(tokenId: $tokenId) + } +`; +export type DeleteApiTokenMutationFn = Apollo.MutationFunction; + +/** + * __useDeleteApiTokenMutation__ + * + * To run a mutation, you first call `useDeleteApiTokenMutation` within a React component and pass it any options that fit your needs. 
+ * When your component renders, `useDeleteApiTokenMutation` returns a tuple that includes: + * - A mutate function that you can call at any time to execute the mutation + * - An object with fields that represent the current status of the mutation's execution + * + * @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2; + * + * @example + * const [deleteApiTokenMutation, { data, loading, error }] = useDeleteApiTokenMutation({ + * variables: { + * tokenId: // value for 'tokenId' + * }, + * }); + */ +export function useDeleteApiTokenMutation( + baseOptions?: Apollo.MutationHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useMutation(DeleteApiTokenDocument, options); +} +export type DeleteApiTokenMutationHookResult = ReturnType; +export type DeleteApiTokenMutationResult = Apollo.MutationResult; +export type DeleteApiTokenMutationOptions = Apollo.BaseMutationOptions< + DeleteApiTokenMutation, + DeleteApiTokenMutationVariables +>; export const TerminalLogAddedDocument = gql` subscription terminalLogAdded($flowId: ID!) { terminalLogAdded(flowId: $flowId) { @@ -5388,3 +5752,108 @@ export function useProviderDeletedSubscription( } export type ProviderDeletedSubscriptionHookResult = ReturnType; export type ProviderDeletedSubscriptionResult = Apollo.SubscriptionResult; +export const ApiTokenCreatedDocument = gql` + subscription apiTokenCreated { + apiTokenCreated { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; + +/** + * __useApiTokenCreatedSubscription__ + * + * To run a query within a React component, call `useApiTokenCreatedSubscription` and pass it any options that fit your needs. + * When your component renders, `useApiTokenCreatedSubscription` returns an object from Apollo Client that contains loading, error, and data properties + * you can use to render your UI. 
+ * + * @param baseOptions options that will be passed into the subscription, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options; + * + * @example + * const { data, loading, error } = useApiTokenCreatedSubscription({ + * variables: { + * }, + * }); + */ +export function useApiTokenCreatedSubscription( + baseOptions?: Apollo.SubscriptionHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useSubscription( + ApiTokenCreatedDocument, + options, + ); +} +export type ApiTokenCreatedSubscriptionHookResult = ReturnType; +export type ApiTokenCreatedSubscriptionResult = Apollo.SubscriptionResult; +export const ApiTokenUpdatedDocument = gql` + subscription apiTokenUpdated { + apiTokenUpdated { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; + +/** + * __useApiTokenUpdatedSubscription__ + * + * To run a query within a React component, call `useApiTokenUpdatedSubscription` and pass it any options that fit your needs. + * When your component renders, `useApiTokenUpdatedSubscription` returns an object from Apollo Client that contains loading, error, and data properties + * you can use to render your UI. 
+ * + * @param baseOptions options that will be passed into the subscription, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options; + * + * @example + * const { data, loading, error } = useApiTokenUpdatedSubscription({ + * variables: { + * }, + * }); + */ +export function useApiTokenUpdatedSubscription( + baseOptions?: Apollo.SubscriptionHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useSubscription( + ApiTokenUpdatedDocument, + options, + ); +} +export type ApiTokenUpdatedSubscriptionHookResult = ReturnType; +export type ApiTokenUpdatedSubscriptionResult = Apollo.SubscriptionResult; +export const ApiTokenDeletedDocument = gql` + subscription apiTokenDeleted { + apiTokenDeleted { + ...apiTokenFragment + } + } + ${ApiTokenFragmentFragmentDoc} +`; + +/** + * __useApiTokenDeletedSubscription__ + * + * To run a query within a React component, call `useApiTokenDeletedSubscription` and pass it any options that fit your needs. + * When your component renders, `useApiTokenDeletedSubscription` returns an object from Apollo Client that contains loading, error, and data properties + * you can use to render your UI. 
+ * + * @param baseOptions options that will be passed into the subscription, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options; + * + * @example + * const { data, loading, error } = useApiTokenDeletedSubscription({ + * variables: { + * }, + * }); + */ +export function useApiTokenDeletedSubscription( + baseOptions?: Apollo.SubscriptionHookOptions, +) { + const options = { ...defaultOptions, ...baseOptions }; + return Apollo.useSubscription( + ApiTokenDeletedDocument, + options, + ); +} +export type ApiTokenDeletedSubscriptionHookResult = ReturnType; +export type ApiTokenDeletedSubscriptionResult = Apollo.SubscriptionResult; diff --git a/frontend/src/lib/apollo.ts b/frontend/src/lib/apollo.ts index cab9c981..a825853c 100644 --- a/frontend/src/lib/apollo.ts +++ b/frontend/src/lib/apollo.ts @@ -133,6 +133,9 @@ const streamingLink = new ApolloLink((operation: Operation, forward) => { // Mapping of subscription names to their corresponding cache field names const subscriptionToCacheFieldMap: Record = { agentLogAdded: 'agentLogs', + apiTokenCreated: 'apiTokens', + apiTokenDeleted: 'apiTokens', + apiTokenUpdated: 'apiTokens', assistantCreated: 'assistants', assistantDeleted: 'assistants', assistantLogAdded: 'assistantLogs', @@ -143,6 +146,9 @@ const subscriptionToCacheFieldMap: Record = { flowUpdated: 'flows', messageLogAdded: 'messageLogs', messageLogUpdated: 'messageLogs', + providerCreated: 'settingsProviders', + providerDeleted: 'settingsProviders', + providerUpdated: 'settingsProviders', screenshotAdded: 'screenshots', searchLogAdded: 'searchLogs', taskCreated: 'tasks', @@ -287,6 +293,9 @@ const cache = new InMemoryCache({ AgentLog: { keyFields: ['id'], }, + APIToken: { + keyFields: ['tokenId'], + }, Assistant: { keyFields: ['id'], }, @@ -321,6 +330,12 @@ const cache = new InMemoryCache({ return incoming; }, }, + // API tokens - always use latest + apiTokens: { + merge(_existing, incoming) { + return incoming; + }, + 
}, // Assistant logs - cache by flowId and assistantId arguments assistantLogs: { keyArgs: ['flowId', 'assistantId'], diff --git a/frontend/src/pages/settings/settings-api-tokens.tsx b/frontend/src/pages/settings/settings-api-tokens.tsx new file mode 100644 index 00000000..39d048d5 --- /dev/null +++ b/frontend/src/pages/settings/settings-api-tokens.tsx @@ -0,0 +1,857 @@ +import type { ColumnDef } from '@tanstack/react-table'; + +import { + AlertCircle, + ArrowDown, + ArrowUp, + CalendarIcon, + Check, + Copy, + ExternalLink, + Key, + Loader2, + MoreHorizontal, + Pencil, + Plus, + Trash, + X, +} from 'lucide-react'; +import { useCallback, useMemo, useState } from 'react'; +import { toast } from 'sonner'; + +import type { ApiTokenFragmentFragment } from '@/graphql/types'; + +import ConfirmationDialog from '@/components/shared/confirmation-dialog'; +import { Alert, AlertDescription, AlertTitle } from '@/components/ui/alert'; +import { Badge } from '@/components/ui/badge'; +import { Button } from '@/components/ui/button'; +import { Calendar } from '@/components/ui/calendar'; +import { DataTable } from '@/components/ui/data-table'; +import { Dialog, DialogContent, DialogDescription, DialogHeader, DialogTitle } from '@/components/ui/dialog'; +import { + DropdownMenu, + DropdownMenuContent, + DropdownMenuItem, + DropdownMenuSeparator, + DropdownMenuTrigger, +} from '@/components/ui/dropdown-menu'; +import { Input } from '@/components/ui/input'; +import { Popover, PopoverContent, PopoverTrigger } from '@/components/ui/popover'; +import { Select, SelectContent, SelectGroup, SelectItem, SelectTrigger, SelectValue } from '@/components/ui/select'; +import { StatusCard } from '@/components/ui/status-card'; +import { + TokenStatus as TokenStatusEnum, + useApiTokenCreatedSubscription, + useApiTokenDeletedSubscription, + useApiTokensQuery, + useApiTokenUpdatedSubscription, + useCreateApiTokenMutation, + useDeleteApiTokenMutation, + useUpdateApiTokenMutation, +} from 
'@/graphql/types'; +import { cn } from '@/lib/utils'; +import { baseUrl } from '@/models/api'; + +type APIToken = ApiTokenFragmentFragment; + +interface CreateFormData { + expiresAt: Date | null; + name: string; +} + +interface EditFormData { + name: string; + status: TokenStatusEnum; +} + +const isTokenExpired = (token: APIToken): boolean => { + const expiresAt = new Date(token.createdAt); + + expiresAt.setSeconds(expiresAt.getSeconds() + token.ttl); + + return expiresAt < new Date(); +}; + +const getTokenExpirationDate = (token: APIToken): Date => { + const expiresAt = new Date(token.createdAt); + + expiresAt.setSeconds(expiresAt.getSeconds() + token.ttl); + + return expiresAt; +}; + +const getStatusDisplay = ( + token: APIToken, +): { label: string; variant: 'default' | 'destructive' | 'outline' | 'secondary' } => { + const expired = isTokenExpired(token); + + if (expired) { + return { label: 'expired', variant: 'destructive' }; + } + + if (token.status === 'active') { + return { label: 'active', variant: 'default' }; + } + + if (token.status === 'revoked') { + return { label: 'revoked', variant: 'outline' }; + } + + return { label: token.status, variant: 'secondary' }; +}; + +const calculateTTL = (expiresAt: Date): number => { + const now = new Date(); + const diffMs = expiresAt.getTime() - now.getTime(); + const diffSeconds = Math.ceil(diffMs / 1000); + + return Math.max(60, diffSeconds); +}; + +const copyToClipboard = async (text: string): Promise => { + try { + await navigator.clipboard.writeText(text); + + return true; + } catch (error) { + console.error('Failed to copy to clipboard:', error); + + return false; + } +}; + +const SettingsAPITokensHeader = ({ onCreateClick }: { onCreateClick: () => void }) => { + return ( +
+        <div className="flex items-center justify-between">
+            <p className="text-sm text-muted-foreground">Manage API tokens for programmatic access</p>
+            <Button onClick={onCreateClick} size="sm">
+                <Plus className="mr-2 size-4" />
+                Create Token
+            </Button>
+        </div>
+ ); +}; + +const createNewTokenPlaceholder: APIToken = { + createdAt: new Date().toISOString(), + id: 'create-new', + name: null, + roleId: '0', + status: TokenStatusEnum.Active, + tokenId: '', + ttl: 0, + updatedAt: new Date().toISOString(), + userId: '0', +}; + +const SettingsAPITokens = () => { + const { data, error, loading: isLoading } = useApiTokensQuery(); + const [createAPIToken, { error: createError, loading: isCreateLoading }] = useCreateApiTokenMutation(); + const [updateAPIToken, { error: updateError, loading: isUpdateLoading }] = useUpdateApiTokenMutation(); + const [deleteAPIToken, { error: deleteError, loading: isDeleteLoading }] = useDeleteApiTokenMutation(); + + const [editingTokenId, setEditingTokenId] = useState(null); + const [creatingToken, setCreatingToken] = useState(false); + const [editFormData, setEditFormData] = useState({ name: '', status: TokenStatusEnum.Active }); + const [createFormData, setCreateFormData] = useState({ expiresAt: null, name: '' }); + const [tokenSecret, setTokenSecret] = useState(null); + const [showTokenDialog, setShowTokenDialog] = useState(false); + const [deleteErrorMessage, setDeleteErrorMessage] = useState(null); + const [isDeleteDialogOpen, setIsDeleteDialogOpen] = useState(false); + const [deletingToken, setDeletingToken] = useState(null); + + useApiTokenCreatedSubscription({ + onData: ({ client }) => { + client.refetchQueries({ include: ['apiTokens'] }); + }, + }); + + useApiTokenUpdatedSubscription({ + onData: ({ client }) => { + client.refetchQueries({ include: ['apiTokens'] }); + }, + }); + + useApiTokenDeletedSubscription({ + onData: ({ client }) => { + client.refetchQueries({ include: ['apiTokens'] }); + }, + }); + + const handleEdit = useCallback((token: APIToken) => { + setEditingTokenId(token.tokenId); + setEditFormData({ + name: token.name || '', + status: token.status, + }); + }, []); + + const handleCancelEdit = useCallback(() => { + setEditingTokenId(null); + setEditFormData({ name: '', status: 
TokenStatusEnum.Active }); + }, []); + + const handleSave = useCallback( + async (tokenId: string) => { + try { + await updateAPIToken({ + refetchQueries: ['apiTokens'], + variables: { + input: { + name: editFormData.name || null, + status: editFormData.status, + }, + tokenId, + }, + }); + + setEditingTokenId(null); + setEditFormData({ name: '', status: TokenStatusEnum.Active }); + } catch (error) { + console.error('Failed to update token:', error); + } + }, + [editFormData, updateAPIToken], + ); + + const handleCreateNew = useCallback(() => { + setCreatingToken(true); + setCreateFormData({ expiresAt: null, name: '' }); + }, []); + + const handleCancelCreate = useCallback(() => { + setCreatingToken(false); + setCreateFormData({ expiresAt: null, name: '' }); + }, []); + + const handleCreate = useCallback(async () => { + if (!createFormData.expiresAt) { + return; + } + + try { + const ttl = calculateTTL(createFormData.expiresAt); + const result = await createAPIToken({ + refetchQueries: ['apiTokens'], + variables: { + input: { + name: createFormData.name || null, + ttl, + }, + }, + }); + + if (result.data?.createAPIToken) { + setTokenSecret(result.data.createAPIToken.token); + setShowTokenDialog(true); + } + + setCreatingToken(false); + setCreateFormData({ expiresAt: null, name: '' }); + } catch (error) { + console.error('Failed to create token:', error); + } + }, [createAPIToken, createFormData]); + + const handleDeleteDialogOpen = useCallback((token: APIToken) => { + setDeletingToken(token); + setIsDeleteDialogOpen(true); + }, []); + + const handleDelete = useCallback( + async (tokenId: string | undefined) => { + if (!tokenId) { + return; + } + + try { + setDeleteErrorMessage(null); + + await deleteAPIToken({ + refetchQueries: ['apiTokens'], + variables: { tokenId }, + }); + + setDeletingToken(null); + setDeleteErrorMessage(null); + } catch (error) { + setDeleteErrorMessage(error instanceof Error ? 
error.message : 'An error occurred while deleting'); + } + }, + [deleteAPIToken], + ); + + const handleCopyTokenId = useCallback(async (tokenId: string) => { + const success = await copyToClipboard(tokenId); + + if (success) { + toast.success('Token ID copied to clipboard'); + + return; + } + + toast.error('Failed to copy token ID to clipboard'); + }, []); + + const columns: ColumnDef[] = useMemo( + () => [ + { + accessorKey: 'name', + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + const isEditing = editingTokenId === token.tokenId; + + if (isCreating) { + return ( + setCreateFormData((prev) => ({ ...prev, name: e.target.value }))} + placeholder="Token name (optional)" + value={createFormData.name} + /> + ); + } + + if (isEditing) { + return ( + setEditFormData((prev) => ({ ...prev, name: e.target.value }))} + placeholder="Token name (optional)" + value={editFormData.name} + /> + ); + } + + return ( +
+                    <span className="truncate text-sm">{token.name || <span className="text-muted-foreground">(unnamed)</span>}</span>
+ ); + }, + header: ({ column }) => { + const sorted = column.getIsSorted(); + + return ( + + ); + }, + size: 300, + }, + { + accessorKey: 'tokenId', + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + + if (isCreating) { + return
<span className="text-xs text-muted-foreground">N/A</span>
; + } + + const tokenId = row.getValue('tokenId') as string; + + return ( +
+                    <div className="flex items-center gap-2">
+                        <code className="font-mono text-xs">{tokenId}</code>
+                        <Button onClick={() => handleCopyTokenId(tokenId)} size="icon" variant="ghost">
+                            <Copy className="size-4" />
+                        </Button>
+                    </div>
+ ); + }, + header: ({ column }) => { + const sorted = column.getIsSorted(); + + return ( + + ); + }, + size: 200, + }, + { + accessorKey: 'status', + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + + if (isCreating) { + return active; + } + + const isEditing = editingTokenId === token.tokenId; + const expired = isTokenExpired(token); + const statusDisplay = getStatusDisplay(token); + + if (isEditing) { + if (expired) { + return {statusDisplay.label}; + } + + return ( + + ); + } + + return {statusDisplay.label}; + }, + header: ({ column }) => { + const sorted = column.getIsSorted(); + + return ( + + ); + }, + size: 120, + }, + { + accessorKey: 'expires', + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + + if (isCreating) { + const tomorrow = new Date(); + + tomorrow.setDate(tomorrow.getDate() + 1); + tomorrow.setHours(0, 0, 0, 0); + + return ( + + + + + + { + setCreateFormData((prev) => ({ ...prev, expiresAt: date || null })); + }} + selected={createFormData.expiresAt || undefined} + /> + + + ); + } + + const expiresAt = getTokenExpirationDate(token); + + return
<div className="text-sm">{expiresAt.toLocaleDateString()}</div>
; + }, + header: ({ column }) => { + const sorted = column.getIsSorted(); + + return ( + + ); + }, + size: 150, + sortingFn: (rowA, rowB) => { + const expiresA = getTokenExpirationDate(rowA.original); + const expiresB = getTokenExpirationDate(rowB.original); + + return expiresA.getTime() - expiresB.getTime(); + }, + }, + { + accessorKey: 'createdAt', + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + + if (isCreating) { + return
<span className="text-xs text-muted-foreground">N/A</span>
; + } + + const date = new Date(row.getValue('createdAt')); + + return
<div className="text-sm">{date.toLocaleDateString()}</div>
; + }, + header: ({ column }) => { + const sorted = column.getIsSorted(); + + return ( + + ); + }, + size: 120, + }, + { + cell: ({ row }) => { + const token = row.original; + const isCreating = token.id === 'create-new'; + const isEditing = editingTokenId === token.tokenId; + + if (isCreating) { + return ( +
+                    <div className="flex items-center justify-end gap-1">
+                        <Button
+                            disabled={isCreateLoading || !createFormData.expiresAt}
+                            onClick={handleCreate}
+                            size="icon"
+                            variant="ghost"
+                        >
+                            {isCreateLoading ? <Loader2 className="size-4 animate-spin" /> : <Check className="size-4" />}
+                        </Button>
+                        <Button onClick={handleCancelCreate} size="icon" variant="ghost">
+                            <X className="size-4" />
+                        </Button>
+                    </div>
+ ); + } + + if (isEditing) { + return ( +
+                    <div className="flex items-center justify-end gap-1">
+                        <Button
+                            disabled={isUpdateLoading}
+                            onClick={() => handleSave(token.tokenId)}
+                            size="icon"
+                            variant="ghost"
+                        >
+                            {isUpdateLoading ? <Loader2 className="size-4 animate-spin" /> : <Check className="size-4" />}
+                        </Button>
+                        <Button onClick={handleCancelEdit} size="icon" variant="ghost">
+                            <X className="size-4" />
+                        </Button>
+                    </div>
+ ); + } + + return ( +
+                    <div className="flex justify-end">
+                        <DropdownMenu>
+                            <DropdownMenuTrigger asChild>
+                                <Button size="icon" variant="ghost">
+                                    <MoreHorizontal className="size-4" />
+                                </Button>
+                            </DropdownMenuTrigger>
+                            <DropdownMenuContent align="end">
+                                <DropdownMenuItem onClick={() => handleEdit(token)}>
+                                    <Pencil className="mr-2 size-4" />
+                                    Edit
+                                </DropdownMenuItem>
+                                <DropdownMenuSeparator />
+                                <DropdownMenuItem
+                                    className="text-destructive"
+                                    onClick={() => handleDeleteDialogOpen(token)}
+                                >
+                                    {isDeleteLoading && deletingToken?.tokenId === token.tokenId ? (
+                                        <>
+                                            <Loader2 className="mr-2 size-4 animate-spin" />
+                                            Deleting...
+                                        </>
+                                    ) : (
+                                        <>
+                                            <Trash className="mr-2 size-4" />
+                                            Delete
+                                        </>
+                                    )}
+                                </DropdownMenuItem>
+                            </DropdownMenuContent>
+                        </DropdownMenu>
+                    </div>
+ ); + }, + enableHiding: false, + header: () => null, + id: 'actions', + size: 48, + }, + ], + [ + createFormData.expiresAt, + createFormData.name, + editFormData.name, + editFormData.status, + editingTokenId, + handleCancelCreate, + handleCancelEdit, + handleCopyTokenId, + handleCreate, + handleDeleteDialogOpen, + handleEdit, + handleSave, + isCreateLoading, + isDeleteLoading, + isUpdateLoading, + deletingToken, + ], + ); + + if (isLoading) { + return ( +
+            <div className="flex h-full items-center justify-center p-6">
+                <StatusCard
+                    icon={<Loader2 className="size-8 animate-spin" />}
+                    title="Loading tokens..."
+                />
+            </div>
+ ); + } + + if (error) { + return ( +
+            <div className="p-6">
+                <Alert variant="destructive">
+                    <AlertCircle className="size-4" />
+                    <AlertTitle>Error loading tokens</AlertTitle>
+                    <AlertDescription>{error.message}</AlertDescription>
+                </Alert>
+            </div>
+ ); + } + + const tokens = data?.apiTokens || []; + + if (tokens.length === 0 && !creatingToken) { + return ( +
+            <div className="p-6">
+                <StatusCard
+                    action={
+                        <Button onClick={handleCreateNew}>
+                            <Plus className="mr-2 size-4" />
+                            Create Token
+                        </Button>
+                    }
+                    description="Create your first API token to access PentAGI programmatically"
+                    icon={<Key className="size-8" />}
+                    title="No API tokens configured"
+                />
+            </div>
+ ); + } + + return ( +
+        <div className="flex flex-col gap-4 p-6">
+            <SettingsAPITokensHeader onCreateClick={handleCreateNew} />
+
+            {(createError || updateError || deleteError || deleteErrorMessage) && (
+                <Alert variant="destructive">
+                    <AlertCircle className="size-4" />
+                    <AlertTitle>Error</AlertTitle>
+                    <AlertDescription>
+                        {createError?.message || updateError?.message || deleteError?.message || deleteErrorMessage}
+                    </AlertDescription>
+                </Alert>
+            )}
+
+            <DataTable columns={columns} data={creatingToken ? [createNewTokenPlaceholder, ...tokens] : tokens} />
+
+            <Dialog onOpenChange={setShowTokenDialog} open={showTokenDialog}>
+                <DialogContent>
+                    <DialogHeader>
+                        <DialogTitle>API Token Created</DialogTitle>
+                        <DialogDescription>
+                            Copy this token now. You won't be able to see it again for security reasons.
+                        </DialogDescription>
+                    </DialogHeader>
+                    <div className="flex items-center gap-2">
+                        <code className="break-all font-mono text-xs">{tokenSecret}</code>
+                        <Button
+                            onClick={async () => {
+                                if (tokenSecret && (await copyToClipboard(tokenSecret))) {
+                                    toast.success('Token copied to clipboard');
+                                }
+                            }}
+                            size="icon"
+                            variant="ghost"
+                        >
+                            <Copy className="size-4" />
+                        </Button>
+                    </div>
+                </DialogContent>
+            </Dialog>
+            <ConfirmationDialog
+                handleConfirm={() => handleDelete(deletingToken?.tokenId)}
+                handleOpenChange={setIsDeleteDialogOpen}
+                isOpen={isDeleteDialogOpen}
+                itemName={deletingToken?.name || deletingToken?.tokenId}
+                itemType="token"
+            />
+        </div>
+ ); +}; + +export default SettingsAPITokens;
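The settings page above never stores an expiry timestamp: a token's expiration is derived from `createdAt` plus `ttl` (in seconds), and the one-time secret returned by `createAPIToken` is then sent as a Bearer credential on API calls. The helpers below are a standalone sketch of that lifecycle math; the function names (`expirationDate`, `isExpired`, `ttlFor`, `authHeaders`) are illustrative and not part of the diff:

```typescript
interface ApiTokenLike {
    createdAt: string; // ISO timestamp from the API
    ttl: number; // lifetime in seconds, relative to createdAt
}

// Mirrors getTokenExpirationDate(): createdAt shifted forward by ttl seconds.
function expirationDate(token: ApiTokenLike): Date {
    const expires = new Date(token.createdAt);
    expires.setSeconds(expires.getSeconds() + token.ttl);
    return expires;
}

// Mirrors isTokenExpired(): a token is expired once its derived expiry is in the past.
function isExpired(token: ApiTokenLike, now: Date = new Date()): boolean {
    return expirationDate(token) < now;
}

// Mirrors calculateTTL(): seconds until a chosen expiry date, floored at 60 seconds
// (the UI's one-minute minimum lifetime).
function ttlFor(expiresAt: Date, now: Date = new Date()): number {
    return Math.max(60, Math.ceil((expiresAt.getTime() - now.getTime()) / 1000));
}

// The created secret is used as a Bearer credential on subsequent requests.
function authHeaders(secret: string): Record<string, string> {
    return { Authorization: `Bearer ${secret}`, 'Content-Type': 'application/json' };
}

const token: ApiTokenLike = { createdAt: '2024-01-01T00:00:00Z', ttl: 3600 };
console.log(expirationDate(token).toISOString()); // 2024-01-01T01:00:00.000Z
console.log(isExpired(token, new Date('2024-01-01T00:30:00Z'))); // false
console.log(authHeaders('pt-secret').Authorization); // Bearer pt-secret
```

Note the asymmetry this creates: the backend stores `status` (`active`/`revoked`) but the `expired` state is computed client-side, which is why `getStatusDisplay` checks expiry before looking at `token.status`.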