- tools allowed callers for client and server
- all tool definitions common options
- new code_execution, web_fetch, web_search tools
- top-level cache_control
- thinking with disabled summaries for speed
- message updates with container variants
- fix tool_search_tool results
Replace inline toISOString() with prettyTimestampForFilenames(false)
to match the other two export options that already use local time.
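A minimal sketch of what such a local-time filename helper might look like (the name `prettyTimestampForFilenames` comes from the commit; the exact output format here is an assumption):

```typescript
// Sketch of a local-time timestamp for filenames. The real helper is
// prettyTimestampForFilenames; this format is an assumption for illustration.
function localTimestampForFilenames(includeSeconds: boolean): string {
  const d = new Date(); // local time, unlike toISOString() which is UTC
  const pad = (n: number) => String(n).padStart(2, '0');
  const date = `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
  const time = `${pad(d.getHours())}${pad(d.getMinutes())}`;
  return includeSeconds ? `${date}_${time}${pad(d.getSeconds())}` : `${date}_${time}`;
}
```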
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- default Joy color scheme to system
- cycle theme control through light, dark, and system modes
- update labels and icons to reflect the active theme preference
Signed-off-by: blacksuan19 <abubakaryagob@gmail.com>
Add keyboard shortcuts for toggling left drawer (Ctrl+() and right panel
(Ctrl+)). Also adds a reusable `skipIfInput` flag on ShortcutObject that
skips shortcuts when a text input, textarea, or contenteditable element
(or child thereof) is focused - not applied to these layout shortcuts but
available for future use.
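The focus check described above can be sketched as a pure predicate; names other than `skipIfInput` are hypothetical, and structural typing keeps it testable without a DOM:

```typescript
// Sketch of the skipIfInput check: walk up from the focused element and
// suppress the shortcut if an input, textarea, or contenteditable element
// (or any ancestor thereof) is focused.
interface FocusableLike {
  tagName: string;
  isContentEditable?: boolean;
  parentElement?: FocusableLike | null;
}

function isTextEntryFocused(el: FocusableLike | null | undefined): boolean {
  for (let node = el; node; node = node.parentElement ?? null) {
    if (node.tagName === 'INPUT' || node.tagName === 'TEXTAREA' || node.isContentEditable)
      return true;
  }
  return false;
}

// a shortcut handler would then do something like:
// if (shortcut.skipIfInput && isTextEntryFocused(document.activeElement as any)) return;
```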
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
The useLogicSherpaStore.setState() call at module level ran during
server-side rendering where localStorage is unavailable, causing a
hydration crash. Wrap with isBrowser so it only executes in the
browser context.
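A minimal sketch of the guard, assuming nothing about the store beyond the commit's description (the function name here is hypothetical):

```typescript
// SSR-safe guard: module-level state hydration must not touch localStorage
// on the server, where window is undefined.
const isBrowser = typeof (globalThis as any).window !== 'undefined';

function hydrateFromLocalStorage(apply: (value: string | null) => void): void {
  if (!isBrowser) return; // no-op during server-side rendering
  apply((globalThis as any).window.localStorage.getItem('logic-sherpa'));
}
```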
Signed-off-by: blacksuan19 <abubakaryagob@gmail.com>
Instead of clipping on the Collapsee box, we use it as the FR target with a minHeight of 0;
the parent takes the correct height, and everything is clipped to the parent.
This is because the exception actually gets trapped locally in the deeper layers
due to client-side processing, which then creates a particle for the abort;
that particle is never used because the outer layer discards it without notice.
Fixes #948: OpenAI-through-OR verbosity is sync'd with OpenAI models.
Fixes #893: Gemini-through-OR parameters are synchronized with Gemini models.
Fixes #940: OpenAI-through-OR reasoning effort is synced with OpenAI models and much improved. We still have to fix #944 for OpenAI levels to be fully sync'd with upstream (in progress).
This caused all unknown models to default to the responses API.
We need heuristics for determining OpenAI vs OpenAI-compatible.
This reverts commit a6862d8c58.
Add the ability for users to configure a custom API host for DeepSeek,
allowing them to use alternative endpoints like https://api.deepseek.com/beta.
Changes:
- Add `deepseekHost` to DDeepseekServiceSettings interface
- Wire deepseekHost to oaiHost in transport layer
- Add API Host form field visible in advanced settings
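The host wiring can be sketched as a small resolver; the default endpoint below is an assumption (the commit only names the `/beta` alternative):

```typescript
// Sketch of the deepseekHost -> oaiHost wiring: a user-configured host wins,
// otherwise fall back to the assumed default DeepSeek endpoint.
function resolveDeepseekHost(deepseekHost?: string): string {
  return deepseekHost?.trim() || 'https://api.deepseek.com';
}
```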
Closes #909
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
The Midori AI Subsystem is being sunset as announced in issue #902.
This removes the deployment section from the installation documentation.
Closes #902
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
- Add gpt-5.1-2025-11-12 (GPT-5.1 Thinking) with adaptive reasoning
- Add gpt-5.1-chat-latest (GPT-5.1 Instant) with adaptive reasoning
- Both models include full feature set: chat, vision, function calling, JSON, prompt caching, reasoning, web search
- Pricing set to match GPT-5 (to be updated when official pricing is announced)
- Added models to manual ordering list for proper UI sorting
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
Add userPricing field to DLLM interface following the established pattern
for user overrides (similar to userContextTokens and userMaxOutputTokens).
This enables users to set custom pricing for local models (Ollama, LM Studio, etc.)
to track "what if" costs and compare with cloud models.
Changes:
- Added userPricing field to DLLM interface (llms.types.ts)
- Added getLLMPricing() getter function with override precedence
- Updated store to preserve userPricing during model updates
- Updated all llm.pricing access points to use getLLMPricing()
- Added pricing override UI in LLMOptionsModal (Details section)
- Input price ($/M tokens)
- Output price ($/M tokens)
- Reset buttons for each field
- Cost calculations automatically use user pricing when set
- Existing cost display in tooltips works with user pricing
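The override precedence described above can be sketched in a few lines; the field shapes here are assumptions, only the names `userPricing`, `pricing`, and `getLLMPricing` come from the commit:

```typescript
// Sketch of getLLMPricing(): user-set pricing takes precedence over the
// vendor-provided pricing, mirroring userContextTokens/userMaxOutputTokens.
interface PricingLike { input: number; output: number } // $/M tokens
interface LLMLike { pricing?: PricingLike; userPricing?: PricingLike }

function getLLMPricing(llm: LLMLike): PricingLike | undefined {
  return llm.userPricing ?? llm.pricing;
}
```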
Resolves #860
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
This refactor allows for low-level looping on the client side.
This can be used for network errors between server<>upstream reported as particles,
as well as for client<>server connections.
One special case of this is the OpenAI system to reattach to detached (background) requests,
or as an alternative to re-fetch them from the server once completed.
This refactor decomposes the chatGeneration procedure into composable blocks.
This allows, for instance, chatGeneration-like outputs from different inputs,
enabling `resumability` of a background connection.
Moreover this reorganizes the phases of a CG operation, and includes a generic executor
that takes creator functions for Dispatchers.
- Read recognitionState.errorMessage in Telephone component
- Pass error message to CallStatus component
- Display specific error messages instead of generic fallback
- Matches error handling pattern used in Chat/Composer
This ensures users see detailed error messages instead of the generic
"Browser may not support" text.
Fixes #829 by making speech recognition errors visible to users.
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
Fixes #850 - Model visibility was being reset after app updates.
Root cause: User visibility changes were stored in `hidden` field instead of
`userHidden`, but the preservation logic only looked for `userHidden`. This
caused user preferences to be lost during model updates.
Changes:
- Added isLLMHidden() helper to compute effective visibility (userHidden ?? hidden)
- Fixed all write paths to set userHidden instead of hidden (3 files)
- Fixed all read paths to use isLLMHidden() (7 files, 14 locations)
This ensures:
- User preferences persist across updates
- Vendor visibility changes still propagate for untouched models
- Bulk operations work correctly
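The effective-visibility rule above is simple enough to sketch; the helper name and the `userHidden ?? hidden` precedence come from the commit, the default of `false` is an assumption:

```typescript
// Sketch of isLLMHidden(): the user's explicit choice (userHidden) wins over
// the vendor default (hidden); a model with neither set is assumed visible.
function isLLMHidden(llm: { hidden?: boolean; userHidden?: boolean }): boolean {
  return llm.userHidden ?? llm.hidden ?? false;
}
```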
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
NOTE: this works without saving the server-side tool invocation and the subsequent responses
to AIX particles, and consequently to DMessageFragments of the appropriate type.
-> Shall do it with execution graph fragments.
Implements comprehensive support for Anthropic's web search (web_search_20250305) and web fetch (web_fetch_20250910) tools.
- Add llmVndAntWebSearch and llmVndAntWebFetch parameters with ['auto', 'off'] options
- Enable tools for Claude 4.5, 4.1, 4, 3.7, 3.5 Sonnet/Haiku/Opus models (including thinking variants)
- Inject web_search_20250305 and web_fetch_20250910 tools based on parameter values
- Configure web search with max_uses=5 for progressive searches
- Configure web fetch with max_uses=5 and citations enabled
- Add dynamic beta header injection for web fetch (web-fetch-2025-09-10)
- Add UI controls in model settings for easy parameter configuration
Parser already supports web_search_tool_result and web_fetch_tool_result blocks (no changes needed).
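The injection described above can be sketched as follows; the tool type strings, `max_uses=5`, citations, and beta header come from the commit, while the function name and request shape are simplified assumptions:

```typescript
// Sketch of the hosted-tool injection for Anthropic web search and web fetch,
// driven by the ['auto', 'off'] parameter values described above.
type ToolParam = 'auto' | 'off';

function buildAnthropicHostedTools(webSearch: ToolParam, webFetch: ToolParam) {
  const tools: object[] = [];
  const betas: string[] = [];
  if (webSearch === 'auto')
    tools.push({ type: 'web_search_20250305', name: 'web_search', max_uses: 5 });
  if (webFetch === 'auto') {
    tools.push({ type: 'web_fetch_20250910', name: 'web_fetch', max_uses: 5, citations: { enabled: true } });
    betas.push('web-fetch-2025-09-10'); // dynamic beta header, per the commit
  }
  return { tools, betas };
}
```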
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Enrico Ros <enricoros@users.noreply.github.com>
- Use console.warn for benign removeChild errors
- Skip PostHog reporting for these errors
- More succinct implementation
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
- Added parameter to GPT-5, GPT-5 Pro, GPT-5 Codex, and GPT-5 Mini
- These models require Organization ID verification for streaming
- Kept existing parameter on o3 and o3-pro models
- Did not modify GPT-5 Nano, GPT-5 Chat Latest, GPT-4.1, or GPT-4o models which work fine
Fixes #847
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
- Add nullish coalescing (??) after optional chaining to ensure string return
- Prevents undefined from propagating through the promise chain
- Fixes potential TypeError when calling .startsWith() on undefined
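A tiny illustration of the pattern being fixed; the object shape and function name are hypothetical:

```typescript
// Without `?? ''`, optional chaining can yield undefined, and calling
// .startsWith() on it throws a TypeError further down the promise chain.
function hasPrefix(response: { url?: string } | undefined, prefix: string): boolean {
  return (response?.url ?? '').startsWith(prefix);
}
```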
Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
Note that display:grid was fitting to contents, but we prefer display:flex (direction: column),
so we changed the maxWidth property from a fixed 700 to adapt to the screen size.
Azure OpenAI doesn't support the web_search_preview tool, which was causing
"Hosted tool 'web_search_preview' is not supported" errors with GPT-5 models.
## Changes:
- Pass dialect information to aixToOpenAIResponses function
- Skip web_search_preview tool addition when dialect is 'azure'
- Add logging when web search is skipped for Azure
- Document known Azure limitations in implementation guide
## Impact:
- Fixes web browsing errors with Azure GPT-5 models
- Maintains web search functionality for regular OpenAI models
- Provides clear logging for debugging
This is a critical fix for Azure OpenAI compatibility as web search is not
currently supported on Azure's Responses API implementation.
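The dialect gate can be sketched as below; the `'azure' | 'openai'` dialect values and the skip-with-logging behavior follow the commit text, everything else is simplified:

```typescript
// Sketch of the Azure gate: web_search_preview is only injected for the
// regular OpenAI dialect, with a log line when skipped for Azure.
type OpenAIDialect = 'openai' | 'azure';

function hostedToolsFor(dialect: OpenAIDialect, wantWebSearch: boolean): object[] {
  if (!wantWebSearch) return [];
  if (dialect === 'azure') {
    console.log('[aix] skipping web_search_preview: unsupported on Azure Responses API');
    return [];
  }
  return [{ type: 'web_search_preview' }];
}
```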
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
This commit addresses GitHub issue #828 by fixing URL construction for Azure OpenAI's Responses API
and preventing malformed URLs from client configuration issues.
## Problems Fixed:
1. Host normalization: Prevents malformed URLs when client config includes paths/queries
2. API paradigm support: Properly handles Azure's next-gen v1 Responses API
3. API version consistency: Centralizes version management with env overrides
## Key Changes:
- Normalize Azure host URLs to origin only (strip path/query)
- Prefer server environment variables over client-provided hosts
- Add special handling for Responses API (/openai/v1/responses)
- Support both traditional (deployment-based) and v1 API paradigms
- Add configurable API versions via environment variables
- Include debug logging for API paradigm selection
## New Environment Variables:
- AZURE_API_V1: Enable next-gen v1 API explicitly
- AZURE_RESPONSES_API_VERSION: Control Responses API version
- AZURE_CHAT_API_VERSION: Control Chat Completions API version
- AZURE_DEPLOYMENTS_API_VERSION: Control deployments listing API version
## Testing:
Validated with Azure OpenAI endpoint showing:
- List Deployments: ✅ Works
- Chat Completions: ✅ Works (with correct params for GPT-5)
- Responses API (v1): ✅ Works with /openai/v1/responses?api-version=preview
- Responses API (traditional): ❌ 404 (Azure doesn't support this pattern)
The fix defaults to using Azure's recommended next-gen v1 API for Responses
while maintaining backward compatibility for existing deployments.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
Warning: older browsers will ignore any CSS declaration containing round() calls.
However, we already introduced top-level layout rounds in 85e4946f (Fix fractional sizes of drawer and pane).
To restore support for old browsers, 'round()' calls would need to be stripped down to their plain values.
Background: all of a sudden Sharp started not building anymore with the following error message:
```
./public/images/covers/release-cover-v1.12.0.png
Error: Could not load the "sharp" module using the win32-x64 runtime
Possible solutions:
- Ensure optional dependencies can be installed:
npm install --include=optional sharp
- Ensure your package manager supports multi-platform installation:
See https://sharp.pixelplumbing.com/install#cross-platform
- Add platform-specific dependencies:
npm install --os=win32 --cpu=x64 sharp
- Consult the installation documentation:
See https://sharp.pixelplumbing.com/install
at Object.<anonymous> (PATH\node_modules\sharp\lib\sharp.js:113:9)
at Module._compile (node:internal/modules/cjs/loader:1730:14)
at Object..js (node:internal/modules/cjs/loader:1895:10)
at Module.load (node:internal/modules/cjs/loader:1465:32)
at Function._load (node:internal/modules/cjs/loader:1282:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:235:24)
at Module.<anonymous> (node:internal/modules/cjs/loader:1487:12)
at mod.require (PATH\node_modules\next\dist\server\require-hook.js:65:28)
at require (node:internal/modules/helpers:135:16)
```
This happened without any change to the system or the build. It may be faulty environment detection, and it occurs across all branches.
Deploying this and trying it out.
Suddenly all Vercel builds experienced exceptions and connection terminations.
On 2025-05-22 around 8PM CET, Vercel servers started to log errors on tRPC calls.
This fix waits 1 extra event loop tick. Shall work around the issue until a proper fix is found.
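The workaround can be sketched as a one-tick deferral; the helper name is hypothetical:

```typescript
// Defer by one macrotask ("one extra event loop tick") before proceeding,
// sidestepping the observed connection-teardown race on tRPC calls.
function nextTick(): Promise<void> {
  return new Promise<void>((resolve) => setTimeout(resolve, 0));
}

// usage: await nextTick(); then continue handling the tRPC response
```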
A new setting to export all the settings in localstorage and IndexedDB into
single 'flash' files for Big-AGI to reload.
This allows quickly and easily migrating a full installation, including images,
from an open v2-dev installation to another.
This likely won't work across other branches, but it's meant to be forward-compatible.
Introducing the Events module with per-Domain extensibility.
Depends on @Logger.
Requires eventemitter3.
A pleasure to extend; we start using it both for Subsystems and AGI events.
When this property is set, a re-layout (force reflow) is performed by the browser even with a simple hovering of the separator.
Since we may have very large walls of text/images, we need to minimize relayouts, so for now, we set a min size on the contained scrollable container instead of preventing the resize.
See also this upstream issue: https://github.com/bvaughn/react-resizable-panels/issues/337
To turn it on, use either:
- server side: aix.router.ts: DEBUG_LOG_PROFILER_ON_SERVER=true
- client side: DEV BUILD + "debug mode" + DEBUG_LOG_PROFILER_ON_CLIENT=true to show on the console
This requires EITHER:
- on the server-side, in aix.router.ts, set DEBUG_LOG_PROFILER=true;
- on the client side, and only for Development builds, this is automatic in "Debug Mode"
What's left:
- figure out how to turn this on/off
- figure out which models can or cannot use it, without having too much to maintain
- figure out the runtime implementation
- parse the annotation ranges
- render the original icons
- figure out how to escape the Vertex rewriting of URLs
Somehow the developer instruction is not enabled for Gemma3-IT, and we got this message:
"Gemini: Bad Request - Developer instruction is not enabled for models/gemma-3-27b-it"
So we convert any System message to a User message instead (see the hotfix)
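A minimal sketch of the conversion, assuming a simplified message shape (real AIX types differ):

```typescript
// Sketch of the hotfix: models without developer-instruction support (e.g.
// Gemma3-IT) get any System message demoted to a User message.
interface Msg { role: 'system' | 'user' | 'model'; text: string }

function demoteSystemToUser(messages: Msg[]): Msg[] {
  return messages.map((m) => (m.role === 'system' ? { ...m, role: 'user' as const } : m));
}
```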
Note: the ModelAux/reasoning block is only sent if there's a signature or there is redacted data.
We could even further reduce its sending to only Anthropic llms in CGR.
Note: if there aren't enough tokens in the chat, the Anthropic API will throw. Nothing to do there for now.
We will wait for Anthropic to step in and fix the issue before fixing something on our end that's clearly not our issue.
Verified on: Azure, Groq, LMStudio, LocalAI, Mistral, OpenAI, TogetherAI
FC not supported on: Deepseek, Perplexity
FC poorly supported on: OpenRouter (basically only in OpenAI it works)
description: Sync Anthropic API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for Anthropic: message wire types `src/modules/aix/server/dispatch/wiretypes/anthropic.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/anthropic.messageCreate.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/anthropic.parser.ts`.
IMPORTANT: we only support the Messages API (message create). We do NOT support other APIs such as the older Completions API.
We support Anthropic caching natively, and want to make sure tools and state (crafting the history) are also done well.
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
- Recent news and announcements: Web Search for "anthropic api changelog" or "new claude api" or "new claude api pricing"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully and look if there are any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API whether it's in the final parsing and reassembly of the streaming message, or the protocol changed, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.
description: Sync Google Gemini API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for Google Gemini: message wire types `src/modules/aix/server/dispatch/wiretypes/gemini.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/gemini.generateContent.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/gemini.parser.ts`.
IMPORTANT: we only support the generateContent API, not other Gemini APIs such as embeddings, etc.
Caching is only supported when implicit, we do not explicitly manage Gemini Caches. Same for file uploads and other systems.
Image generation happens through models, i.e. 'Gemini 2.5 Flash - Nano Banana' generates images using AIX from generateContent (chat input).
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources:**
- Docs API 1/2: https://ai.google.dev/api/generate-content
- Docs API 2/2: https://ai.google.dev/api/caching#Content
- Google AI JavaScript SDK: https://github.com/googleapis/js-genai (check latest commits, README, type definitions)
Recent news and announcements: Web Search for "gemini api changelog" or "new gemini api updates" or "new gemini api pricing"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully and look if there are any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API whether it's in the final parsing and reassembly of the streaming message, or the protocol changed, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.
description: Sync OpenAI API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for OpenAI: message wire types `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/openai.chatCompletions.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.parser.ts`.
IMPORTANT: we prioritize the new Responses API, while Chat Completions is still supported but legacy.
We do NOT support other APIs such as Realtime (incl. websockets), etc.
We also do not support Agentic APIs (Agent SDK, AgentKit, ChatKit, Assistants API etc), as we perform similar functionality in AIX (server or client side).
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources:**
- Responses API (AIX prioritizes it): https://platform.openai.com/docs/api-reference/responses/create
Recent news and announcements: Web Search for "openai api changelog" or "openai new models" or "openai new prices"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully and look if there are any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API whether it's in the final parsing and reassembly of the streaming message, or the protocol changed, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.
GOAL: Ensure complete support for OpenRouter's API including advanced features like reasoning/thinking tokens, tool use, search integration, and multi-modal capabilities. OpenRouter is OpenAI-compatible but has important extensions and differences.
Use Task tool with subagent_type=Explore and thoroughness="very thorough" to discover:
1. Map API structure - all endpoints, parameters, capabilities from https://openrouter.ai/docs
2. **Advanced features** - How to use: reasoning/thinking tokens (o1, DeepSeek R1), tool use/function calling, search integration, multi-modal (vision/audio)
3. Changelog location - How does OpenRouter communicate API updates and breaking changes?
4. Model metadata - What capabilities are exposed in the models list API? How to detect feature support?
- Web Search for "xai grok api changelog 2026" or "xai responses api new features"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully for discrepancies between our implementation and the current API docs:
1. **Request fields**: Compare `XAIWire_API_Responses.Request_schema` against current docs - any new, changed, or deprecated parameters?
2. **Tool definitions**: Compare `XAIWire_Responses_Tools` - any new parameters on web_search/x_search/code_interpreter? Any new hosted tool types?
3. **Input/Output item types**: Any xAI-specific output items not handled by the shared OpenAI parser (e.g., x_search_call, web_search_call, code_interpreter_call)?
4. **Streaming events**: Any xAI-specific SSE event types beyond what the OpenAI Responses parser handles?
5. **Response shape**: Usage reporting differences, new fields in the response object?
6. **Adapter logic**: Message role mapping, content type handling, system message approach - still correct?
7. **Include options**: Any new values for the `include` array?
8. **Reasoning config**: Which models support it and with what values?
Prioritize breaking changes and new capabilities that would improve the user experience.
When making changes, add comments with date: `// [xAI, 2026-MM-DD]: explanation`
**Self-update this skill**: After completing the sync, if your research reveals that assumptions in THIS skill file (`.claude/commands/aix/sync-xai-api.md`) are wrong or outdated - e.g., new APIs we now implement, new tool types added, URLs moved, file paths changed - update this skill file to stay accurate for next time.
description: Review in-flight changes for coherence, completeness, and quality
---
Review the current in-flight changes in the big-agi-private repository (dev branch, continuously rebased ~1800 commits on top of main).
**Step 1: Scope and read**
`git diff --stat` + `git status` for breadth. Then full `git diff` (if empty: `git diff --cached`, then `git diff HEAD~1`).
For every file in the diff, read surrounding context in the actual source file - the diff alone hides bugs in adjacent untouched code.
**Step 2: Reverse-engineer the intent**
From the diff, determine the **what**, **how**, and **why**. Present this concisely so the author can confirm or correct,
but don't stop here, continue to the full review in the same response.
**Step 3: Validate**
Run `tsc --noEmit --pretty` and `npm run lint` (in parallel). Report any errors with the review.
If the diff removes/renames identifiers, grep the codebase for stale references to the OLD names. This catches broken guards, stale imports, and incomplete migrations.
description: Sync LLM parameter options between full model dialog and chat side panel
---
Audit and sync LLM parameter configurations between the two UI editors. Goal: identical `value` fields in option arrays + equivalent onChange logic. Labels/descriptions can differ for UI space.
**Files to Compare:**
1. **Full Model Dialog**: `src/modules/llms/models-modal/LLMParametersEditor.tsx` (main branch)
2. **Chat Side Panel**: `src/apps/chat/components/layout-panel/ChatPanelModelParameters.tsx` (main derived branches only)
description: Update Alibaba model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/alibaba.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
- Search "alibaba model studio latest pricing", "alibaba latest models", "qwen models pricing", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
description: Update DeepSeek model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/deepseek.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
- Model List: https://api-docs.deepseek.com/api/list-models
- Release Notes: https://api-docs.deepseek.com/updates (check for version updates like V3.2-Exp)
**Note:** DeepSeek frequently releases new versions with significant pricing changes. Always check release notes first.
**Fallbacks if blocked:** Search "deepseek api latest pricing", "deepseek latest models", "deepseek models list" or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
Validate that the dynamic (API-fetched) vendor model parsers are up to date and not silently broken.
These vendors do NOT have hardcoded model lists - they fetch models from APIs at runtime. But their parsers, filters, heuristic detection, and capability mapping can break if upstream APIs change. This skill covers all dynamic vendors NOT covered by the other `llms:update-models-{vendor}` skills.
description: Update Gemini model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/gemini/gemini.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.types.ts`, `src/modules/llms/server/llm.server.types.ts`, and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Fallbacks if blocked:** Check Google AI JS SDK at https://github.com/googleapis/js-genai, search "gemini models latest pricing", "gemini latest models", or search GitHub for latest model prices and context windows
**Important:**
- Ignore context windows (auto-determined at runtime) and training cutoffs (not supported)
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review, do NOT remove comments
description: Update Groq model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/groq.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Source:**
- Fetch https://console.groq.com/docs/models.md directly (markdown format, no search needed)
- Pricing: https://groq.com/pricing/
**Do NOT use web search.** The `.md` endpoint provides structured markdown content directly.
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
description: Update Kimi model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/moonshot.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources (fetch directly, no search needed):**
description: Update Mistral model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/mistral.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
- Search "mistral [model-name] latest pricing", "mistral api latest pricing", "mistral latest models", or search GitHub for latest model prices and context windows
description: Update Ollama model definitions with latest featured models
---
Update `src/modules/llms/server/ollama/ollama.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Automated Workflow:**
```bash
# 1. Fetch the HTML (sorted by newest for stable ordering)
```
**Fallbacks if blocked:** Check https://github.com/ollama/ollama, search "ollama featured models", "ollama latest models", or search GitHub for latest model info
**Important:**
- Parser filtering rules:
- Top 30 newest models are always included (regardless of pull count)
- After top 30, only models with 50K+ pulls are included
- Models with 'cloud' capability are automatically excluded
- Models with 'embedding' capability are automatically excluded
- Sort them in the EXACT same order as the source (newest first, for stable ordering)
- Extract tags: 'tools' → hasTools, 'vision' → hasVision, 'embedding' → isEmbeddings (note the 's'), 'thinking' → tags only
- Extract 'b' tags (1.5b, 7b, 32b) to tags field
- Set today's date (YYYYMMDD format) for newly added models only
- Update OLLAMA_LAST_UPDATE constant to today's date
- Do NOT change dates of existing models
- Review the full model list for additions, removals, and changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments and newlines to make diffs easy to review
description: Update OpenAI model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/openai.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Manual hint:** For pricing page, expand all tables before copying content.
- Search "openai models latest pricing", "openai latest models" for third-party aggregators, or search GitHub for latest model prices and context windows
- OpenAI Node SDK (https://github.com/openai/openai-node) has limited model metadata only
- As last resort: Use Chrome DevTools MCP to navigate and extract from official docs
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
description: Update OpenPipe model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/openpipe.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Base Models: https://docs.openpipe.ai/base-models
**Fallbacks if blocked:** Search "openpipe models latest pricing", "openpipe latest models", "openpipe base models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
description: Update Perplexity model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/perplexity.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
**Fallbacks if blocked:** Search "perplexity api latest pricing", "perplexity latest models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
description: Update xAI model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/xai.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
- Search "xai grok latest pricing", "xai latest models", "xai api models", or search GitHub for latest model prices and context windows
- Random sites? https://the-rogue-marketing.github.io/grok-api-latest-llms-pricing-october-2025/ (find a newer version), https://langdb.ai/app/providers/xai/ (browse by model, limited coverage)
- As last resort: Use Chrome DevTools MCP to access docs.x.ai
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
Compare model `parameterSpecs` in definition files against API-validated sweep data.
If `$ARGUMENTS` is provided, verify only that dialect, which includes reading its pair of sweep results and model definitions. Otherwise verify all four, and read the pairs in sequence.
## Files
**Sweep results** (source of truth for select parameters):
By the time you see these files, the repo owner has already updated them via `tools/develop/llm-parameter-sweep/sweep.sh` (very long running, 15 min per vendor).
**Model definitions (source of truth for model definitions for the user and application, including constants, interfaces, supported parameters and sometimes allowed parameter values)**:
org.opencontainers.image.description=Big-AGI Open - Multi-model AI workspace for experts who need to think broader, decide smarter, and build with confidence.
--annotation='index:org.opencontainers.image.description=Big-AGI Open - Multi-model AI workspace for experts who need to think broader, decide smarter, and build with confidence.' \
Guidance to Claude Code when working with code in this repository.
## Architecture Overview
Big-AGI is a Next.js 15 application with a sophisticated modular architecture built for professional AI interactions.
### Development Commands
Dev servers may be already running on ports 3000, 3001, 3002, or 3003 (not always this app - other projects may occupy these ports). Never start or stop dev servers, let the user do it.
```bash
# Validate (~5s, safe while dev server runs, do NOT use `next build` ~45s for same checks)
tsc --noEmit --pretty && npm run lint # Type check (~3.5s) + ESLint (~2s)
eslint src/path/to/file.ts # Lint specific file
# Full build (~60s+, only when suspecting runtime/bundle issues)
npm run build # next build runs compile+lint+types but stops at first type-error file; tsc shows all at once
# Database & External Services
# npm run supabase:local-update-types # Generate TypeScript types
# npm run stripe:listen # Listen for Stripe webhooks
```
### Git/GitHub remotes
The `gh` command is available to interact with GitHub from the terminal, but **NEVER PUSH TO ANY BRANCH**. The user manages all 'write' git operations.
Located at `/src/server/trpc/trpc.router-cloud.ts`
**Key Pattern**: Edge runtime for AI (fast, distributed), Cloud runtime for data ops (centralized, Node.js)
@kb/KB.md
@kb/vision-inlined.md
As a side note, the product tiers (independent, non-VC-funded) are: **Open** (self-host, MIT) · **Free** (big-agi.com) · **Pro** (paid, includes Sync + backup). All tiers use the user's own API keys.
# Instead of `COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next`, we only extract some parts, excluding .next/cache which is build time only:
[](https://big-agi.com)
[](https://github.com/enricoros/big-AGI/pkgs/container/big-agi)
[](https://vercel.com/new/clone?repository-url=https://github.com/enricoros/big-agi)
[](https://github.com/enricoros/big-agi/issues/new?template=ai-triage.yml)
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI&env=OPENAI_API_KEY&envDescription=Backend%20API%20keys%2C%20optional%20and%20may%20be%20overridden%20by%20the%20UI.&envLink=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI%2Fblob%2Fmain%2Fdocs%2Fenvironment-variables.md&project-name=big-AGI)
[//]: # ([](https://stats.uptimerobot.com/59MXcnmjrM))
[//]: # ([](https://x.com/enricoros))
> Note: bigger better features (incl. Beam-2) are being cooked outside of `main`.
<br/>
[//]: # (big-AGI is an open book; see the **[ready-to-ship and future ideas](https://github.com/users/enricoros/projects/4/views/2)** in our open roadmap)
# Big-AGI Open 🧠
This is the open-source foundation of **Big-AGI**, ___the multi-model AI workspace for experts___.
Big-AGI is the multi-model AI workspace for experts: Engineers architecting systems. Founders making decisions. Researchers validating hypotheses.
If you need to think broader, decide faster, and build with confidence, then you need Big-AGI.
It comes packed with **world-class features** like Beam, and is praised for its **best-in-class AI chat UX**.
**As an independent, non-VC-funded project, Pro subscriptions at $10.99/mo fund development for everyone, including the free and open-source tiers.**
**Intelligence**: with [Beam & Merge](https://big-agi.com/beam) for multi-model de-hallucination, native search, and bleeding-edge AI models like Opus 4.6, Nano Banana Pro, Kimi K2.5 or GPT 5.4 -
**Control**: with personas, data ownership, requests inspection, unlimited usage with API keys, and *no vendor lock-in* -
and **Speed**: with a local-first, over-powered, zero-latency, madly optimized web app.
Choose Big-AGI because you don't need another clone or slop - you need an AI tool that scales with you.
### Show me a screenshot:
Sure - here is a real-world screengrab as I'm writing this, while running a Beam to extract SVG from an image with Sonnet 4.5, Opus 4.1, GPT 5.1, Gemini 2.5 Pro, Nano Banana, etc.
<img alt="Real-world screen capture as of Nov 15 2025, 2am" src="https://github.com/user-attachments/assets/853f4160-27cb-4ac9-826b-402f1e63d4af" />
\*: **Configuration requires your API keys**. *Big-AGI does not charge for model usage or limit your access*.
**Why Pro?** As an independent project, Pro subscriptions fund all development. Early subscribers shape the roadmap directly.
[](https://big-agi.com)
**Self-host and developers** (full control)
- Develop locally or self-host with Docker on your own infrastructure – [guide](docs/installation.md)
- Or fork & run on Vercel:
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI&env=OPENAI_API_KEY&envDescription=Backend%20API%20keys%2C%20optional%20and%20may%20be%20overridden%20by%20the%20UI.&envLink=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI%2Fblob%2Fmain%2Fdocs%2Fenvironment-variables.md&project-name=big-AGI)
[//]: # (**For the latest Big-AGI:**)
[//]: # (- [**Big-AGI Open**](https://github.com/enricoros/big-AGI/tree/main) - Open Source, latest models and features (main branch))
[//]: # (- [**Big-AGI Pro**](https://big-agi.com) - Hosted with Cloud Sync)
---
## Our Philosophy
We're an independent, non-VC-funded project with a simple belief: **AI should elevate you, not replace you**.
This is why we built Big-AGI to be **local-first**, madly optimized to 0-latency, launched multi-model first to
defeat hallucinations, designed Beam around the **humans in the loop**, re-wrote frameworks and abstractions
so you **are not vendor locked-in**, and obsessed over a powerful UI that works, just works.
NOTE: this is a powerful tool - if you need a toy UI or clone, this ain't it.
---
## Release Notes
👉 **[See the Live Release Notes](https://big-agi.com/changes)**
**For teams and institutions:** Need shared prompts, SSO, or managed deployments? Reach out at enrico@big-agi.com. We're actively collecting requirements from research groups and IT departments.
<details>
<summary>5,000 Commits Milestone</summary>
Hit 5k commits last week. That's a lot of code.
Recent work has been intense:
- Chain of thought reasoning across multiple LLMs: **OpenAI o3** and o1, **DeepSeek R1**, **Gemini 2.0 Flash Thinking**, and more
- Beam is real - ~35% of our users run it daily to compare models
- New AIX framework lets us scale features we couldn't before
- UI is faster than ever. Like, terminal-fast
The new architecture is solid and the speed improvements are real.
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more [#517](https://github.com/enricoros/big-AGI/issues/517), [507](https://github.com/enricoros/big-AGI/pull/507)
</details>
<details>
<summary>3,000 Commits Milestone · April 7, 2024</summary>
- 🥇 Today we <b>celebrate commit 3000</b> in just over one year, and going stronger 🚀
- 📢️ Thanks everyone for your support and words of love for Big-AGI, we are committed to creating the best AI experiences for everyone.
</details>
<details>
<summary>What's New in 1.15.0 · April 1, 2024 · Beam</summary>
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
- 1.15.1: Support for Gemini Pro 1.5 and OpenAI Turbo models
- Beast release, over 430 commits, 10,000+ lines changed: [release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.15.0), and changes [v1.14.1...v1.15.0](https://github.com/enricoros/big-AGI/compare/v1.14.1...v1.15.0)
</details>
<details>
<summary>What's New in 1.14.1 · March 7, 2024 · Modelmorphic</summary>
- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
- [ ] 📢️ [**Chat with us** on Discord](https://discord.gg/MkH4qj2Jp9)
- [ ] ⭐ **Give us a star** on GitHub 👆
- [ ] 🚀 **Do you like code**? You'll love this gem of a project! [_Pick up a task!_](https://github.com/users/enricoros/projects/4/views/4) - _easy_ to _pro_
- [ ] 💡 Got a feature suggestion? [_Add your roadmap ideas_](https://github.com/enricoros/big-agi/issues/new?&template=roadmap-request.md)
- [ ] ✨ [Deploy](docs/installation.md) your [fork](docs/customizations.md) for your friends and family, or [customize it for work](docs/customizations.md)
⭐ [Star the repo](https://github.com/enricoros/big-agi) if Big-AGI is useful to you
### Contribute
**🤖 AI-Powered Issue Assistance**
When you open an issue, our custom AI triage system (powered by [Claude Code](https://github.com/anthropics/claude-code-action) with Big-AGI architecture documentation) analyzes it, searches the codebase, and provides solutions - typically within 30 minutes. We've trained the system on our modules and subsystems so it handles most issues effectively. Your feedback drives development!
[](https://github.com/enricoros/big-agi/issues/new?template=ai-triage.yml)
[](https://github.com/users/enricoros/projects/4/views/4)
| | FC options structure | JSON Schema for input | Object with properties | JSON Schema for parameters |
| | FC Force invocation | Via `tool_choice` parameter | Via `toolConfig` parameter | Via `tool_choice` parameter |
| | FC Model invocation | Model generates a `tool_use` block with predicted parameters | Generates a `functionCall` part with predicted parameters | Generates a message.`tool_calls` item with predicted arguments |
| | FC Result injection | Client appends a `user` message with a `tool_result` content block | Client appends a `function` message with `functionResponse` part | Client sends a new `tool` message with `tool_call_id` and `content` |
| | Built-in Code execution | No | **Yes** | No |
| | Tool use with vision | Yes | Yes | Yes |
| **Generation Configuration** |
| | temperature | Yes | Yes | Yes |
| | max_tokens | Yes | Yes | Yes |
| | stop_sequences | Yes | Yes | Yes |
| | top_k | Yes | Yes | **No** |
| | top_p | Yes | Yes | Yes |
| | seed | No | No | **Yes** |
| | Multiple candidates | No | No | Yes (with 'n' parameter, breaks streaming?) |
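As a rough sketch, the shared generation parameters in the table map to different wire names per dialect (an illustrative mapping following the table above; `null` marks unsupported parameters):

```typescript
// Per-dialect field names for the common generation parameters; null = unsupported.
const GEN_PARAM_NAMES: Record<string, { anthropic: string | null; gemini: string | null; openai: string | null }> = {
  temperature:   { anthropic: 'temperature',    gemini: 'temperature',     openai: 'temperature' },
  maxTokens:     { anthropic: 'max_tokens',     gemini: 'maxOutputTokens', openai: 'max_tokens' },
  stopSequences: { anthropic: 'stop_sequences', gemini: 'stopSequences',   openai: 'stop' },
  topK:          { anthropic: 'top_k',          gemini: 'topK',            openai: null },
  topP:          { anthropic: 'top_p',          gemini: 'topP',            openai: 'top_p' },
  seed:          { anthropic: null,             gemini: null,              openai: 'seed' },
};
```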
- **OpenAI-compatible endpoints**: Any provider with an OpenAI-compatible API works out of the box - models, pricing, and capabilities are auto-detected
- work in progress: [big-AGI open roadmap](https://github.com/users/enricoros/projects/4/views/2), [help here](https://github.com/users/enricoros/projects/4/views/4)
### What's New in 2 · Oct 31, 2025 · Open
- **Big-AGI Open** is ready and more productive and faster than ever, with:
- **Beam 2**: multi-modal, program-based, follow-ups, save presets
- Top-notch AI models support including **agentic models** and **reasoning models**
- **Image Generation** and editing with Nano Banana and gpt-image-1
- **Web Search** with citations for supported models
- **UI** & Mobile UI overhaul with peeking and side panels
- And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized
### What's New in 1.16.1...1.16.13 · (patch releases)
- 1.16.13: Docker fix (#840)
- 1.16.12: Dockerfile update (#840)
- 1.16.11: v1 final release, documentation updates
- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
- 1.16.6: Groq support for Llama 3.1 models
- 1.16.5: GPT-4o Mini support
- 1.16.4: 8192 tokens support for Claude 3.5 Sonnet
- 1.16.3: Anthropic Claude 3.5 Sonnet model support
- 1.16.2: Improve web downloads, as text, markdown, or HTML
- 1.16.2: Proper support for Gemini models
- 1.16.2: Added the latest Mistral model
- 1.16.2: Tokenizer support for gpt-4o
### What's New in 1.15.0 · April 1, 2024 · Beam
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
- Message **Starring ⭐**: star important messages within chats, to attach them later. [#476](https://github.com/enricoros/big-AGI/issues/476)
- Enhanced the default Persona
- Fixes to Gemini models and SVGs, improvements to UI and icons
- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
This enables Azure OpenAI for all users without requiring individual API keys. For more details, see [environment-variables.md](environment-variables.md).
## Azure OpenAI API Versions
Azure OpenAI supports both traditional deployment-based API and the next-generation v1 API:
### Next-Generation v1 API (Default)
- **Enabled by default** for GPT-5-like models (GPT-5, GPT-6, o3, o4, etc.)
- Uses direct `/openai/v1/responses` endpoint without deployment IDs
- Optimized for advanced reasoning models and new features
- Can be disabled by setting `AZURE_OPENAI_DISABLE_V1=true`
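The two URL shapes can be sketched as below (endpoint and deployment names are placeholders, and the actual routing in the app may differ):

```typescript
// Builds an Azure OpenAI request URL for either API style. Illustrative only.
function azureChatUrl(endpoint: string, deployment: string, useV1: boolean, apiVersion = '2025-04-01-preview'): string {
  const host = endpoint.replace(/\/+$/, ''); // strip trailing slashes
  return useV1
    ? `${host}/openai/v1/responses` // next-gen v1: no deployment ID in the path
    : `${host}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}
```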
- Fill in the required fields (including the subscription ID) and click on **Apply**
Once your application is accepted, you can create OpenAI resources on Azure.
### Step 2: Create Azure OpenAI Resource
For more information, see [Azure: Create and deploy OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)

- Select the subscription
- Select a resource group or create a new one
- Select the region. **Important**: The region determines which models are available.
> Popular regions like **East US**, **West Europe**, and **Australia East** typically have the best model availability. For the latest model availability by region, see [Azure OpenAI Model Availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)
- Name the service (e.g., `your-openai-api-1234`)
- Select a pricing tier (e.g., `S0` for standard)
- Select: "All networks, including the internet, can access this resource."
- Click on **Review + create** and then **Create**
After creating the resource, you can access the API Keys and Endpoints:
1. Click on **Go to resource** (or navigate to your Azure OpenAI resource)
2. In the left sidebar, under **Resource Management**, click on **Keys and Endpoint**
a specialized interface that includes a custom variant of the OpenAI API for a smooth integration process.
_Last updated on Dec 7, 2023_
### Components
The implementation of local LLMs involves the following components:
* **text-generation-webui**: A Python application with a Gradio web UI for operating Large Language Models.
* **Local Large Language Models "LLMs"**: Use large language models on your personal computer with consumer-grade GPUs or CPUs.
* **big-AGI**: An LLM UI that offers features such as Personas, OCR, Voice Support, Code Execution, AGI functions, and more.
## Instructions
This guide assumes that **big-AGI** is already installed on your system. Note that the text-generation-webui IP address must be accessible from the server running **big-AGI**.
At time of writing, big-AGI has only 2 operations that run on Node.js Functions:
browsing (fetching web pages) and sharing. They both can exceed 10 seconds, especially
when fetching large pages or waiting for website requests to complete.
We provide `vercel_PRODUCTION.json` to raise the duration to 25 seconds (from a default of 10). To use it, rename it to `vercel.json` before the build.
From the Vercel Project > Settings > General > Build & Development Settings,
you can for instance set the build command to:
```bash
mv vercel_PRODUCTION.json vercel.json;next build
```
### Change the Personas (v1.x only)
Edit the `src/data.ts` file to customize personas. This file houses the default personas. You can add, remove, or modify these to meet your project's needs.
Adapt the UI to match your project's aesthetic, incorporate new features, or exclude unnecessary ones.
- [ ] Modify `src/common/app.release.ts` to alter the application's name
- [ ] Update `src/common/app.nav.ts` to revise the navigation bar
### Add a Message of the Day
You can display a temporary announcement banner at the top of the app using the `NEXT_PUBLIC_MOTD` environment variable.
- Set this variable in your deployment environment
- The message supports template variables:
  - `{{app_build_hash}}`: Current git commit hash
  - `{{app_build_pkgver}}`: Package version
  - `{{app_build_time}}`: Build timestamp as date
  - `{{app_deployment_type}}`: Deployment type (local, docker, vercel, etc.)
- Users can dismiss the message (until next page refresh)
- Use it for version announcements, maintenance notices, or feature highlights
Example: `NEXT_PUBLIC_MOTD=🚀 New features available in {{app_build_pkgver}}! Try the improved Beam.`
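A minimal sketch of how the template substitution could work (the variable names come from the list above; the helper itself is hypothetical):

```typescript
// Replaces {{name}} placeholders with provided values; unknown placeholders are left intact.
function renderMotd(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) => vars[key] ?? match);
}
```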
## Testing & Deployment
Test your application thoroughly using local development (refer to README.md for local build instructions). Deploy using your preferred hosting service. big-AGI supports deployment on platforms like Vercel, Docker, or any Node.js-compatible service, especially those supporting NextJS's "Edge Runtime."
- [deploy-cloudflare.md](deploy-cloudflare.md): for Cloudflare Pages deployment (limited support)
- [deploy-docker.md](deploy-docker.md): for Docker deployment instructions and examples
- [deploy-k8s.md](deploy-k8s.md): for Kubernetes deployment instructions and examples
## Debugging
The application includes a client-side logging system. You can view recent logs via the UI (Settings > Tools > Logs).
For deeper debugging during development:
1. **Debug Page**: Access the `/info/debug` page for an overview of the application's environment, configuration, API status, and environment variables available to the client.
2. **Conditional Breakpoints**: To automatically pause execution in your browser's developer tools when critical errors (`error`, `critical`, `DEV` levels) are logged to the console, set the following environment variable in your local `.env.local` file and restart your development server:
```bash
NEXT_PUBLIC_DEBUG_BREAKS=true
```
This allows you to inspect the application state at the exact moment an important error occurs. This feature only works in development mode (`npm run dev`) and requires the environment variable to be explicitly set to `true`.
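The gating described above boils down to a check like this (a simplified, hypothetical sketch of the logger's decision, not the actual implementation):

```typescript
// True when a logged level should trigger a `debugger;` pause, per the rules above:
// development mode, NEXT_PUBLIC_DEBUG_BREAKS explicitly 'true', and a critical level.
function shouldBreakOnLog(level: string, env: Record<string, string | undefined>): boolean {
  return env.NODE_ENV === 'development'
    && env.NEXT_PUBLIC_DEBUG_BREAKS === 'true'
    && ['error', 'critical', 'DEV'].includes(level);
}
```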
- What: page views, page leave events, user interactions, and deployment context
PostHog provides comprehensive product analytics with privacy controls. It helps understand how users interact with big-AGI's features, identify opportunities for improvement, and optimize the user experience.
To enable PostHog, set the `NEXT_PUBLIC_POSTHOG_KEY` environment variable at build time. PostHog is configured with tracking optimization and privacy in mind:
- Uses a proxy endpoint (`/a/ph`) to avoid ad blockers
- Respects user opt-out preferences via local storage
- Tracks only essential information without PII
- Adds deployment context for better segmentation
The implementation follows PostHog's best practices for Next.js applications and includes manual page view tracking for proper single-page application support.
### Vercel Analytics
- Why: understand coarse traction, and identify deployment issues - all without tracking individual users
- What: top pages, top referrers, country of origin, operating system, browser, and page speed metrics
Vercel Analytics and Speed Insights are local API endpoints deployed to your domain, so everything stays within your
domain. Furthermore, the Vercel Analytics service is privacy-friendly, and does not track individual users.
This service is available to system administrators and is automatically enabled when deploying to Vercel.
The code that activates Vercel Analytics is located in the `pages/_app.tsx` file:
| Your **Source** builds of big-AGI | None | **Google Analytics**: set environment variable at build time · **PostHog**: set environment variable at build time · **Vercel**: enable Vercel Analytics from the dashboard |
| Your **Docker** builds of big-AGI | None | (**Vercel**: n/a) · **Google Analytics**: set environment variable at `docker build` time · **PostHog**: set environment variable at `docker build` time. |
| [get.big-agi.com](https://get.big-agi.com) (**Big-AGI 1.x Legacy**) | Vercel + Google + PostHog | The main website ([privacy policy](https://big-agi.com/privacy)) hosted for free for anyone. |
| [prebuilt Docker packages](https://github.com/enricoros/big-AGI/pkgs/container/big-agi) (**Big-AGI 1.x**, 'latest' tag) | Google Analytics | **Vercel**: n/a · **Google Analytics**: set to the big-agi.com Google Analytics for analytics and improvements · **PostHog**: n/a |
Note: this information is updated as of March 3, 2025 and can change at any time.
| `GEMINI_API_KEY` | The API key for Google AI's Gemini | Optional |
| `GROQ_API_KEY` | The API key for Groq Cloud | Optional |
| `LOCALAI_API_HOST` | Sets the URL of the LocalAI server, or defaults to http://127.0.0.1:8080 | Optional |
| `LOCALAI_API_KEY`| The (Optional) API key for LocalAI | Optional |
| `MISTRAL_API_KEY`| The API key for Mistral | Optional |
| `OLLAMA_API_HOST` | Changes the backend host for the Ollama vendor. See [config-local-ollama.md](config-local-ollama.md) | Optional |
| `OPENROUTER_API_KEY` | The API key for OpenRouter | Optional |
| `PERPLEXITY_API_KEY` | The API key for Perplexity | Optional |
| `TOGETHERAI_API_KEY` | The API key for Together AI | Optional |
| `OPENAI_API_KEY` | API key for OpenAI | Recommended |
| `OPENAI_API_HOST` | Changes the backend host for the OpenAI vendor, to enable platforms such as Helicone and CloudFlare AI Gateway | Optional |
| `OPENAI_API_ORG_ID` | Sets the "OpenAI-Organization" header field to support organization users | Optional |
| `ALIBABA_API_HOST` | The Alibaba AI OpenAI-compatible endpoint | Optional |
| `ALIBABA_API_KEY` | The API key for Alibaba AI | Optional |
| `AZURE_OPENAI_API_ENDPOINT` | Azure OpenAI endpoint - host only, without the path | Optional, but if set `AZURE_OPENAI_API_KEY` must also be set |
| `AZURE_OPENAI_API_KEY`| Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `AZURE_OPENAI_DISABLE_V1` | Disables the next-generation v1 API for GPT-5-like models (set to 'true' to disable) | Optional, defaults to enabled |
| `AZURE_OPENAI_API_VERSION` | API version for traditional deployment-based endpoints | Optional, defaults to '2025-04-01-preview' |
| `AZURE_DEPLOYMENTS_API_VERSION` | API version for the deployments listing endpoint | Optional, defaults to '2023-03-15-preview' |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, for proxies or custom endpoints | Optional |
| `BEDROCK_BEARER_TOKEN` | Bedrock long-term API key (`ABSK...`). Takes priority over IAM credentials. Short-term keys only work for runtime, not model listing | Optional |
| `BEDROCK_ACCESS_KEY_ID`| AWS IAM Access Key ID for Bedrock (Claude models via AWS) | Optional, but if set `BEDROCK_SECRET_ACCESS_KEY` must also be set |
| `BEDROCK_SECRET_ACCESS_KEY` | AWS IAM Secret Access Key for Bedrock | Optional, but if set `BEDROCK_ACCESS_KEY_ID` must also be set |
| `NEXT_PUBLIC_DEBUG_BREAKS` | (optional, development) When set to 'true', enables automatic debugger breaks on DEV/error/critical logs in development builds |
| `NEXT_PUBLIC_MOTD` | Message of the Day - displays a dismissible banner at the top of the app (see [customizations](customizations.md) for the template variables). Example: 🔔 Welcome to our deployment! Version {{app_build_pkgver}} built on {{app_build_time}}. |
| `NEXT_PUBLIC_GA4_MEASUREMENT_ID` | (optional) The measurement ID for Google Analytics 4. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID` | (optional) Google OAuth Client ID for Drive Picker. Can reuse `AUTH_GOOGLE_ID`. See [Google Drive](config-feature-google-drive.md) |
| `NEXT_PUBLIC_PLANTUML_SERVER_URL` | The URL of the PlantUML server, used for rendering UML diagrams. Allows using custom local servers. |
| `NEXT_PUBLIC_POSTHOG_KEY` | (optional) Key for PostHog analytics. (see [deploy-analytics](deploy-analytics.md)) |
> Important: these variables must be set at build time, which is required by Next.js to pass them to the frontend.
> This is in contrast to the backend variables, which can be set when starting the local server/container.
> 🚨 This file is not meant for publication, and it's just been created as a handbook with tips
> and tricks to make Big-AGI more efficient and productive. 🚨
Welcome to the advanced tips and tricks guide for Big-AGI. This document will help you make the most of the platform's existing features.
---
## Hidden Gems
- **Shift + Double-Click** on a chat message to **edit** it.
- **Shift + Trash Icon** to **delete** chats and messages without confirmation.
- also applies elsewhere: delete Attachments, etc.
- **Shift + Click** on **New Chat** to create an incognito chat.
- Drag a big-AGI saved chat into Big-AGI to load (or attach) it.
## Not-so-obvious Shortcuts
- When sending a message:
  - **Enter** inserts a newline.
  - **Shift + Enter** sends the message.
  - **Ctrl + Enter** **Beams** the message.
  - **Alt/Option + Enter** sends the message without generating an answer.
- When editing a message:
- **Ctrl + Enter** to **Save** the changes.
- **Shift + Ctrl + Enter** to **Save & Regenerate**.
- Scroll between messages:
- **Ctrl + Up/Down** to scroll between **messages** and/or **Beams**.
## Worth the Effort:
- [LiveFile](help-feature-livefile.md) works on **Chrome**: Pair and synchronize your documents and code blocks with files on your local system: refresh, save, update them.
## Best User Hacks:
-
---
Note: this document is just at the beginning. It's here so we can capture tips and tricks as we discover them.
Big-AGI is a **client-first** web application, which means it prioritizes speed and data ownership compared to cloud apps.
Your *API keys*, *chat history*, and *settings* live in your
browser's [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage), not
on cloud servers.
You can use Big-AGI in two ways:
1. Run it yourself (open-source)
2. Use big-agi.com (hosted service)
This guide explains how the open-source version handles your data. You can verify everything in [the source code](https://github.com/enricoros/big-agi).
## Client-Side Storage
Within Big-AGI, almost all chat and key data is handled client-side in your browser using two
standard browser storage mechanisms:
- **Local Storage**: API keys, settings, and configurations ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage))
- **IndexedDB**: Chat history and larger files ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API))
The Big-AGI backend mainly passes requests to AI services (OpenAI, Anthropic, etc.). It doesn't store your data, except for the chat-sharing function if used.
You can see your data in your browser's local storage and IndexedDB - try it yourself:
1. In Chrome: Open DevTools (press F12 on Windows, ⌘ + ⌥ + I on Mac)
2. Click 'Application' > 'Local Storage'
3. See your settings and API keys

### Sync for Authenticated Users
Users with accounts on big-agi.com who opt into Sync (a Pro feature) have their entity data - such as conversations and personas - replicated to the server for multi-device access.
Server-side data is isolated per-user using Row Level Security (RLS), ensuring that no other user can access your synced data.
Sync is entirely optional; without it, all data remains local to your browser.
### What This Means For You
Storing data in your browser means:
- Your data stays on **one device/browser only**
- Clearing browser data **erases your chats** - make backups
- Anyone using your browser can see your chats and keys
- Running your own server needs technical skills
### Local Device Identifier
Big-AGI generates a _device identifier_ that combines timestamp and random components, stored only on your device. This identifier:
- Is used only for the **optional sync functionality** between your devices
- Helps maintain data consistency when using Big-AGI across multiple devices
- Remains completely local unless you explicitly enable sync
- Is not used for tracking, analytics, or telemetry
- Can be deleted anytime by clearing local storage
- Is fully transparent - see the implementation in `src/common/stores/store-client.ts`
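The timestamp-plus-random scheme described above can be sketched as follows. This is a hypothetical illustration of the idea, not the actual code in `src/common/stores/store-client.ts` (the key name and format are assumptions):

```typescript
// Hypothetical device identifier: millisecond timestamp plus random suffix,
// both in base36. The real implementation lives in store-client.ts.
function generateDeviceId(): string {
  const timePart = Date.now().toString(36);                 // millisecond timestamp, base36
  const randPart = Math.random().toString(36).slice(2, 10); // up to 8 random base36 chars
  return `${timePart}-${randPart}`;
}

// Generated once per device, then reused; stays in local storage only.
function getOrCreateDeviceId(storage: Pick<Storage, 'getItem' | 'setItem'>): string {
  const existing = storage.getItem('device-id');
  if (existing) return existing;
  const id = generateDeviceId();
  storage.setItem('device-id', id);
  return id;
}
```

Because the identifier never includes hardware or account information, clearing local storage genuinely resets it.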
## How Data Flows
AI interactions in Big-AGI (chats, AI titles, text-to-speech, browsing) flow through three components:
1. **Browser** (client/installed App) - Stores your keys & data locally
2. **Backend** (routing server) - Passes requests to AI services
3. **AI Services** - Where the actual AI processing happens
### Self-Deployed Version: Your Infrastructure
You run the server. Your data only leaves when making AI requests.
Your keys and chats remain under your control, pass through your own code, and are sent only to the AI services you configure.
# LiveFile: Synchronize Your Documents with Local Files
## Introduction
**LiveFile** is a powerful feature in big-AGI that allows you to **pair and synchronize
your documents and code blocks** with files on your local system.
This feature enables a **two-way connection between big-AGI and your local files on disk**,
saving you time and effort.
With LiveFile, you can:
- **Pair** documents and code blocks with local files.
- **Monitor** changes in local files and update content in big-AGI.
- **Refresh** chat attachments with the latest content.
- **Save** edits made in big-AGI back to your local files.
- **Store** AI-generated code and content.
---
## Requirements
- **Supported Browsers:**
- **Google Chrome** (desktop)
- **Microsoft Edge** (desktop)
- **Operating Systems:**
- **Desktop platforms only**
- **Note:** Mobile devices (iOS and Android) are **not supported** due to browser limitations.
- **File Types:**
- Designed for **text-based files** (e.g., `.txt`, `.md`, `.js`, `.py`).
- **Performance:**
- Can handle **dozens of files efficiently**.
- **Limitations:**
- **File Size Limit**:
- Supports text files up to **10 MB**.
- **Pairing Persistence:**
- LiveFile connections **do not persist across sessions**.
- After reloading the page, you will need to re-pair your files.
- **Saving Overwrites:**
- Saving changes in big-AGI will **overwrite the entire file**.
- Use external tools for version control or incremental backups.
---
## Enabling LiveFile
LiveFile can be enabled automatically or manually in your Big-AGI workflow.
### Automatic Pairing
When you:
- **Attach**, **drop**, or **paste** a file into a chat message,
LiveFile is **automatically enabled** for that attachment. This means you can start
monitoring and reloading changes without any additional setup.
### Manual Pairing
For existing attachments or code blocks that:
- **Do not have LiveFile enabled** (e.g., created on other devices),
- **Are AI-generated code snippets without an associated file**,
You can manually pair them with a local file.
#### Pairing Attachments
1. **Select the Attachment:**
   - Click on the attachment in the chat to view it in the previewer.
2. **Initiate Pairing:**
   - Click on **"Pair File"** (🔗).
   - If you have open LiveFiles, they will be listed for easy selection.
   - Alternatively, you can select a new file from your local system.
3. **Grant Permissions:**
   - When prompted, allow big-AGI to access the file.
#### Pairing Code Blocks
1. **Access Code Block Options:**
   - Click on the code block to reveal the header with options.
2. **Initiate Pairing:**
   - Click the **"Pair File"** button (🔗).
   - Select from your open LiveFiles or choose a new file.
3. **Confirm Pairing:**
   - Grant permission when prompted.
---
## Using LiveFile
### Monitoring Changes
- **Automatic Monitoring:**
- LiveFile watches for changes in your paired local files.
- If the file is modified outside of big-AGI, you'll be shown the changes in the LiveFile bar.
- There is also a **"Replace with File"** option to manually load the latest content and see the changes.
- **Refreshing Content:**
- Click **"Replace with File"** (🔄) to load the latest content from the paired file into big-AGI.
### Saving Edits Back to Paired Files
- **Editing Attachments or Code Blocks:**
- Modify the content directly within big-AGI.
- Attachments: Click on the attachment to open the previewer and click on "Edit" to make changes.
- Code Blocks: Select "Edit" on the chat message to update code blocks.
- **Saving Changes:**
- Click **"Save to File"** (💾) to overwrite the local file with your changes.
- **Note:** This action overwrites the entire file. Ensure this is what you want before proceeding.
---
## Best Practices
- **Monitor External Changes:**
- Refresh content in big-AGI if the local file has been modified outside the application.
- **Use a Version Control System:**
- For critical files, consider using Git or other version control systems to track and monitor changes, authorship, and history.
---
## Troubleshooting
- **LiveFile Options Not Visible:**
- Ensure you are using a **supported desktop browser**.
- Check that you have the latest version of big-AGI.
- **Permission Issues:**
- Confirm that you granted big-AGI permission to access your files.
- Check your browser's settings to ensure file access is allowed.
---
## Technical Details
LiveFile uses the [File System Access API](https://developer.mozilla.org/en-US/docs/Web/API/File_System_Access_API) to
interact with your local files securely. It leverages the [browser-fs-access](https://github.com/GoogleChromeLabs/browser-fs-access) library,
an open-source project by Google Chrome Labs, which provides an easy interface to the File System Access API with fallbacks for broader browser support.
- **Security:**
- Access to files requires explicit user permission.
- **Performance:**
- Designed to handle dozens of files efficiently (tested on hundreds).
- Works with the Big-AGI attachment system to recursively add directories.
- **Browser Support:**
- Fully supported on **Google Chrome** and **Microsoft Edge** desktop versions.
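The limitations documented above (text-based files, 10 MB cap) suggest a preflight check before pairing. The sketch below is hypothetical: the function name and the extension list are illustrative assumptions, not big-AGI's or browser-fs-access's API:

```typescript
// Hypothetical preflight gate mirroring the documented LiveFile limits:
// text-based files only, up to 10 MB. The extension list is illustrative.
const MAX_LIVEFILE_BYTES = 10 * 1024 * 1024;
const TEXT_EXTENSIONS = new Set(['txt', 'md', 'js', 'ts', 'py', 'json', 'css', 'html']);

function canPairLiveFile(fileName: string, fileSizeBytes: number): { ok: boolean; reason?: string } {
  const ext = fileName.split('.').pop()?.toLowerCase() ?? '';
  if (!TEXT_EXTENSIONS.has(ext))
    return { ok: false, reason: `unsupported file type: .${ext}` };
  if (fileSizeBytes > MAX_LIVEFILE_BYTES)
    return { ok: false, reason: 'file exceeds the 10 MB limit' };
  return { ok: true };
}
```

In the browser, the file name and size would come from the `File` handle returned by the File System Access API picker.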
---
## Another Big-AGI First!
You can significantly boost your productivity and streamline your workflow within big-AGI
by understanding how to utilize LiveFile's features fully.
This feature is in beta, as there are a few limitations and improvements still to be made.
Join us in enjoying and enhancing it on [big-AGI.com](https://big-agi.com), get support on
[GitHub](https://github.com/enricoros/big-AGI), and chat with us on [Discord](https://discord.gg/MkH4qj2Jp9).
# Enabling Microphone Access for Speech Recognition
This guide explains how to enable microphone access for speech recognition in various browsers and mobile devices.
Ensuring microphone access is essential for using voice features in applications like big-AGI.
## Desktop Browsers
### Google Chrome (All Platforms, recommended)
1. Open the website (e.g., big-AGI) in Chrome.
2. Click the **lock icon** in the address bar.
3. In the dropdown, find **"Microphone"**.
- Set it to **"Allow"**.
4. If "Microphone" isn't listed:
- Click on **"Site settings"**.
- Find **"Microphone"** in the permissions list.
- Change the setting to **"Allow"**.
5. **Refresh** the page.
### Safari (macOS)
**[Watch the video tutorial: How to enable Speech Recognition in Safari](https://vimeo.com/1010342201)**
If you're seeing a "Speech Recognition permission denied" error, follow these steps:
1. Open **System Settings**.
- Go to **Privacy & Security** > **Speech Recognition**.
- Enable Safari in the list of allowed applications.
- Quit & Open Safari.
2. Click **Safari** in the top menu bar.
- Select **Settings**.
- Go to the **Websites** tab.
- Select **Microphone** from the sidebar.
- Find big-AGI (or localhost for developers) in the list and set it to **Allow**.
- Close the Settings window.
3. **Refresh** the page.
This quick and simple fix should get essential voice input working in big-AGI on your Mac.
### Microsoft Edge (Windows)
1. Open the website in Edge.
2. Click the **lock icon** in the address bar.
3. Click **"Permissions for this site"**.
4. Find **"Microphone"**.
- Set it to **"Allow"**.
5. **Refresh** the page.
### Firefox (All Platforms)
> **Note:** The Speech Recognition API is **not supported** in Firefox. If you're using Firefox, please switch to a supported browser to use speech recognition
> features.
## Mobile Devices
### Android (Chrome)
1. Open the website in Chrome.
2. Tap the **lock icon** in the address bar.
3. Tap **"Permissions"**.
4. Find **"Microphone"**.
- Set it to **"Allow"**.
5. **Refresh** the page.
### iOS (Safari)
1. Open the **Settings** app on your device.
2. Scroll down and tap **"Safari"**.
3. Tap **"Microphone"**.
4. Ensure **"Ask"** or **"Allow"** is selected.
5. Return to Safari and open the website.
6. If prompted, allow microphone access.
7. **Refresh** the page.
### iOS (Chrome)
> **Note:** Chrome on iOS uses Safari's engine due to system limitations. Microphone permissions are managed through iOS settings.
1. Open the **Settings** app.
2. Scroll down and tap **"Chrome"**.
3. Ensure **"Microphone"** is toggled **on**.
4. Open Chrome and navigate to the website.
5. If prompted, allow microphone access.
6. **Refresh** the page.
## Troubleshooting
If you're still experiencing issues after enabling microphone access:
**Check System Permissions (macOS):**
- Open **System Settings**.
- Go to **"Privacy & Security"**.
- Select the **"Privacy"** tab.
- Click **"Microphone"** in the sidebar.
- Ensure your browser (e.g., Chrome, Safari) is checked.
- You may need to unlock the settings by clicking the lock icon at the bottom.
**Check Microphone Access (Windows):**
- Open **Settings**.
- Go to **"Privacy"** > **"Microphone"**.
- Ensure **"Allow apps to access your microphone"** is **on**.
- Scroll down and make sure your browser is allowed.
**Close Other Applications:**
- Close any applications that might be using the microphone.
**Restart the Browser:**
- Close all browser windows and reopen.
**Update Your Browser:**
- Ensure you're using the latest version.
**Check for Browser Extensions:**
- Disable extensions that might block access to the microphone.
For persistent issues, consult your browser's official support resources or contact big-AGI support.
## Technical Details
Big-AGI uses the [Web Speech API (SpeechRecognition)](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition)
to transcribe spoken words into text. This API provides real-time transcription with live previews and works on most supported browsers.
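SpeechRecognition events deliver a mix of interim (still-changing) and final results, which the live preview has to merge. The helper below is an illustrative sketch of that merge logic, not big-AGI's actual code:

```typescript
// Each recognition event carries results flagged as interim or final.
// A live preview shows committed (final) text plus the latest interim guess.
interface RecognitionChunk { transcript: string; isFinal: boolean; }

function buildLivePreview(committed: string, chunks: RecognitionChunk[]): { committed: string; preview: string } {
  let interim = '';
  for (const chunk of chunks) {
    if (chunk.isFinal)
      committed += chunk.transcript;  // final text is appended permanently
    else
      interim += chunk.transcript;    // interim text is shown but may be replaced
  }
  return { committed, preview: committed + interim };
}
```

On the next event, only `committed` is carried forward; the interim portion is recomputed from scratch, which is why the preview can visibly "rewrite itself" as you speak.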
Your big-AGI instance is now accessible at `http://localhost:3000`.
For more detailed instructions on Kubernetes deployment, including updating and troubleshooting, refer to our [Kubernetes Deployment Guide](deploy-k8s.md).
## Enterprise-Grade Installation
Enjoy all the features of big-AGI without the hassle of infrastructure management.
Join our vibrant community of developers, researchers, and AI enthusiasts. Share your projects, get help, and collaborate with others.
| Mode | How to Invoke | Capabilities | Visibility | Best For |
|------|---------------|--------------|------------|----------|
| Chat | Just type and send | **Pre-trained knowledge only** | Only shows final response | Quick answers, general knowledge queries |
| ReAct | Type "/react" before the question | **Web loads, Web searches, Wikipedia, calculations** | Shows step-by-step thought process | Complex, multi-step, or research-based questions |
Example of ReAct in action, taking a question about current events, googling results, opening a page, and summarizing the information:
- **browse**: loads web pages (URLs) and extracts information, using a correctly configured `Tools > Browsing` API
- **search**: searches the web to produce page URLs, using a correctly configured `Tools > Google Search` ([Google Programmable Search Engine](https://programmablesearchengine.google.com/about/)) API
- **wikipedia**: looks up information on Wikipedia pages
- **calculate**: performs mathematical calculations by executing TypeScript code
- warning: (!) unsafe and dangerous, do not use for untrusted code/LLMs
## How to Use ReAct in Big-AGI
1. **Invoking ReAct**: Type "/react" followed by your question in the chat.
2. **What to Expect**:
   - An ephemeral space will show the AI's thought process and actions, showing all the steps taken.
   - The final answer will appear in the main chat.
3. **Available Actions**: Web searches, Wikipedia lookups, calculations, and optionally web browsing.
## Good to know:
- **ReAct operates in isolation** from the main chat history.
- It **will take longer than standard responses** due to multiple steps.
- Web searches and browsing may have privacy implications, and require **tool configuration** in the UI.
- Errors or limitations in accessing external resources may affect results.
- ReAct does not use the [Tool or Function Calling](https://platform.openai.com/docs/guides/function-calling) feature of AI models, but rather uses the old-school approach of parsing and executing actions from the model's text output.
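Since actions are parsed from plain text rather than native tool calls, the core of the loop is a line parser. A minimal sketch, assuming the classic `tool[input]` ReAct action syntax (the exact format big-AGI emits may differ):

```typescript
// Parses a ReAct-style action line such as "Action: search[large language models]".
// The "Action: tool[input]" convention is the classic ReAct format, shown for illustration.
const KNOWN_TOOLS = ['browse', 'search', 'wikipedia', 'calculate'] as const;
type ToolName = typeof KNOWN_TOOLS[number];

function parseActionLine(line: string): { tool: ToolName; input: string } | null {
  const match = line.match(/^Action:\s*(\w+)\[(.*)\]\s*$/);
  if (!match) return null;
  const tool = match[1];
  if (!(KNOWN_TOOLS as readonly string[]).includes(tool)) return null;
  return { tool: tool as ToolName, input: match[2] };
}
```

Lines that don't match (thoughts, observations, unknown tools) return `null` and are treated as reasoning text rather than executable actions.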
- **[AIX-callers-analysis.md](modules/AIX-callers-analysis.md)** - Analysis of AIX entry points, call chains, common and different rendering, error handling, etc.
#### CSF - Client-Side Fetch
- **[CSF.md](systems/client-side-fetch.md)** - Direct browser-to-API communication for LLM requests
### Systems Documentation
#### Core Platform Systems
- **[app-routing.md](systems/app-routing.md)** - Next.js routing, provider stack, and display state hierarchy
- **[LLM-parameters-system.md](systems/LLM-parameters-system.md)** - Language model parameter flow across the system
- **[LLM-vendor-integration.md](modules/LLM-vendor-integration.md)** - Adding new LLM providers
### KB Guidelines
#### Writing Style
- **Direct and factual** - No marketing language
- **Present tense** - "AIX handles streaming" not "AIX will handle"
- **Active voice** - "The system processes" not "Processing is done by"
- **Concrete examples** - Show actual code/config when helpful, briefly
#### Maintenance
- Remove outdated knowledge base information when detected
- Use `messageWasInterruptedAtStart()` for consistent detection
- Only remove messages with no content that were client-aborted
- Consider UI context when choosing removal vs. clearing strategy
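The maintenance rules above reduce to a small decision: only remove messages that are both empty and client-aborted. A hypothetical condensed helper (the real logic lives around `messageWasInterruptedAtStart()` and is richer than this):

```typescript
// Simplified post-abort cleanup decision, per the guidelines above:
// remove only messages with no content that were client-aborted.
type CleanupAction = 'remove' | 'keep';

function cleanupAfterAbort(hasContent: boolean, wasClientAborted: boolean): CleanupAction {
  if (!hasContent && wasClientAborted)
    return 'remove'; // empty and aborted at start: drop the message entirely
  return 'keep';     // any partial content is preserved for the user
}
```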
### Error Handling
- **Fragment-level**: Use `messageEditor.fragmentReplace()` with error fragments
- **Message-level**: Use `messageEditor.removeMessage()` or array removal
- **Status-level**: Update component state for UI feedback
### Placeholder Management
- Initialize with descriptive placeholders using `createPlaceholderVoidFragment()`
- Replace during streaming updates
- Clean up on error to prevent stale content
## Architectural Insights
1. **Layered Error Handling**: Sophistication increases closer to UI
2. **Context Specialization**: Different contexts for different use cases
3. **Streaming vs Non-Streaming**: Conversation functions stream, utilities typically don't
4. **Message vs Fragment Management**: Different strategies for different UI needs
The most sophisticated handling is in **Beam modules** and **Chat Persona** with comprehensive removal logic, while simpler callers rely on upstream error handling.
- 1: Continuation marks: a. sends reason=max-tokens (streaming/non-streaming), b. TBA
- 2: Ollama has not been ported to AIX yet due to the custom APIs.
## 1. System Architecture
The subsystem comprises three main components:
1. **Client (e.g. Next.js Frontend)**
   - Initiates requests
   - Renders AI-generated content in real-time
   - Reconstructs streamed data
2. **Server (e.g. Next.js Backend)**
   - Acts as an intermediary between client and AI providers
   - Handles request preparation, dispatching, and response processing
   - Streams responses back to the client
3. **Upstream AI Providers**
- Generate AI content based on requests
### ChatGenerate workflow:
1. Request Initialization: AIX Client prepares and sends request (systemInstruction, messages=AixWire_Parts[], etc.) to AIX Server
2. Dispatch Preparation: AIX Server prepares for upstream communication
3. AI Provider Interaction: AIX Server communicates with AI Provider (streaming or non-streaming)
4. Data Decoding, Transformation and Transmission: AIX Server sends AixWire_Particles to AIX Client
5. Client-side Processing: the client's ContentReassembler processes AixWire_Particles into a list (typically a single message) of multi-fragment messages (DMessageContentFragment[])
6. Completion: AIX Server sends 'done' control message, AIX Client finalizes data update
7. Error Handling: AIX Server sends specific error messages when necessary
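Step 5 (client-side processing) can be pictured as an accumulator over incoming particles. The particle shape below is a deliberately simplified assumption, not the actual AixWire_Particles schema, which also carries fragments, tool calls, and errors:

```typescript
// Simplified particle stream: text deltas plus a terminal 'done' control message.
type Particle =
  | { t: 'text'; value: string }
  | { t: 'done' };

// Sketch of the reassembly idea behind the ContentReassembler.
function reassembleText(particles: Particle[]): { text: string; completed: boolean } {
  let text = '';
  let completed = false;
  for (const p of particles) {
    if (p.t === 'text') text += p.value;     // accumulate streamed deltas
    else if (p.t === 'done') completed = true; // step 6: finalize on 'done'
  }
  return { text, completed };
}
```

A stream that ends without the `done` particle is how the client detects an interrupted generation.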
## 2. Files and Folders
AIX is organized into the following files and folders:
1. Client-Side (`/client/`):
   - `aix.client.ts`: Main client-side entry point for AIX operations.
   - `aix.client.chatGenerateRequest.ts`: Handles conversion of chat messages to AIX-compatible format (AixWire_Content, AixWire_Parts, etc.).
2. Server-Side (`/server/`):
   - API (`/server/api/`) - Client to Server communication:
     - `aix.router.ts`: Defines the tRPC router for AIX operations.
     - `aix.wiretypes.ts`: Contains Zod schemas for types and calls incoming from the client (AixWire_Parts, AixWire_Content, AixWire_Tooling, AixWire_API, ...), and outgoing (AixWire_Particles).
   - Dispatch (`/server/dispatch/`) - Server to AI Provider communication:
     - `/server/dispatch/chatGenerate/`: Content Generation with chat-style inputs:
       - `./adapters/`: Adapters for creating API requests for different AI protocols (Anthropic, Bedrock, Gemini, OpenAI Chat Completions, OpenAI Responses, xAI Responses).
       - `./parsers/`: Parsers for parsing streaming/non-streaming responses from different AI protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, OpenAI Responses).
       - `chatGenerate.dispatch.ts`: Creates a pipeline to execute Chat Generation to a specific provider.
       - `ChatGenerateTransmitter.ts`: Used to serialize and transmit AixWire_Particles to the client.
     - `/server/dispatch/wiretypes/`: AI provider Wire Types:
       - Type definitions for different AI providers/protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, xAI).
     - `stream.demuxers.ts`: Handles demuxing of different stream formats.
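As an illustration of the kind of work `stream.demuxers.ts` does, here is a minimal Server-Sent Events demuxer. This is a sketch of the general technique, not the actual implementation, which also handles event names, chunks split across network reads, and non-SSE formats:

```typescript
// Minimal SSE demuxer: splits a buffered chunk into events on blank lines
// and extracts the "data:" payloads, dropping the OpenAI-style terminator.
function demuxSseChunk(chunk: string): string[] {
  return chunk
    .split('\n\n')                             // events are separated by blank lines
    .flatMap(event => event.split('\n'))
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length))
    .filter(payload => payload !== '[DONE]');  // stream-end marker, not a payload
}
```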
This document describes how parameters flow through Big-AGI's LLM parameters system, from definition to API invocation.
## System Overview
The LLM parameters system operates across five layers that transform parameters from global definitions to vendor-specific API calls. Each layer serves a specific purpose in the parameter resolution pipeline.
The `DModelParameterRegistry` defines all available parameters with their constraints and metadata. Each parameter includes type information, validation rules, and default behavior.
**Default Value System**: The registry supports multiple default mechanisms:
- `nullable` - Parameters that can be explicitly null to skip API transmission
- `initialValue` - Parameter's base default (e.g., `llmVndOaiRestoreMarkdown: true`)
  { paramId: 'llmVndGeminiThinkingBudget', rangeOverride: [0, 8192] }, // custom range
]
```
**Parameter Visibility**: The `hidden` flag removes parameters from the UI while keeping them functional. Models can also mark parameters as `required`.
### Layer 3: Client Configuration
The system provides two UI configurators with different scopes:
**Hidden Parameters**: Parameters like `llmRef` are marked `hidden: true` in the registry and never appear in the UI, but remain functional for system use.
**Nullable Parameters**: Parameters with `nullable` configuration can be explicitly set to `null` to prevent transmission to the API, distinct from being undefined.
**Range Overrides**: Models can override parameter ranges (e.g., different Gemini models support different thinking budget ranges).
**Parameter Interactions**: The UI implements business logic like disabling web search when reasoning effort is 'minimal'.
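The default and nullability mechanisms described above imply a small resolution function. A hypothetical sketch of the precedence (user value over `initialValue`, with an explicit `null` meaning "do not transmit"); the shapes are illustrative, not the registry's actual types:

```typescript
// Hypothetical parameter resolution: an explicit null on a nullable parameter
// skips API transmission entirely; undefined falls back to the registry default.
interface ParamDef<T> { initialValue?: T; nullable?: boolean; }

function resolveParam<T>(def: ParamDef<T>, userValue: T | null | undefined): T | undefined {
  if (userValue === null && def.nullable)
    return undefined;                  // explicitly nulled: omit from the API call
  if (userValue !== undefined && userValue !== null)
    return userValue;                  // user override wins
  return def.initialValue;             // registry default
}
```

The distinction between `null` (suppress) and `undefined` (use default) is what makes "nullable" parameters different from merely optional ones.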
## Type Safety Mechanisms
The system maintains type safety through:
-`DModelParameterId` union from registry keys
-`DModelParameterValue<T>` conditional types for values
-`DModelParameterSpecAny` interfaces for specifications
- Runtime validation via Zod schemas at API boundaries
## Model Variant Pattern
Some vendors use model variants to enable features, for instance:
- **Anthropic**: Creates separate `idVariant: 'thinking'` entries that force the values of hidden parameters
- **Google/OpenAI**: Parameters directly on base models
## Migration and Compatibility
The architecture supports parameter evolution:
- **Precedence Rules**: Newer parameters take priority during AIX translation
Client-Side Fetch (CSF) enables direct browser-to-API communication, bypassing the server for LLM requests. When enabled, the browser makes requests directly to vendor APIs (e.g., `api.openai.com`, `api.groq.com`) instead of routing through the Next.js server. This reduces latency, decreases server load, and is particularly useful for local models where the browser can communicate directly with Ollama or LM Studio.
## Implementation
CSF is implemented as an opt-in setting stored as `csf: boolean` in each vendor's service settings. The vendor interface exposes `csfAvailable?: (setup) => boolean` to determine if CSF can be enabled (typically checking if an API key or host is configured). The actual execution happens in `aix.client.direct-chatGenerate.ts` which dynamically imports when CSF is active, making direct fetch calls using the same wire protocols as the server.
All 20+ supported vendors (OpenAI, Anthropic, Gemini, Ollama, LocalAI, Deepseek, Groq, Mistral, xAI, OpenRouter, Perplexity, Together AI, Alibaba, Moonshot, OpenPipe, LM Studio, Z.ai, Azure, Bedrock) support CSF. Cloud vendors require CORS support from the API provider (all tested vendors return `access-control-allow-origin: *`). Local vendors (Ollama, LocalAI, LM Studio) require CORS to be enabled on the local server.
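The `csfAvailable` predicate described above typically reduces to a presence check on the vendor's setup. A hypothetical sketch (the setup field names are assumptions, not the actual vendor interface):

```typescript
// Hypothetical CSF availability check: cloud vendors need an API key in the
// browser, local vendors need a configured host. Field names are illustrative.
interface ServiceSetup { apiKey?: string; host?: string; }

function csfAvailable(setup: ServiceSetup, isLocalVendor: boolean): boolean {
  if (isLocalVendor)
    return !!setup.host?.trim();   // e.g. an Ollama / LM Studio / LocalAI endpoint
  return !!setup.apiKey?.trim();   // cloud vendor: direct calls need the key client-side
}
```

This matches the UI behavior described below: the toggle only appears once the prerequisite credential or host is configured.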
## UI
The CSF toggle appears in each vendor's setup panel under "Advanced" settings, labeled "Direct Connection". It becomes visible when the prerequisites are met (API key present for cloud vendors, host configured for local vendors). The setting is managed through `useModelServiceClientSideFetch` hook which provides `csfAvailable`, `csfActive`, `csfToggle`, and `csfReset` for UI consumption.
process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE = process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE || (process.env.VERCEL_ENV ? `vercel-${process.env.VERCEL_ENV}` : 'local'); // Docker or custom, Vercel