Mirror of https://github.com/enricoros/big-AGI.git (synced 2026-05-10 21:50:14 -07:00)

Compare commits (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | f9d30114c5 |  |

@@ -1 +0,0 @@

commands/code/apply-issue-main.md
@@ -1,20 +0,0 @@
|
||||
---
|
||||
description: Increment the AIX monotonic version number
|
||||
allowed-tools: Bash(git add:*),Bash(git status:*),Bash(git commit:*),Edit,Write
|
||||
model: haiku
|
||||
disable-model-invocation: true
|
||||
---
|
||||
|
||||
Increment `Monotonics.Aix` in `src/common/app.release.ts` and commit it.

**Pre-flight checks (MUST pass or abort):**
1. Run `git branch --show-current` - MUST be on the `main` branch
2. Run `git status src/common/app.release.ts` - the file MUST be unmodified (no changes to this specific file)

**Execute:**
1. Read the current `Monotonics.Aix` value from `src/common/app.release.ts`
2. Increment it by 1
3. Update ONLY that line
4. Run: `git add src/common/app.release.ts && git commit -m "Roll AIX"`

Confirm the new version number.

@@ -1,31 +0,0 @@

---
description: Sync Anthropic API implementation with latest upstream documentation
argument-hint: specific feature to check
---

Please take a look at my API code for Anthropic: message wire types in `src/modules/aix/server/dispatch/wiretypes/anthropic.wiretypes.ts`, assembly of the request messages (adapters) in `src/modules/aix/server/dispatch/chatGenerate/adapters/anthropic.messageCreate.ts`, and parsing of the response (streaming or not) in `src/modules/aix/server/dispatch/chatGenerate/parsers/anthropic.parser.ts`.

IMPORTANT: we only support the Messages API (message create). We do NOT support other APIs such as the older Completions API.
We support Anthropic caching natively, and want to make sure tools and state (crafting the history) are also handled well.

Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:

**Primary Sources:**
- Docs API: https://docs.claude.com/en/api/messages
- Release notes: https://docs.claude.com/en/release-notes/api
- Tools use: https://docs.claude.com/en/docs/agents-and-tools/tool-use/overview
- Handling stop reasons: https://docs.claude.com/en/api/handling-stop-reasons

**Alternative Sources if primary blocked:**
- Anthropic TypeScript SDK: https://github.com/anthropics/anthropic-sdk-typescript
- Anthropic Python SDK: https://github.com/anthropics/anthropic-sdk-python
- Recent news and announcements: Web Search for "anthropic api changelog" or "new claude api" or "new claude api pricing"

**If all blocked:** Explain what you attempted and ask the user to provide documentation manually.

$ARGUMENTS

Check carefully for discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Look deep into the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.

Please point out all differences in the API, whether in the final parsing and reassembly of the streaming message, protocol changes, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.

@@ -1,30 +0,0 @@

---
description: Sync Google Gemini API implementation with latest upstream documentation
argument-hint: specific feature to check
---

Please take a look at my API code for Google Gemini: message wire types in `src/modules/aix/server/dispatch/wiretypes/gemini.wiretypes.ts`, assembly of the request messages (adapters) in `src/modules/aix/server/dispatch/chatGenerate/adapters/gemini.generateContent.ts`, and parsing of the response (streaming or not) in `src/modules/aix/server/dispatch/chatGenerate/parsers/gemini.parser.ts`.

IMPORTANT: we only support the generateContent API, not other Gemini APIs (embeddings, etc.).
Caching is only supported when implicit; we do not explicitly manage Gemini Caches. The same goes for file uploads and other systems.
Image generation happens through models, i.e. 'Gemini 2.5 Flash - Nano Banana' generates images using AIX from generateContent (chat input).

Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:

**Primary Sources:**
- Docs API 1/2: https://ai.google.dev/api/generate-content
- Docs API 2/2: https://ai.google.dev/api/caching#Content
- Release notes: https://ai.google.dev/gemini-api/docs/changelog

**Alternative Sources if primary blocked:**
- Google AI JavaScript SDK: https://github.com/googleapis/js-genai (check latest commits, README, type definitions)
- Recent news and announcements: Web Search for "gemini api changelog" or "new gemini api updates" or "new gemini api pricing"

**If all blocked:** Explain what you attempted and ask the user to provide documentation manually.

$ARGUMENTS

Check carefully for discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Look deep into the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.

Please point out all differences in the API, whether in the final parsing and reassembly of the streaming message, protocol changes, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.

@@ -1,34 +0,0 @@

---
description: Sync OpenAI API implementation with latest upstream documentation
argument-hint: specific feature to check
---

Please take a look at my API code for OpenAI: message wire types in `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts`, assembly of the request messages (adapters) in `src/modules/aix/server/dispatch/chatGenerate/adapters/openai.chatCompletions.ts`, and parsing of the response (streaming or not) in `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.parser.ts`.

IMPORTANT: we prioritize the new Responses API; Chat Completions is still supported but legacy.
We do NOT support other APIs such as Realtime (incl. websockets), etc.
We also do not support the agentic APIs (Agents SDK, AgentKit, ChatKit, Assistants API, etc.), as we provide similar functionality in AIX (server- or client-side).

Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:

**Primary Sources:**
- Responses API (AIX prioritizes it): https://platform.openai.com/docs/api-reference/responses/create
- Chat Completions API: https://platform.openai.com/docs/api-reference/chat/create
- Changelog: https://platform.openai.com/docs/changelog
- Models: https://platform.openai.com/docs/models
- Pricing (use the Copy Page button to download markdown): https://platform.openai.com/docs/pricing

**Alternative Sources if primary blocked:**
- OpenAI Node.js SDK: https://github.com/openai/openai-node
- OpenAI Python SDK: https://github.com/openai/openai-python
- OpenAI OpenAPI spec: https://github.com/openai/openai-openapi
- Recent news and announcements: Web Search for "openai api changelog" or "openai new models" or "openai new prices"

**If all blocked:** Explain what you attempted and ask the user to provide documentation manually.

$ARGUMENTS

Check carefully for discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Look deep into the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.

Please point out all differences in the API, whether in the final parsing and reassembly of the streaming message, protocol changes, etc.
Prioritize breaking changes and new capabilities that would improve the user experience.

@@ -1,49 +0,0 @@

---
description: Sync OpenRouter API implementation with latest upstream documentation
argument-hint: specific feature to check
---

Review the OpenRouter implementation:
- Models list: `src/modules/llms/server/openai/openrouter.wiretypes.ts` (list API response schema)
- Chat wire types: `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts` (OpenAI-compatible)
- Request adapter: `src/modules/aix/server/dispatch/chatGenerate/adapters/openai.chatCompletions.ts` ('openrouter' dialect)
- Response parser: `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.parser.ts` (shared OpenAI parser)
- Vendor config: `src/modules/llms/vendors/openrouter/openrouter.vendor.ts`

GOAL: Ensure complete support for OpenRouter's API, including advanced features like reasoning/thinking tokens, tool use, search integration, and multi-modal capabilities. OpenRouter is OpenAI-compatible but has important extensions and differences.

Use the Task tool with subagent_type=Explore and thoroughness="very thorough" to discover:
1. Map API structure - all endpoints, parameters, capabilities from https://openrouter.ai/docs
2. **Advanced features** - How to use: reasoning/thinking tokens (o1, DeepSeek R1), tool use/function calling, search integration, multi-modal (vision/audio)
3. Changelog location - How does OpenRouter communicate API updates and breaking changes?
4. Model metadata - What capabilities are exposed in the models list API? How to detect feature support?
5. OpenAI deviations - Extensions, special headers (HTTP-Referer, X-Title), response fields, streaming differences

Then check the latest API information. Try these sources (be creative if blocked):

**Primary Sources:**
- API Reference: https://openrouter.ai/docs/api-reference
- Chat Completions: https://openrouter.ai/docs/api-reference#chat-completions
- Models List: https://openrouter.ai/docs/api-reference#models-list
- Parameters Guide: https://openrouter.ai/docs/parameters
- Announcements: https://openrouter.ai/announcements (feature launches, API updates, new models)
- Models Directory: https://openrouter.ai/models (check metadata for capabilities)

**Alternative Sources:**
- GitHub: https://github.com/OpenRouterTeam (SDKs, examples, issues for recent changes)
- Web Search: "openrouter api changelog" or "openrouter reasoning tokens" or "openrouter tool use"

**If blocked:** Ask the user to provide documentation.

$ARGUMENTS

Focus on discrepancies and gaps:
- **Request/Response structure**: New fields, changed requirements, streaming event types
- **Feature support**: Thinking tokens format, tool calling protocol, search parameters
- **Model capabilities**: How to detect and enable advanced features per model
- **OpenRouter extensions**: Headers, routing, fallbacks, rate limiting (free vs paid)
- **Breaking changes**: Protocol updates, deprecated fields, new required parameters

Report differences in wire types, adapter logic, parser handling, or dialect-specific quirks.
Prioritize new capabilities that improve the user experience (reasoning visibility, better tool use, etc.).

When making changes, add comments with the date: `// [OpenRouter, 2026-MM-DD]: explanation`

@@ -1,56 +0,0 @@

---
description: Sync xAI Responses API implementation with latest upstream documentation
argument-hint: specific feature to check
---

Review the xAI Responses API implementation:
- xAI wire types: `src/modules/aix/server/dispatch/wiretypes/xai.wiretypes.ts` (xAI-specific request schema, tools)
- Request adapter: `src/modules/aix/server/dispatch/chatGenerate/adapters/xai.responsesCreate.ts` (AIX → xAI Responses API)
- Response parser: `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.responses.parser.ts` (shared with OpenAI Responses)
- Dispatch routing: `src/modules/aix/server/dispatch/chatGenerate/chatGenerate.dispatch.ts` (dialect='xai' routing)
- OpenAI shared types: `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts` (InputItem/OutputItem schemas reused by xAI)

IMPORTANT context:
- We use ONLY the xAI Responses API (`POST /v1/responses`). We do NOT use the Chat Completions API (`/v1/chat/completions`) for xAI anymore.
- xAI's Responses API is similar to OpenAI's but has key differences - this skill should find what changed since our last sync.
- Response streaming/parsing reuses the OpenAI Responses parser since the format is compatible.
- We do NOT implement: Files API, Collections Search, Remote MCP tools, Voice Agent API, Image/Video generation, Batch API, or Deferred Completions.

Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:

**Primary Sources (guide pages work well with WebFetch despite being JS-rendered):**
- Responses API Guide: https://docs.x.ai/docs/guides/chat
- Stateful Responses: https://docs.x.ai/docs/guides/responses-api
- Tools Overview: https://docs.x.ai/docs/guides/tools/overview
- Search Tools (web_search, x_search): https://docs.x.ai/docs/guides/tools/search-tools
- Code Execution Tool: https://docs.x.ai/docs/guides/tools/code-execution-tool
- Function Calling: https://docs.x.ai/docs/guides/function-calling
- Streaming: https://docs.x.ai/docs/guides/streaming-response
- Reasoning: https://docs.x.ai/docs/guides/reasoning
- Structured Outputs: https://docs.x.ai/docs/guides/structured-outputs
- Models & Pricing: https://docs.x.ai/developers/models
- Release Notes: https://docs.x.ai/developers/release-notes
- API Reference: https://docs.x.ai/developers/api-reference#create-new-response

**Alternative Sources if primary blocked:**
- xAI Python SDK: https://github.com/xai-org/xai-sdk-python
- Web Search for "xai grok api changelog 2026" or "xai responses api new features"

**If all blocked:** Explain what you attempted and ask the user to provide documentation manually.

$ARGUMENTS

Check carefully for discrepancies between our implementation and the current API docs:

1. **Request fields**: Compare `XAIWire_API_Responses.Request_schema` against current docs - any new, changed, or deprecated parameters?
2. **Tool definitions**: Compare `XAIWire_Responses_Tools` - any new parameters on web_search/x_search/code_interpreter? Any new hosted tool types?
3. **Input/Output item types**: Any xAI-specific output items not handled by the shared OpenAI parser (e.g., x_search_call, web_search_call, code_interpreter_call)?
4. **Streaming events**: Any xAI-specific SSE event types beyond what the OpenAI Responses parser handles?
5. **Response shape**: Usage reporting differences, new fields in the response object?
6. **Adapter logic**: Message role mapping, content type handling, system message approach - still correct?
7. **Include options**: Any new values for the `include` array?
8. **Reasoning config**: Which models support it, and with what values?

Prioritize breaking changes and new capabilities that would improve the user experience.
When making changes, add comments with the date: `// [xAI, 2026-MM-DD]: explanation`

**Self-update this skill**: After completing the sync, if your research reveals that assumptions in THIS skill file (`.claude/commands/aix/sync-xai-api.md`) are wrong or outdated - e.g., new APIs we now implement, new tool types added, URLs moved, file paths changed - update this skill file to stay accurate for next time.

@@ -1,34 +0,0 @@

---
description: Review in-flight changes for coherence, completeness, and quality
---

Review the current in-flight changes in the big-agi-private repository (dev branch, continuously rebased ~1800 commits on top of main).

**Step 1: Scope and read**

`git diff --stat` + `git status` for breadth. Then the full `git diff` (if empty: `git diff --cached`, then `git diff HEAD~1`).
For every file in the diff, read the surrounding context in the actual source file - the diff alone hides bugs in adjacent untouched code.

**Step 2: Reverse-engineer the intent**

From the diff, determine the **what**, **how**, and **why**. Present this concisely so the author can confirm or correct,
but don't stop there; continue to the full review in the same response.

**Step 3: Validate**

Run `tsc --noEmit --pretty` and `npm run lint` (in parallel). Report any errors with the review.
If the diff removes/renames identifiers, grep the codebase for stale references to the OLD names. This catches broken guards, stale imports, and incomplete migrations.

**Step 4: Deep review**

Evaluate every file in the diff.
Leave no stone unturned - correctness, coherence, completeness, excess, generalization, maintenance burden,
codebase consistency, etc.

**Step 5: Prioritized next steps**

Think about what happens when the next developer touches this code.
Rank findings by severity (bug > correctness > cleanup > cosmetic). Be specific about what to change and where.

Remember the design values for this codebase: orthogonal features, features that generalize well, modularized and reusable code,
type-discriminated data, optimized code, zero maintenance burden. Minimize future pain, etc.

@@ -1,63 +0,0 @@

---
description: Sync LLM parameter options between full model dialog and chat side panel
---

Audit and sync LLM parameter configurations between the two UI editors. Goal: identical `value` fields in option arrays + equivalent onChange logic. Labels/descriptions can differ for UI space.

**Files to Compare:**
1. **Full Model Dialog**: `src/modules/llms/models-modal/LLMParametersEditor.tsx` (main branch)
2. **Chat Side Panel**: `src/apps/chat/components/layout-panel/ChatPanelModelParameters.tsx` (main-derived branches only)

**Reference Documentation:**
- Parameter system: `kb/systems/LLM-parameters-system.md`
- Parameter registry: `src/common/stores/llms/llms.parameters.ts`

**Task: Perform a comprehensive audit**

1. **Read both files** and extract all option arrays (e.g., `_reasoningEffortOptions`, `_antEffortOptions`, `_geminiThinkingLevelOptions`, etc.)

2. **Check for missing parameters:**
   - Parameters handled in `LLMParametersEditor.tsx` but NOT in `ChatPanelModelParameters.tsx`
   - Parameters in `ChatPanelModelParameters.tsx`'s `_interestingParameters` array but missing UI controls
   - Note: The side panel intentionally shows only "interesting" parameters - focus on those listed in `_interestingParameters`

3. **Check for value mismatches** between corresponding option arrays:
   - Different number of options (e.g., 3 vs 4 options)
   - Same label but different `value` (this causes the bug in issue #926)
   - Different labels for the same `value`
   - Missing `_UNSPECIFIED`/Default option in one but not the other

4. **Check onChange handler consistency:**
   - Both should remove the parameter on `_UNSPECIFIED` selection
   - Both should set explicit values the same way
   - Watch for conditions like `value === 'high'` that may differ

**Output Format:**

```
## Parameter Sync Audit Report

### Missing Parameters
- [ ] `llmVndXyz` - In full dialog, missing from side panel

### Value Mismatches
- [ ] `_xyzOptions`:
  - Full dialog: [values...]
  - Side panel: [values...]
  - Issue: [description]

### Handler Inconsistencies
- [ ] `llmVndXyz` onChange differs: [explanation]

### Recommended Fixes
1. [Specific fix with code snippet if needed]
```

**Fix Direction:** The full dialog is the source of truth. Update the side panel to match its values when mismatched.

**Notes:**
- The side panel uses shorter descriptions (space-constrained) - that's fine
- Variable names may differ (e.g., `_anthropicEffortOptions` vs `_antEffortOptions`) - that's fine, but same is better
- `value` fields must be identical sets
- `_UNSPECIFIED` must mean the same thing in both
- onChange: remove on `_UNSPECIFIED`, set an explicit value otherwise

@@ -1,20 +0,0 @@

---
description: Update Alibaba model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/alibaba.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Models & Pricing: https://www.alibabacloud.com/help/en/model-studio/models
- Billing Guide: https://www.alibabacloud.com/help/en/model-studio/billing-for-model-studio

**Fallbacks if blocked:**
- Search "alibaba model studio latest pricing", "alibaba latest models", "qwen models pricing", or search GitHub for latest model prices and context windows

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,49 +0,0 @@

---
description: Update Anthropic model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/anthropic/anthropic.models.ts` with latest model definitions.

Reference files (for context only, do not modify):
- `src/modules/llms/server/llm.server.types.ts`
- `src/modules/llms/server/models.mappings.ts`
- `src/common/stores/llms/llms.parameters.ts`

**Workflow: Start with recent changes, then verify the full model list.**

**Primary Sources (append `.md` to any path for clean markdown):**
1. Recent changes: https://platform.claude.com/docs/en/release-notes/overview.md
2. Models & IDs: https://platform.claude.com/docs/en/about-claude/models/overview.md
3. Pricing (base, cache, batch, long context): https://platform.claude.com/docs/en/about-claude/pricing.md
4. Deprecations & retirement dates: https://platform.claude.com/docs/en/about-claude/model-deprecations.md

**Discovering feature docs:** The release notes and models overview markdown contain inline links to feature-specific pages (thinking modes, effort, context windows, what's-new pages, etc.). When a new capability is referenced, follow those links - append `.md` to get markdown. Examples of pages you might discover this way:
- `about-claude/models/whats-new-claude-*` - per-generation changes
- `build-with-claude/extended-thinking` - thinking budget configuration
- `build-with-claude/effort` - effort parameter levels
- `build-with-claude/adaptive-thinking` - adaptive thinking mode

**Fallback web pages** (crawl if `.md` paths break or structure changes):
- https://platform.claude.com/docs/en/about-claude/models/overview
- https://platform.claude.com/docs/en/about-claude/pricing
- https://platform.claude.com/docs/en/release-notes/overview
- https://claude.com/pricing

**Fallbacks if blocked:** Check the Anthropic TypeScript SDK at https://github.com/anthropics/anthropic-sdk-typescript, or web-search for "anthropic models latest pricing" / "anthropic latest models".

**Important:**
- Review the full model list for additions, removals, and price changes
- For new models: check which `parameterSpecs` are needed (thinking mode, effort levels, 1M context, skills, web tools) by reading the linked feature docs and comparing with existing model entries
- When thinking/effort semantics change between generations (e.g. adaptive vs manual thinking), document in comments
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,22 +0,0 @@

---
description: Update DeepSeek model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/deepseek.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Pricing: https://api-docs.deepseek.com/quick_start/pricing
- Model List: https://api-docs.deepseek.com/api/list-models
- Release Notes: https://api-docs.deepseek.com/updates (check for version updates like V3.2-Exp)

**Note:** DeepSeek frequently releases new versions with significant pricing changes. Always check the release notes first.

**Fallbacks if blocked:** Search "deepseek api latest pricing", "deepseek latest models", "deepseek models list", or search GitHub for latest model prices and context windows

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,91 +0,0 @@

---
description: Update/validate dynamic vendor model parsers (OpenRouter, TogetherAI, Alibaba, Azure, Novita, ChutesAI, FireworksAI, TLUS, LM Studio, LocalAI, FastAPI)
---

Validate that the dynamic (API-fetched) vendor model parsers are up to date and not silently broken.

These vendors do NOT have hardcoded model lists - they fetch models from APIs at runtime. But their parsers, filters, heuristic detection, and capability mapping can break if upstream APIs change. This skill covers all dynamic vendors NOT covered by the other `llms:update-models-{vendor}` skills.

## Vendors to Validate

### High Risk

**OpenRouter** - `src/modules/llms/server/openai/models/openrouter.models.ts`
- Most complex parser. Vendor-specific parameter inheritance (Anthropic thinking variants, Gemini thinking/image, OpenAI reasoning effort, xAI/DeepSeek reasoning).
- Hardcoded family ordering list (lines ~24-37) - check if new leading vendors are missing.
- Hardcoded old/deprecated model hiding list (lines ~39-49) - check if stale.
- Cache pricing detection (Anthropic-style vs OpenAI-style) - verify the format is still valid.
- Variant injection for Anthropic thinking/non-thinking - verify still correct.
- Reference: https://openrouter.ai/docs/models

### Medium Risk

**Novita** - `src/modules/llms/server/openai/models/novita.models.ts`
- Features array mapping (`function-calling`, `reasoning`, `structured-outputs`) and input modalities parsing.
- Pricing unit conversion (hundredths of a cent per million tokens → dollars per 1K tokens; see the sketch after this block).
- Hostname heuristic: `novita.ai`.

**ChutesAI** - `src/modules/llms/server/openai/models/chutesai.models.ts`
- Custom `max_model_len` field for the context window.
- Assumes all models support Vision + Functions (aggressive).
- Hostname heuristic: `.chutes.ai`.

**FireworksAI** - `src/modules/llms/server/openai/models/fireworksai.models.ts`
- Relies on provider capability flags: `supports_chat`, `supports_image_input`, `supports_tools`.
- Hostname heuristic: `fireworks.ai/`.

**TogetherAI** - `src/modules/llms/server/openai/models/together.models.ts`
- Type allow-list (`type: 'chat'`), vision detection by string match.
- Custom wire schema with pricing conversion.

**TLUS** - `src/modules/llms/server/openai/models/tlusapi.models.ts`
- Detected by response structure (`total_models`, `free_models`, `pro_models` fields).
- Capability enum mapping (`text`, `vision`, `audio`, `tool-calling`, `reasoning`, `websearch`).
- Tier-based pricing (`free` vs paid).

**Alibaba** - `src/modules/llms/server/openai/models/alibaba.models.ts`
- Model list was cleared (dynamic-only). Exclusion patterns for non-chat models.
- Assumes 128K context and Vision+Functions for all models (overly permissive).
- Check if hardcoded data should be restored now that naming has stabilized.

### Low Risk (local/generic - validate only if issues reported)

**Azure** - `src/modules/llms/server/openai/models/azure.models.ts`
- Custom deployments API, not `/v1/models`. User-specific. Deployment name fallback logic.

**LM Studio** - `src/modules/llms/server/openai/models/lmstudio.models.ts`
- Local service, native API (`/api/v1/models`). GGUF metadata parsing, capability flags.

**LocalAI** - `src/modules/llms/server/openai/models/localai.models.ts`
- Local service. String-based hide list, vision/reasoning detection by name pattern.

**FastAPI** - `src/modules/llms/server/openai/models/fastapi.models.ts`
- Generic passthrough. Detected by `owned_by === 'fastchat'`. Minimal parsing.

## Validation Checklist

For each vendor (prioritize High > Medium > Low):

1. **Read the parser file** and check for:
   - Deny/allow lists that may be stale (new model families missing)
   - Capability assumptions that may be wrong (e.g. "all models support vision")
   - Field names that may have changed upstream
   - Pricing conversion math that may use wrong units

2. **Check upstream docs** (where available) for:
   - API response schema changes
   - New model types or capability fields
   - Deprecated fields

3. **Cross-reference with OpenRouter** (aggregator):
   - OpenRouter surfaces models from many of these vendors
   - If OpenRouter shows capabilities that a vendor's parser misses, the parser is stale

4. **Fix issues found** - update parsers, filters, and deny lists as needed.

5. Run `tsc --noEmit` after changes.

**Important:**
- Do NOT convert dynamic vendors to hardcoded lists - the dynamic approach is intentional
- Focus on parser correctness, not model coverage
- Flag any vendor whose API response format seems to have changed substantially

@@ -1,21 +0,0 @@

---
description: Update Gemini model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/gemini/gemini.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.types.ts`, `src/modules/llms/server/llm.server.types.ts`, and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Models: https://ai.google.dev/gemini-api/docs/models
- Pricing: https://ai.google.dev/gemini-api/docs/pricing
- Changelog: https://ai.google.dev/gemini-api/docs/changelog

**Fallbacks if blocked:** Check the Google AI JS SDK at https://github.com/googleapis/js-genai, search "gemini models latest pricing", "gemini latest models", or search GitHub for latest model prices and context windows

**Important:**
- Ignore context windows (auto-determined at runtime) and training cutoffs (not supported)
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review, do NOT remove comments
- Flag broken links or unexpected content

@@ -1,19 +0,0 @@

---
description: Update Groq model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/groq.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Fetch https://console.groq.com/docs/models.md directly (markdown format, no search needed)
- Pricing: https://groq.com/pricing/

**Do NOT use web search.** The `.md` endpoint provides structured markdown content directly.

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,19 +0,0 @@

---
description: Update Kimi model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/moonshot.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources (fetch directly, no search needed):**
- Pricing: https://platform.moonshot.ai/docs/pricing/chat
- API Reference: https://platform.moonshot.ai/docs/api/chat

**Do NOT use web search.** Fetch the URLs directly, or ask the user to provide the data if they are inaccessible.

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,24 +0,0 @@

---
description: Update Mistral model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/mistral.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Models: https://docs.mistral.ai/getting-started/models/models_overview/
- Pricing: https://mistral.ai/pricing#api-pricing
- Changelog: https://docs.mistral.ai/getting-started/changelog/

**Fallbacks if blocked:**
- Search "mistral [model-name] latest pricing", "mistral api latest pricing", "mistral latest models", or search GitHub for latest model prices and context windows
- Cross-reference: pricepertoken.com, helicone.ai, artificialanalysis.ai
- Check the Mistral API list-models response
- As a last resort: use Chrome DevTools MCP to render the pricing table

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,44 +0,0 @@

---
description: Update Ollama model definitions with latest featured models
---

Update `src/modules/llms/server/ollama/ollama.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Automated Workflow:**
```bash
# 1. Fetch the HTML (sorted by newest for stable ordering)
curl -s "https://ollama.com/library?sort=newest" -o /tmp/ollama-newest.html

# 2. Parse it with the script
node .claude/scripts/parse-ollama-models.js > /tmp/ollama-parsed.txt 2>&1

# 3. Review the parsed output
cat /tmp/ollama-parsed.txt
```

The parser outputs: `modelName|pulls|capabilities|sizes`
- Example: `deepseek-r1|66200000|tools,thinking|1.5b,7b,8b,14b,32b,70b,671b`

**Primary Sources:**
- Model Library: https://ollama.com/library?sort=newest
- Parser script: `.claude/scripts/parse-ollama-models.js`

**Fallbacks if blocked:** Check https://github.com/ollama/ollama, search "ollama featured models", "ollama latest models", or search GitHub for latest model info

**Important:**
- Parser filtering rules:
  - The top 30 newest models are always included (regardless of pull count)
  - After the top 30, only models with 50K+ pulls are included
  - Models with the 'cloud' capability are automatically excluded
  - Models with the 'embedding' capability are automatically excluded
- Sort them in the EXACT same order as the source (newest first, for stable ordering)
- Extract tags: 'tools' → hasTools, 'vision' → hasVision, 'embedding' → isEmbeddings (note the 's'), 'thinking' → tags only
- Extract 'b' tags (1.5b, 7b, 32b) to the tags field
- Set today's date (YYYYMMDD format) for newly added models only
- Update the OLLAMA_LAST_UPDATE constant to today's date
- Do NOT change dates of existing models
- Review the full model list for additions, removals, and changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments and newlines to make diffs easy to review

@@ -1,26 +0,0 @@

---
description: Update OpenAI model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/openai.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Manual hint:** For the pricing page, expand all tables before copying content.

**Primary Sources:**
- Models: https://platform.openai.com/docs/models (use the Copy Page button)
- Pricing: https://platform.openai.com/docs/pricing (expand tables first)

**Known Issue:** OpenAI docs block automated access (403 Forbidden). Manual browser access required.

**Fallbacks if blocked:**
- Search "openai models latest pricing", "openai latest models" for third-party aggregators, or search GitHub for latest model prices and context windows
- The OpenAI Node SDK (https://github.com/openai/openai-node) has limited model metadata only
- As a last resort: use Chrome DevTools MCP to navigate and extract from the official docs

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,19 +0,0 @@

---
description: Update OpenPipe model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/openpipe.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Base Models: https://docs.openpipe.ai/base-models
- Pricing: https://docs.openpipe.ai/pricing/pricing

**Fallbacks if blocked:** Search "openpipe models latest pricing", "openpipe latest models", "openpipe base models", or search GitHub for latest model prices and context windows

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,20 +0,0 @@

---
description: Update Perplexity model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/perplexity.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Models: https://docs.perplexity.ai/getting-started/models
- Pricing: https://docs.perplexity.ai/getting-started/pricing
- Changelog: https://docs.perplexity.ai/changelog/changelog

**Fallbacks if blocked:** Search "perplexity api latest pricing", "perplexity latest models", or search GitHub for latest model prices and context windows

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,23 +0,0 @@

---
description: Update xAI model definitions with latest pricing and capabilities
---

Update `src/modules/llms/server/openai/models/xai.models.ts` with latest model definitions.

Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.

**Primary Sources:**
- Models & Pricing: https://docs.x.ai/docs/models?cluster=us-east-1#detailed-pricing-for-all-grok-models

**Known Issue:** docs.x.ai blocks automated access (403 Forbidden). Use the fallbacks below.

**Fallbacks if blocked:**
- Search "xai grok latest pricing", "xai latest models", "xai api models", or search GitHub for latest model prices and context windows
- Third-party pages (quality varies): https://the-rogue-marketing.github.io/grok-api-latest-llms-pricing-october-2025/ (find a newer version), https://langdb.ai/app/providers/xai/ (browse by model, limited coverage)
- As a last resort: use Chrome DevTools MCP to access docs.x.ai

**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content

@@ -1,57 +0,0 @@

---
description: Verify model parameterSpecs match API-validated sweep data
argument-hint: openai | anthropic | gemini | xai (or empty for all)
---

# Verify LLM Parameters

Compare model `parameterSpecs` in the definition files against API-validated sweep data.

If `$ARGUMENTS` is provided, verify only that dialect, which means reading its pair of sweep results and model definitions. Otherwise verify all four, reading the pairs in sequence.

## Files

**Sweep results** (source of truth for select parameters):
- `tools/develop/llm-parameter-sweep/llm-{dialect}-parameters-sweep.json`
By the time you see these files, the repo owner has already updated them via `tools/develop/llm-parameter-sweep/sweep.sh` (very long running, 15 min per vendor).

**Model definitions** (source of truth for the model data exposed to the user and application, including constants, interfaces, supported parameters, and sometimes allowed parameter values):
- OpenAI: `src/modules/llms/server/openai/models/openai.models.ts`
- Anthropic: `src/modules/llms/server/anthropic/anthropic.models.ts`
- Gemini: `src/modules/llms/server/gemini/gemini.models.ts`
- xAI: `src/modules/llms/server/openai/models/xai.models.ts`

## Task

The sweep data is the source of truth for allowed model parameter values or value ranges.

For each model in the sweep, verify the model definition exposes exactly those capabilities - no more, no less. This includes:
- The parameter is present in parameterSpecs
- The paramId variant covers exactly the values from the sweep, if applicable
- etc.

Report models where the definition doesn't match the sweep.

## Parameter Mapping

Example parameter mapping below. Note that new parameters may have been added to both the definitions and the sweep.
The sweep's objective is to hint at model definition values, but the model definitions are what matters for Big-AGI,
and they need to be updated carefully - otherwise thousands of clients may break.

| Dialect   | Sweep Key                | Model paramId                |
|-----------|--------------------------|------------------------------|
| OpenAI    | `oai-reasoning-effort`   | `llmVndOaiEffort`            |
| OpenAI    | `oai-verbosity`          | `llmVndOaiVerbosity`         |
| OpenAI    | `oai-image-generation`   | `llmVndOaiImageGeneration`   |
| OpenAI    | `oai-web-search`         | `llmVndOaiWebSearchContext`  |
| Anthropic | `ant-effort`             | `llmVndAntEffort`            |
| Anthropic | `ant-thinking-budget`    | `llmVndAntThinkingBudget`    |
| Gemini    | `gemini-thinking-level`  | `llmVndGemEffort`            |
| Gemini    | `gemini-thinking-budget` | `llmVndGeminiThinkingBudget` |
| xAI       | `xai-web-search`         | `llmVndXaiWebSearch`         |
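
For illustration, a mismatch might look like this sketch; both the sweep JSON shape and the spec syntax are assumptions - read the real files for the exact formats:

```typescript
// Hypothetical sweep entry for one model (shape is an assumption - read the real JSON):
const sweepEntry = { model: 'gpt-5', 'oai-reasoning-effort': ['low', 'medium', 'high'] };

// The model definition should then expose exactly those values (spec syntax is illustrative):
const modelDef = {
  id: 'gpt-5',
  parameterSpecs: [
    { paramId: 'llmVndOaiEffort', values: ['low', 'medium', 'high'] },
  ],
};

// Mismatch to report: if the definition offered ['minimal', 'low', 'medium', 'high']
// while the sweep validated only three values, flag that model.
```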

## Output

For every model, report first the expected values from the sweep, then the actual values from the definition, then the mismatches.

Finally, make one table per dialect listing all models with mismatches and the specific issues.

@@ -1,56 +0,0 @@

---
description: Generate changelog bullets for big-agi.com/changes
argument-hint: date like "2026-01-10" or empty for auto-detect
---

Generate changelog bullets for a single entry in https://big-agi.com/changes

**Step 1: Find the starting date**

IMPORTANT: This repo rebases frequently, so commits are INTERLEAVED throughout history.
New commits can appear at line 10, 500, or 1800. Use the AUTHOR DATE (`%ad`) to filter - it's preserved during rebases.

If `$ARGUMENTS` is provided, use it as the cutoff date.

If NO argument:
1. Fetch https://big-agi.com/changes to get the most recent changelog date
2. Use that date as the cutoff

**Step 2: Get commits by author date**

Filter commits by author date to catch ALL new commits regardless of position in history:

```bash
# For commits after Jan 10, 2026 (adjust date pattern as needed)
git log --oneline --no-merges --format="%h %ad %s" --date=short | grep "2026-01-1[1-9]\|2026-01-2\|2026-02"

# Verify interleaving by checking line numbers
git log --oneline --no-merges --format="%h %ad %s" --date=short | grep -n "2026-01-1[1-9]"
```

The line numbers prove commits are scattered (e.g., lines 14, 638, 1156, 1803 = interleaved).

**Step 3: Write bullets**

Real examples from big-agi.com/changes:
- "Gemini 3 Flash support with 4-level thinking: high, medium, low, minimal"
- "Cloud Sync launched! - long awaited and top requested"
- "Deepseek V3.2 Speciale comes with almost Gemini 3 Pro performance but 20 times cheaper"
- "Anthropic Opus 4.5 with controls for effort (speed tradeoff), thinking budget, search"
- "Login with email, via magic link"
- "Mobile UX fixes for popups drag/interaction"

**Rules:**

1. **Order by importance** - most significant changes first, minor fixes last
2. **Feature-first, no verb prefixes** - "Gemini 3 support" not "Add Gemini 3 support"
3. **Model names lead** when it's about LLMs
4. **Specific details** - "4-level thinking: high, medium, low, minimal" not "multiple thinking levels"
5. **One-liners** - short, no fluff
6. **Consolidate commits** - 10 persona editor commits = 1 bullet
7. **No corporate speak** - no "enhanced", "streamlined", "robust", "leverage"

**Skip:** WIP, internal refactors, KB docs, automation, review cleanups, trivial fixes, deps bumps, CI changes.

**Output:** Just bullets, ready to paste. Aim for 2-5 bullets, adapting to scope relative to the usual https://big-agi.com/changes entries.

@@ -1,113 +0,0 @@

---
description: Execute the Big-AGI release process
argument-hint: version like "2.0.4" or empty to auto-increment patch
---

Execute the release process for Big-AGI. Go step-by-step, waiting for user approval between major steps.

## Step 1: Determine Version

If `$ARGUMENTS` is provided, use it. Otherwise, read `package.json` and increment the patch version.

## Step 2: Update Files

1. **package.json** - Update the `version` field
2. **src/common/app.release.ts** - Increment `Monotonics.NewsVersion` (e.g., 203 → 204)
3. **src/apps/news/news.data.tsx** - Add a new entry at the top of the `NewsItems` array

For the news entry, ask the user for the release name and key highlights.

**News entry style** - The draft is a starting point, the user will refine:
- Models lead when model-heavy, grouped together
- Callout features get their own bullet with a colon explanation
- UX items grouped, minimal bold
- Fixes last, brief
- Release name stays subtle - don't oversell the theme

Use `<B>`, `<B issue={N}>`, `<B href='url'>`. Re-read the file after user edits.

4. User runs `npm i` to update the lockfile

## Step 3: README

Update `README.md`:
- Line ~46: Update model examples if there are new flagship models
- Line ~147: Add a release bullet above the previous version

**Style:** `- Open X.Y.Z: **Name** feature1, feature2, feature3`

## Step 4: Git Operations

The user commits the changes, then:
```bash
git tag vX.Y.Z
git push opensource vX.Y.Z
```

## Step 5: GitHub Release

Create the release with `gh release create`. Structure:

```
# Big-AGI X.Y.Z - Name

## What's New

### **Headline Feature**
1-2 sentences explaining the main theme. Then bullet points for specifics.

### **Also New**
- Bullet list of other features
- Keep it scannable

**Full Changelog**: https://github.com/enricoros/big-AGI/compare/vPREV...vNEW

## Get Started
Available now at [big-agi.com](https://big-agi.com), via Docker, or self-host from source.
```

## Step 6: Announcements

Draft for the user to post:

**Twitter** - Thematic, not feature dumps. Talk about what it means, not what it lists:
```
Big-AGI Open X.Y.Z is out!

[Theme - e.g., "Lots of love to models: native support, latest protocols, total configuration - puts you in control."]

[One more angle, natural prose]

[Optional link]
```

**Discord** - Structured with bold headers:
```
## :partyblob: Big-AGI **Open** X.Y.Z

**Category:** Items
**Category:** Items
**More:** Count of commits/fixes
```

## Tone Guide

**Good:**
- "Lots of love to models: native support, latest protocols, total configuration"
- "UX quality of life improvements, from Google Drive to message reorder"
- "Gemini 3 Flash support with 4-level thinking: high, medium, low, minimal"

**Bad:**
- "Rolling out the red carpet for top models!" (too salesy)
- "Enhanced and streamlined the robust model experience" (corporate speak)
- "Added support for Gemini 3 Flash model with multiple thinking levels" (verb prefix, vague)

## Reference

Find previous copy at:
- **GitHub releases:** https://github.com/enricoros/big-AGI/releases
- **News entries:** `src/apps/news/news.data.tsx`
- **README:** `README.md` release notes section
- **Changelog:** https://big-agi.com/changes

Match the existing tone - professional but human, specific not generic, features not marketing.

@@ -1,113 +0,0 @@

#!/usr/bin/env node
/**
 * Parse Ollama models from HTML (sorted by newest for stable ordering)
 *
 * Usage:
 *   1. Fetch HTML: curl -s "https://ollama.com/library?sort=newest" -o /tmp/ollama-newest.html
 *   2. Parse: node .claude/scripts/parse-ollama-models.js
 *
 * Outputs: pipe-delimited format: modelName|pulls|capabilities|sizes
 * Example: deepseek-r1|66200000|tools,thinking|1.5b,7b,8b,14b,32b,70b,671b
 *
 * Filtering rules:
 * - Top 30 newest models are always included (regardless of pull count)
 * - After top 30, only models with 50K+ pulls are included
 * - Models with 'cloud' capability are always excluded
 * - Models with 'embedding' capability are always excluded
 *
 * Pull counts are rounded to significant figures for stable diffs:
 * - >=10M: round to 100K (e.g., 109,123,456 -> 109,100,000)
 * - >=1M: round to 10K (e.g., 5,432,100 -> 5,430,000)
 * - <1M: round to 1K (e.g., 88,700 -> 89,000)
 */

const fs = require('fs');

const htmlPath = process.argv[2] || '/tmp/ollama-newest.html';
const TOP_N_ALWAYS_INCLUDE = 30;
const MIN_PULLS_THRESHOLD = 50000;

if (!fs.existsSync(htmlPath)) {
  console.error(`Error: HTML file not found at ${htmlPath}`);
  console.error('Please fetch it first with:');
  console.error('  curl -s "https://ollama.com/library?sort=newest" -o /tmp/ollama-newest.html');
  process.exit(1);
}

const html = fs.readFileSync(htmlPath, 'utf8');

// Split into model sections - each starts with <a href="/library/
const modelSections = html.split(/<a href="\/library\//);
const allParsedModels = [];

for (let i = 1; i < modelSections.length; i++) {
  const section = modelSections[i].substring(0, 5000); // Large enough window to capture all data

  // Extract model name (first quoted string)
  const nameMatch = section.match(/^([^"]+)"/);
  if (!nameMatch) continue;
  const name = nameMatch[1];

  // Extract pulls using x-test-pull-count
  const pullsMatch = section.match(/x-test-pull-count>([^<]+)</);
  let pulls = 0;
  if (pullsMatch) {
    const pullStr = pullsMatch[1].replace(/,/g, '');
    if (pullStr.includes('M')) {
      pulls = Math.floor(parseFloat(pullStr) * 1000000);
    } else if (pullStr.includes('K')) {
      pulls = Math.floor(parseFloat(pullStr) * 1000);
    } else {
      pulls = parseInt(pullStr, 10);
    }
  }

  // Extract capabilities (tools, vision, embedding, thinking, cloud)
  const capabilities = [];
  const capabilityRegex = /x-test-capability[^>]*>([^<]+)</g;
  let capMatch;
  while ((capMatch = capabilityRegex.exec(section)) !== null) {
    capabilities.push(capMatch[1].trim());
  }

  // Extract sizes (1.5b, 7b, etc.)
  const sizes = [];
  const sizeRegex = /x-test-size[^>]*>([^<]+)</g;
  let sizeMatch;
  while ((sizeMatch = sizeRegex.exec(section)) !== null) {
    sizes.push(sizeMatch[1].trim());
  }

  // Skip models with 'cloud' or 'embedding' capability
  if (capabilities.includes('cloud') || capabilities.includes('embedding')) {
    continue;
  }

  allParsedModels.push({ name, pulls: roundPulls(pulls), capabilities, sizes });
}

// Apply filtering: top 30 always included, rest need 50K+ pulls
const models = allParsedModels.filter((model, index) => {
  return index < TOP_N_ALWAYS_INCLUDE || model.pulls >= MIN_PULLS_THRESHOLD;
});

/**
 * Round pulls to significant figures for stable output.
 * This reduces churn from daily fluctuations while preserving magnitude.
 */
function roundPulls(pulls) {
  if (pulls >= 10000000) return Math.round(pulls / 100000) * 100000; // >=10M: round to 100K
  if (pulls >= 1000000) return Math.round(pulls / 10000) * 10000;    // >=1M: round to 10K
  return Math.round(pulls / 1000) * 1000;                            // <1M: round to 1K
}

// Output in pipe-delimited format (in the order they appear on the page)
models.forEach(m => {
  const caps = m.capabilities.join(',');
  const tags = m.sizes.join(',');
  console.log(`${m.name}|${m.pulls}|${caps}|${tags}`);
});

const topNCount = Math.min(TOP_N_ALWAYS_INCLUDE, allParsedModels.length);
const thresholdCount = models.length - topNCount;
console.error(`\nTotal models: ${models.length} (top ${topNCount} newest + ${thresholdCount} with ${MIN_PULLS_THRESHOLD / 1000}K+ pulls)`);
|
||||
@@ -1,49 +0,0 @@
{
  "permissions": {
    "allow": [
      "Bash(cat:*)",
      "Bash(cp:*)",
      "Bash(curl:*)",
      "Bash(eslint:*)",
      "Bash(find:*)",
      "Bash(gh issue list:*)",
      "Bash(gh issue view:*)",
      "Bash(git branch:*)",
      "Bash(git cherry-pick:*)",
      "Bash(git describe:*)",
      "Bash(git grep:*)",
      "Bash(git log:*)",
      "Bash(git ls-tree:*)",
      "Bash(git mv:*)",
      "Bash(git show:*)",
      "Bash(grep:*)",
      "Bash(head:*)",
      "Bash(ls:*)",
      "Bash(mkdir:*)",
      "Bash(node:*)",
      "Bash(npm install)",
      "Bash(npm install:*)",
      "Bash(npm run:*)",
      "Bash(npx eslint:*)",
      "Bash(npx tsc:*)",
      "Bash(rg:*)",
      "Bash(rm:*)",
      "Bash(sed:*)",
      "Bash(tail:*)",
      "Bash(tree:*)",
      "Bash(tsc:*)",
      "Read(//tmp/**)",
      "Skill(llms:update-models*)",
      "WebFetch",
      "WebFetch(domain:big-agi.com)",
      "WebSearch",
      "mcp__chrome-devtools",
      "mcp__github",
      "mcp__ide__getDiagnostics"
    ],
    "deny": [
      "Read(node_modules)",
      "Read(node_modules/**)"
    ]
  }
}
@@ -1,18 +1,43 @@
*
# big-AGI non-code files
/docs/
/dist/
README.md

!app/
!kb/
!pages/
!public/
!src/
!tools/
# Ignore build and log files
Dockerfile
/.dockerignore

!*.mjs
!middleware_BASIC_AUTH.ts
!middleware.ts
!next.config.ts
!package*.json
!tsconfig.json
# Node build artifacts
/node_modules
/.pnp
.pnp.js

!LICENSE
!README.md
# next.js
/.next/
/out/

# production
/build

# versioning
.git/
.github/

# IDEs
.idea/

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*

# local env files
.env*.local

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts

@@ -0,0 +1,3 @@
{
  "extends": "next/core-web-vitals"
}
@@ -1,70 +0,0 @@
name: 🔥 Make AI Fix This
description: Bug, question, or feedback - AI analyzes and changes Big-AGI appropriately
labels: [ 'claude-triage' ]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for opening an issue! Our AI will analyze it and change Big-AGI appropriately.

        **What happens next:**
        - AI searches the codebase and documentation
        - You get a response, typically within 30 minutes
        - Ticket gets follow-up and community votes

  - type: textarea
    attributes:
      label: What's happening?
      description: Describe the bug, feature request, or question. Be as detailed as you can.
      placeholder: |
        Bug example: "In Beam, Anthropic models seem to have search off..."
        Model request: "Add Claude Opus 4.5 out today, see https://..."
        Feature example: "Add the option to save frequent prompt templates for reuse..."
    validations:
      required: true

  - type: dropdown
    attributes:
      label: Where does this happen?
      description: If this is a bug or issue, where are you experiencing it?
      options:
        - Big-AGI Pro (big-agi.com)
        - Self-deployed from GitHub
        - Docker deployment
        - Local development
        - Not applicable (question/feedback)
        - Other
    validations:
      required: false

  - type: dropdown
    attributes:
      label: Impact on your workflow
      description: How does this affect your use of Big-AGI?
      options:
        - Blocking - Can't use Big-AGI
        - High - Major feature broken
        - Medium - Workaround exists
        - Low - Minor inconvenience
        - None - Just a question/suggestion
    validations:
      required: false

  - type: textarea
    attributes:
      label: Environment (if applicable)
      description: Device, OS, browser - only if reporting a bug
      placeholder: |
        Device: Macbook Pro M3
        OS: macOS 15.2
        Browser: Chrome 131
    validations:
      required: false

  - type: textarea
    attributes:
      label: Additional context
      description: Screenshots, error messages, or anything else that helps
      placeholder: Paste screenshots or error messages here
    validations:
      required: false
@@ -5,29 +5,14 @@ labels: [ 'type: bug' ]
body:
  - type: markdown
    attributes:
      value: Thank you for reporting a bug. Please help us by providing accurate environment information.

  - type: dropdown
    attributes:
      label: Environment
      description: (required) Where are you experiencing this issue?
      options:
        - Big-AGI Pro (big-agi.com)
        - Self-deployed from GitHub
        - Docker container (specify in description)
        - Local development
        - Other
    validations:
      required: true

      value: Thank you for reporting a bug.
  - type: textarea
    attributes:
      label: Description
      description: (required) Please provide a clear description and **steps to reproduce**.
      description: (required) Please provide a clear description. Please also provide the steps to reproduce.
      placeholder: 'Concise description + steps to reproduce.'
    validations:
      required: true

  - type: textarea
    attributes:
      label: Device and browser
@@ -35,12 +20,10 @@ body:
      placeholder: 'Device: (e.g., iPhone 16, Pixel 9, PC, Macbook...), OS: (e.g., iOS 17, Windows 12), Browser: (e.g., Chrome 119, Safari 18, Firefox..)'
    validations:
      required: true

  - type: textarea
    attributes:
      label: Screenshots and more
      placeholder: 'Attach screenshots, or add any additional context here.'

  - type: checkboxes
    attributes:
      label: Willingness to Contribute

@@ -32,6 +32,7 @@ assignees: enricoros
- [ ] verify deployment on Vercel
- [ ] verify container on GitHub Packages
- [ ] update the GitHub release
- [ ] push as stable `git push opensource main:main-stable`
- Announce:
  - [ ] Discord announcement
  - [ ] Twitter announcement

@@ -1,69 +0,0 @@
version: 2
updates:
  - package-ecosystem: docker
    directory: /
    schedule:
      interval: weekly
    commit-message:
      prefix: "chore(deps)"
    ignore:
      - dependency-name: "node"
        versions: [">=25", "<26"] # Node 25 breaks the build because of a dummy localStorage object

  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
    commit-message:
      prefix: "chore(deps)"

# Disabled npm updates for now - will need precise package pinning, as some packages changed behavior upstream
# - package-ecosystem: npm
#   directory: /
#   schedule:
#     interval: weekly
#   commit-message:
#     prefix: "chore(deps)"
#   cooldown:
#     semver-patch: 3
#     semver-minor: 7
#     semver-major: 14
#   # Ignore packages intentionally pinned due to upstream issues
#   ignore:
#     # Issue #857: v11.6+ breaks streaming; tried 11.4.4/11.6/11.7, only 11.5.1 works
#     - dependency-name: "@trpc/*"
#       versions: [">=11.5.1", "<12"]
#     # Pinned during tRPC #857 debugging - may be safe to unpin, test first
#     - dependency-name: "@tanstack/react-query"
#       versions: [">=5.90.10", "<6"]
#     # Pinned because 5.0.8 changes signatures so return set({ .. }) != void;
#     - dependency-name: "zustand"
#       versions: [">=5.0.7", "<6"]
#   groups:
#     next:
#       patterns:
#         - "@next/*"
#         - "eslint-config-next"
#         - "next"
#     react:
#       patterns:
#         - "react"
#         - "react-dom"
#         - "@types/react"
#         - "@types/react-dom"
#     emotion:
#       patterns:
#         - "@emotion/*"
#     mui:
#       patterns:
#         - "@mui/*"
#     dnd-kit:
#       patterns:
#         - "@dnd-kit/*"
#     prisma:
#       patterns:
#         - "@prisma/*"
#         - "prisma"
#     vercel:
#       patterns:
#         - "@vercel/*"
@@ -1,59 +0,0 @@
name: Claude Code DM

on:
  issues:
    types: [opened, assigned]
  issue_comment:
    types: [created]
  pull_request_review:
    types: [submitted]
  pull_request_review_comment:
    types: [created]

jobs:
  claude-dm:
    # Only allow repository owner to trigger DMs with @claude (blocks other users and bots)
    if: |
      github.actor == 'enricoros' &&
      github.triggering_actor == 'enricoros' &&
      ((github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) ||
       (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
       (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
       (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')))

    runs-on: ubuntu-latest
    timeout-minutes: 30

    permissions:
      contents: write # Required for code creation and commits
      issues: write
      pull-requests: write
      actions: read # Required for Claude to read CI results on PRs
      id-token: write # required to use OIDC to authenticate to Claude Code API

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          fetch-depth: 0 # 1 -> 0: full history helps with git blame, etc.

      - name: Run Claude Code DM Response
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}

          # Security: Only users with write access can trigger (DMs allow code execution)
          # Note: contents:write permission enables code creation and commits

          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read

          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          claude_args: |
            --model claude-opus-4-6
            --max-turns 100
            --allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),SlashCommand"
@@ -1,83 +0,0 @@
name: Claude Code Auto-Triage Issues

on:
  issues:
    types: [ opened ]

jobs:
  claude-issue-triage:
    # Optional: Skip for bot users and direct mentions in the body (handled by claude-dm.yml)
    if: |
      github.event.issue.user.type != 'Bot' &&
      !contains(github.event.issue.body, '@claude')

    runs-on: ubuntu-latest
    timeout-minutes: 30

    permissions:
      contents: read
      issues: write
      pull-requests: read # was write, but we're not altering PRs here
      actions: read
      id-token: write # required to use OIDC to authenticate to Claude Code API

    steps:
      - name: Checkout repository
        uses: actions/checkout@v6
        with:
          fetch-depth: 0 # 1 -> 0: full history helps with git blame, etc.

      - name: Analyze issue and provide help
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          # Security: Allow any user to trigger triage (automated issue help is safe)
          github_token: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: '*'
          # track_progress: true # Enables tracking comments
          show_full_output: ${{ github.event.repository.private }} # security: do not log verbosely in private repo

          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read

          prompt: |
            REPO: ${{ github.repository }}
            ISSUE NUMBER: #${{ github.event.issue.number }}

            A user has reported an issue. Please help them by:

            1. Deep think about the issue:
               **Understand the problem**: Analyze the issue description and any error messages
               **Search for context**:
               - Use the repository's CLAUDE.md for high level guidance and especially kb/ documentation
               - Look in relevant code files, including kb/ documentation
               **Use web search**: When potentially outside Big-AGI (e.g. user configuration), search the web for similar errors or related issues
               **Provide a solution**:
               - Provide multiple solutions if uncertain, and say so
               - Analyze the code and suggest specific fixes with code examples
               - If possible also suggest fixes or workarounds for immediate relief
               - Reference specific files and line numbers
               - Test selectively and even npm install and run build if needed to verify the solution
            2. Always add the 'claude-triage' issue label to indicate this issue was triaged by Claude
            3. Comment with:
               - Very brief thank you note, if applicable
               - Initial assessment
               - Next steps or clarification needed
               - Link duplicates if found

            Remember: design values for this codebase: orthogonal features, features that generalize well, modularized and reusable code,
            type-discriminated data, optimized code, zero maintenance burden. Minimize future pain, etc.

            IMPORTANT: You are in READ-ONLY triage mode. Analyze and suggest solutions in your comment, but do NOT attempt to push code changes.
            If you're uncertain, say so and suggest next steps.
            Be welcoming, helpful, professional, solution-focused and no-BS.

          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          claude_args: |
            --model claude-opus-4-6
            --max-turns 75
            --allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),SlashCommand"
@@ -12,130 +12,34 @@ name: Create and publish Docker images
on:
  push:
    branches:
      - main # Primary branch (Big-AGI Open)
      - main
      #- main-stable # Disabled as the v* tag is used for stable releases
    tags:
      - 'v2.*' # Stable releases (v2.0.0, v2.1.0, etc.)
      - 'v*' # Trigger on version tags (e.g., v1.7.0)

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # Build job: runs on native runners for each platform (no QEMU emulation)
  build:
    strategy:
      fail-fast: false
      matrix:
        include:
          - platform: linux/amd64
            runner: ubuntu-latest
          - platform: linux/arm64
            runner: ubuntu-24.04-arm

    runs-on: ${{ matrix.runner }}
    name: Build ${{ matrix.platform }}
    timeout-minutes: 30

    permissions:
      contents: read
      packages: write

    steps:
      - name: Prepare
        run: |
          platform=${{ matrix.platform }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
          echo "IMAGE_NAME_LC=${IMAGE_NAME,,}" >> $GITHUB_ENV

      - name: Checkout repository
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          fetch-depth: 1

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0

      - name: Log in to the Container registry
        uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          labels: |
            org.opencontainers.image.title=Big-AGI Open
            org.opencontainers.image.description=Big-AGI Open - Multi-model AI workspace for experts who need to think broader, decide smarter, and build with confidence.
            org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
            org.opencontainers.image.documentation=https://big-agi.com

      - name: Build and push by digest
        id: build
        uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
        with:
          context: .
          file: Dockerfile
          platforms: ${{ matrix.platform }}
          labels: ${{ steps.meta.outputs.labels }}
          tags: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_LC }}
          build-args: |
            NEXT_PUBLIC_GA4_MEASUREMENT_ID=${{ secrets.GA4_MEASUREMENT_ID }}
            NEXT_PUBLIC_BUILD_HASH=${{ github.sha }}
            NEXT_PUBLIC_BUILD_REF_NAME=${{ github.ref_name }}
          outputs: type=image,push-by-digest=true,name-canonical=true,push=true,oci-mediatypes=true
          provenance: false
          cache-from: type=gha,scope=${{ github.repository }}-${{ matrix.platform }}
          cache-to: type=gha,scope=${{ github.repository }}-${{ matrix.platform }},mode=max

      - name: Export digest
        run: |
          mkdir -p ${{ runner.temp }}/digests
          digest="${{ steps.build.outputs.digest }}"
          touch "${{ runner.temp }}/digests/${digest#sha256:}"

      - name: Upload digest
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
        with:
          name: digests-${{ env.PLATFORM_PAIR }}
          path: ${{ runner.temp }}/digests/*
          if-no-files-found: error
          retention-days: 1

  # Merge job: combines platform-specific images into a unified multi-arch manifest
  merge:
    name: Merge manifests
  build-and-push-image:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    needs: build

    permissions:
      contents: read
      packages: write

    steps:
      - name: Prepare
        run: echo "IMAGE_NAME_LC=${IMAGE_NAME,,}" >> $GITHUB_ENV
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Download digests
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
        with:
          path: ${{ runner.temp }}/digests
          pattern: digests-*
          merge-multiple: true
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
        uses: docker/setup-buildx-action@v3

      - name: Log in to the Container registry
        uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
        uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
@@ -143,34 +47,23 @@ jobs:

      - name: Extract metadata (tags, labels) for Docker
        id: meta
        uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
        uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            # Development: main branch
            type=raw,value=development,enable=${{ github.ref == 'refs/heads/main' }}
            type=raw,value=stable,enable=${{ github.ref == 'refs/heads/main-stable' }}
            type=ref,event=tag # Use the tag name as a tag for tag builds
            type=semver,pattern={{version}} # Generate semantic versioning tags for tag builds
            type=sha # Just in case none of the above applies

            # Latest: v2.x releases (safe default)
            type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/v2.') }}

            # Stable: v2.x releases (alias)
            type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v2.') }}

            # Version tags (v2.0.0, 2.0.0)
            type=ref,event=tag
            type=semver,pattern={{version}}

      - name: Create manifest list and push
        working-directory: ${{ runner.temp }}/digests
        run: |
          docker buildx imagetools create \
            $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
            --annotation='index:org.opencontainers.image.title=Big-AGI Open' \
            --annotation='index:org.opencontainers.image.description=Big-AGI Open - Multi-model AI workspace for experts who need to think broader, decide smarter, and build with confidence.' \
            --annotation='index:org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}' \
            --annotation='index:org.opencontainers.image.documentation=https://big-agi.com' \
            $(printf '${{ env.REGISTRY }}/${{ env.IMAGE_NAME_LC }}@sha256:%s ' *)

      - name: Inspect image
        run: |
          docker buildx imagetools inspect ${{ env.REGISTRY }}/${{ env.IMAGE_NAME_LC }}:${{ steps.meta.outputs.version }}
      - name: Build and push Docker image
        uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
        with:
          context: .
          file: Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: NEXT_PUBLIC_GA4_MEASUREMENT_ID=${{ secrets.GA4_MEASUREMENT_ID }}

@@ -3,10 +3,6 @@
# Frontend Build: ignore API files disabled for this build
/app/**/*.backup

# Supabase - ignored for now
/supabase/
/*.sql

# dependencies
/node_modules
/.pnp
@@ -45,14 +41,4 @@ yarn-error.log*
next-env.d.ts

# other
.idea/

# Ignore k8s/env-secret.yaml
./k8s/env-secret.yaml
/certificates
.env*.local
/.run/dev (ENV).run.xml
/src/modules/3rdparty/aider/scratch*

# Ignore temporary CC files
/tmpclaude*
.idea/
@@ -0,0 +1,3 @@
overrides=@mui/material@^5.0.0:
dependencies:
  @mui/material: replaced-by=@mui/joy
@@ -1,237 +0,0 @@
# CLAUDE.md

Guidance to Claude Code when working with code in this repository.


## Architecture Overview

Big-AGI is a Next.js 15 application with a sophisticated modular architecture built for professional AI interactions.

### Development Commands

Dev servers may be already running on ports 3000, 3001, 3002, or 3003 (not always this app - other projects may occupy these ports). Never start or stop dev servers, let the user do it.

```bash
# Validate (~5s, safe while dev server runs, do NOT use `next build` ~45s for same checks)
tsc --noEmit --pretty && npm run lint   # Type check (~3.5s) + ESLint (~2s)
eslint src/path/to/file.ts              # Lint specific file

# Full build (~60s+, only when suspecting runtime/bundle issues)
npm run build   # next build runs compile+lint+types but stops at first type-error file; tsc shows all at once

# Database & External Services
# npm run supabase:local-update-types   # Generate TypeScript types
# npm run stripe:listen                 # Listen for Stripe webhooks
```

### Git/GitHub remotes

The `gh` command is available to interact with GitHub from the terminal, but **NEVER PUSH TO ANY BRANCH**. The user manages all 'write' git operations.
- `opensource` -> `enricoros/big-AGI` (public, default branch: `main`, MIT) - community issues/PRs/releases
- `private` -> `big-agi/big-agi-private` (private, default branch: `dev`) - main dev repo with `dev`->`staging`->`prod` pipeline

### Core Directory Structure

You are started from the root of the repository (i.e. where the git folder is or scripts should be run from).
**ISSUE ALL COMMANDS FROM THE ROOT, OMITTING 'cd' COMMANDS. DO NOT CHAIN CD AND OTHER COMMANDS**
**NEVER RUN COMPOUND `cd` COMMANDS LIKE `cd some-folder && command` - ONLY RUN `command` FROM THE ROOT, ALWAYS.**
The directory structure is as follows:

```
/app/api/      # Next.js App Router (API routes only, mostly -> /src/server/)
/pages/        # Next.js Pages Router (file-based, mostly -> /src/apps/)
/src/
├── apps/      # Feature applications (self-contained modules)
├── modules/   # Reusable business logic and integrations
├── common/    # Shared infrastructure and utilities
└── server/    # Backend API layer with tRPC
/kb/           # Knowledge base for modules, architectures
```

### Key Technologies

- **Frontend**: Next.js 15, React 18, Material-UI Joy, Emotion (CSS-in-JS)
- **State Management**: Zustand with localStorage/IndexedDB (single cell) persistence
- **API Layer**: tRPC with TanStack React Query for type-safe communication
- **Runtime**: Edge Runtime for AI operations, Node.js for data processing

### "Apps" Architecture Pattern

Each app in `/src/apps/` is a self-contained feature module:
- Main component (`App*.tsx`)
- Local state store (`store-app-*.ts`)
- Feature-specific components and layouts
- Runtime configurations

Example apps: `chat/`, `call/`, `beam/`, `draw/`, `personas/`, `settings-modal/`

### Modules Architecture Pattern

Modules in `/src/modules/` provide reusable business logic:
- **`aix/`** - AI communication framework for real-time streaming
- **`beam/`** - Multi-model AI reasoning system (scatter/gather pattern)
- **`blocks/`** - Content rendering (markdown, code, images, etc.)
- **`llms/`** - Language model abstraction supporting 20+ vendors

### Key Subsystems & Their Patterns

#### AIX - Real-time AI Communication
**Location**: `/src/modules/aix/`
**Pattern**: Client-server streaming architecture with provider abstraction

- **Client** -> tRPC -> **Server** -> **AI Providers**
- Handles streaming/non-streaming responses with batching and error recovery
- Particle-based streaming: `AixWire_Particles` -> `ContentReassembler` -> `DMessage` (sketched below)
- Provider-agnostic through adapter pattern (OpenAI, Anthropic, Gemini protocols)
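
To make the particle flow concrete, here is a minimal sketch of the idea - the particle and field names below are illustrative assumptions, not the actual `AixWire_Particles` wire format:

```typescript
// Hypothetical particle shapes (illustrative only, not the repo's wire types).
type StreamParticle =
  | { t: 'text'; text: string }                  // incremental text delta
  | { t: 'fn-call'; name: string; args: string } // tool invocation request
  | { t: 'end'; reason: 'done' | 'error' };      // stream termination

// A reassembler folds ordered particles into one growing message.
function reassemble(particles: StreamParticle[]): { text: string; done: boolean } {
  let text = '';
  let done = false;
  for (const p of particles) {
    switch (p.t) {
      case 'text':
        text += p.text; // append deltas in arrival order
        break;
      case 'fn-call':
        text += `\n[tool: ${p.name}]`; // placeholder rendering for tool calls
        break;
      case 'end':
        done = true;
        break;
    }
  }
  return { text, done };
}
```

The real reassembler is incremental and feeds UI updates as particles arrive; the batch version above only shows the folding logic.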

#### Beam - Multi-Model Reasoning
**Location**: `/src/modules/beam/`
**Pattern**: Scatter/Gather for parallel AI processing

- **Scatter**: Multiple models (rays) process input in parallel
- **Gather**: Fusion algorithms combine outputs
- Real-time UI updates via vanilla Zustand stores
- BeamStore per conversation via ConversationHandler

#### Conversation Management
**Location**: `/src/common/stores/chat/` and `/src/common/chat-overlay/`
**Pattern**: Overlay architecture with handler per conversation

- `ConversationHandler` orchestrates chat, beam, ephemerals
- Per-chat stores: `PerChatOverlayStore` + `BeamStore`
- Message structure: `DMessage` -> `DMessageFragment[]`
- Supports multi-pane with independent conversation states

#### Layout System ("Optima")

The Optima layout system provides:
- **Responsive design** adapting desktop/mobile
- **Drawer(left)/Toolbar/Panel(right)** composition
- **Portal-based rendering** for flexible component placement

Located in `/src/common/layout/optima/`

### Storage System

Big-AGI uses a local-first architecture with Zustand + IndexedDB:
- **Zustand** stores for in-memory state management
- **localStorage** for persistent settings/all storage (via Zustand persist middleware)
- **IndexedDB** for persistent chat-only storage (via Zustand persist middleware) on a single key-val cell
- **Local-first** architecture with offline capability

Key storage patterns (see the sketch after this list):
- Stores use `createIDBPersistStorage()` for IndexedDB persistence
- Version-based migrations handle data structure changes
- Partialize/merge functions control what gets persisted
- Rehydration logic repairs and upgrades data on load
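
A minimal sketch of how these patterns compose, using Zustand's public `persist` middleware. `createIDBPersistStorage()` is the repo's own adapter, so the standard `createJSONStorage` stands in for it here, and the store shape is invented for illustration:

```typescript
import { create } from 'zustand';
import { persist, createJSONStorage } from 'zustand/middleware';

interface ChatsState {
  conversations: { id: string; title: string }[];
  transientSearch: string; // UI-only, not worth persisting
  setSearch: (q: string) => void;
}

export const useChatsStore = create<ChatsState>()(persist(
  (set) => ({
    conversations: [],
    transientSearch: '',
    setSearch: (q) => set({ transientSearch: q }),
  }),
  {
    name: 'app-chats', // single key-value cell
    // assumption: the repo swaps this for its own createIDBPersistStorage()
    storage: createJSONStorage(() => localStorage),
    version: 2,
    // migrations upgrade older persisted shapes during rehydration
    migrate: (persisted: any, fromVersion) =>
      fromVersion < 2 ? { ...persisted, conversations: persisted.conversations ?? [] } : persisted,
    // persist only the durable slice, not transient UI state
    partialize: (state) => ({ conversations: state.conversations }),
  },
));
```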

Located in `/src/common/stores/` with stores like:
- `chat/store-chats.ts`: Conversations and messages
- `llms/store-llms.ts`: Model configurations

### State Management Patterns

1. **Global Stores** (Zustand with IndexedDB persistence)
   - `store-chats`: Conversations and messages
   - `store-llms`: Model configurations
   - `store-ux-labs`: UI preferences and labs features
   - **Zustand pattern**: Always wrap multi-property selectors with `useShallow` from `zustand/react/shallow` to prevent re-renders on reference changes (see the sketch after this list)

2. **Per-Instance Stores** (Vanilla Zustand)
   - `store-beam_vanilla`: Beam scatter/gather state
   - `store-perchat_vanilla`: Chat overlay state
   - `store-attachment-drafts_vanilla`: Attachment drafts
   - High-performance, no React integration

3. **Module Stores**
   - Feature-specific configuration and state
   - Example: `store-module-beam`, `store-module-t2i`
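
The `useShallow` rule from pattern 1, applied to the hypothetical store sketched earlier:

```typescript
import { useShallow } from 'zustand/react/shallow';

// Without useShallow, this selector returns a new object on every store change,
// so strict-equality checks fail and the component re-renders needlessly.
const { conversations, transientSearch } = useChatsStore(useShallow((s) => ({
  conversations: s.conversations,
  transientSearch: s.transientSearch,
})));
```

With `useShallow`, the hook compares the selected properties one by one instead of the wrapper object's identity, so the component only re-renders when a selected value actually changes.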

### User Flows & Interdependencies

#### Chat Message Flow
1. User input -> `Composer` -> `DMessage` creation
2. `ConversationHandler.messageAppend()` -> Store update
3. `_handleExecute()` / `ConversationHandler.executeChatMessages()` -> AIX client request
4. AIX streaming -> `ContentReassembler` -> UI updates
5. Zustand auto-persistence -> IndexedDB

#### Beam Multi-Model Flow
1. User triggers Beam -> `BeamStore.open()` state update
2. Scatter: Parallel `aixChatGenerateContent()` to N models
3. Real-time ray updates -> UI progress
4. Gather: User selects fusion -> Combined output
5. Result -> New message in conversation

### Development Patterns

#### TypeScript & Code Quality
- Type-safe through strict TypeScript interfaces
- Clear interface-first approach for modules and components
- Use latest TypeScript 5.9+ features
- Use forward-looking patterns to minimize future refactors (e.g., discriminated unions, `satisfies` operator, as const assertions)
- Type guards and exhaustive checks for robustness (see the sketch after this list)
- Type inference where possible
- Runtime validation with Zod schemas for API inputs/outputs (usually server-side, with the client importing as types the inferred types)
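
A small illustration of the discriminated-union plus exhaustive-check pattern - the `DLayerPart` type is a made-up example, not a repo type:

```typescript
type DLayerPart =
  | { kind: 'text'; text: string }
  | { kind: 'image'; url: string };

function partLabel(part: DLayerPart): string {
  switch (part.kind) {
    case 'text':
      return `text(${part.text.length} chars)`;
    case 'image':
      return `image(${part.url})`;
    default: {
      // Exhaustive check: adding a new variant to DLayerPart
      // turns this into a compile-time error.
      const _exhaustive: never = part;
      return _exhaustive;
    }
  }
}
```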

#### Module Integration
- Modules register with central registries (e.g., `vendors.registry.ts`)
- Configuration objects define module behavior

#### API Patterns
- **tRPC routers** for type-safe API endpoints (generic sketch after this list)
- **Zod schemas** for runtime validation
- **tRPC procedures middleware** for authorization and logging (authorization is on a httpOnly cookie)
- **Edge functions** for performance-critical operations
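
A generic sketch of the tRPC + Zod pattern; the router and procedure names are invented for illustration and do not mirror the actual routers:

```typescript
import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

// Zod validates at runtime on the server; the client imports only the inferred types.
const listModelsInput = z.object({ vendorId: z.string().min(1) });

export const exampleRouter = t.router({
  listModels: t.procedure
    .input(listModelsInput)
    .query(({ input }) => {
      // ...fetch and return the vendor's model list here
      return { vendorId: input.vendorId, models: [] as string[] };
    }),
});

export type ExampleRouter = typeof exampleRouter; // type-only import on the client
```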

#### Security Considerations
- API keys in environment variables only (server-side); on the client they're in localStorage for now, but we want to move away from this
- XSS protection through proper content escaping

#### Writing Style
- **Never use emdashes (—).** Use normal dashes (-) instead, in all generated text, code comments, and documentation.


## Common Development Tasks

### Testing & Quality
- Run `npm run lint` before committing
- Type-check with `tsc --noEmit`
- Test critical user flows manually

### Debugging Storage Issues
- Check IndexedDB: DevTools -> Application -> IndexedDB -> `app-chats`
- Monitor Zustand state: Use Zustand DevTools
- Check migration logs in console during rehydration


## Server Architecture

The server uses a split architecture with two tRPC routers:

### Edge Network (`trpc.router-edge`)
Distributed edge runtime for low-latency AI operations:
- **AIX** [1] - AI streaming and communication
- **LLM Routers** [1] - Vendor-specific operations such as list models (OpenAI, Anthropic, Gemini, Ollama)
- **Speex** [1] - Unified TTS router (ElevenLabs, Inworld, and other TTS vendors)
- **External Services** - Google Search, YouTube transcripts

[1]: also supports client-side fetch (CSF) via client-side inclusion (rebundling with stubs),
for direct browser-to-API communication when possible (CORS), to reduce latency and network barriers

Located at `/src/server/trpc/trpc.router-edge.ts`

### Cloud Network (`trpc.router-cloud`)
Centralized server for data processing operations:
- **Browse** - Web scraping and content extraction
- **Trade** - Import/export functionality (ChatGPT, markdown, JSON)

Located at `/src/server/trpc/trpc.router-cloud.ts`

**Key Pattern**: Edge runtime for AI (fast, distributed), Cloud runtime for data ops (centralized, Node.js)
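
For reference, the standard Next.js way to pin a route handler to a runtime, which is how this split is typically expressed - a generic Next.js App Router example, not repo code:

```typescript
// app/api/example/route.ts - illustrative path, not an actual route
export const runtime = 'edge'; // 'edge' for the AI/streaming side, 'nodejs' for data ops

export async function GET(): Promise<Response> {
  // Edge handlers run on the distributed runtime, close to the user
  return Response.json({ ok: true });
}
```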

@kb/KB.md

@kb/vision-inlined.md

As a side note, the product tiers (independent, non-VC-funded) are: **Open** (self-host, MIT) · **Free** (big-agi.com) · **Pro** (paid, includes Sync + backup). All tiers use the user's own API keys.
@@ -1,9 +1,7 @@
# syntax=docker/dockerfile:1
# check=skip=CopyIgnoredFile

# Base
FROM node:24-alpine AS base
ENV NEXT_TELEMETRY_DISABLED=1
FROM node:18-alpine AS base
ENV NEXT_TELEMETRY_DISABLED 1


# Dependencies
FROM base AS deps
@@ -13,11 +11,8 @@ WORKDIR /app
COPY package*.json ./
COPY src/server/prisma ./src/server/prisma

# link ssl3 for latest Alpine
RUN sh -c '[ ! -e /lib/libssl.so.3 ] && ln -s /usr/lib/libssl.so.3 /lib/libssl.so.3 || echo "Link already exists"'

# Install dependencies, including dev (release builds should use npm ci)
ENV NODE_ENV=development
ENV NODE_ENV development
RUN npm ci


@@ -25,37 +20,20 @@ RUN npm ci
FROM base AS builder
WORKDIR /app

# Deployment type marker
ENV NEXT_PUBLIC_DEPLOYMENT_TYPE=docker

# Optional build version arguments at build time
ARG NEXT_PUBLIC_BUILD_HASH
ENV NEXT_PUBLIC_BUILD_HASH=${NEXT_PUBLIC_BUILD_HASH}
ARG NEXT_PUBLIC_BUILD_REF_NAME
ENV NEXT_PUBLIC_BUILD_REF_NAME=${NEXT_PUBLIC_BUILD_REF_NAME}

# Optional argument to configure GA4 at build time (see: docs/deploy-analytics.md)
ARG NEXT_PUBLIC_GA4_MEASUREMENT_ID
ENV NEXT_PUBLIC_GA4_MEASUREMENT_ID=${NEXT_PUBLIC_GA4_MEASUREMENT_ID}

# Optional argument to configure PostHog at build time (see: docs/deploy-analytics.md)
ARG NEXT_PUBLIC_POSTHOG_KEY
ENV NEXT_PUBLIC_POSTHOG_KEY=${NEXT_PUBLIC_POSTHOG_KEY}

# Optional argument to configure Google Drive Picker at build time (can reuse AUTH_GOOGLE_ID value)
ARG NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID
ENV NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID=${NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID}

# Copy development deps and source
COPY --from=deps /app/node_modules ./node_modules
COPY . .

# Build the application
ENV NODE_ENV=production
ENV NODE_ENV production
RUN npm run build

# Reduce installed packages to production-only
RUN npm prune --omit=dev
RUN npm prune --production


# Runner
@@ -63,23 +41,18 @@ FROM base AS runner
WORKDIR /app

# As user
RUN addgroup --system --gid 1001 nodejs \
 && adduser --system --uid 1001 nextjs \
 && apk add --no-cache openssl
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy Built app
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/src/server/prisma ./src/server/prisma
# Instead of `COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next`, we only extract some parts, excluding .next/cache which is build time only:
COPY --from=builder --chown=nextjs:nodejs /app/.next/BUILD_ID ./.next/
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/.next/server ./.next/server
COPY --from=builder --chown=nextjs:nodejs /app/.next/types ./.next/types
COPY --from=builder --chown=nextjs:nodejs /app/.next/*.json ./.next/

# Minimal ENV for production
ENV NODE_ENV=production
ENV NODE_ENV production
ENV PATH $PATH:/app/node_modules/.bin

# Run as non-root user
USER nextjs
@@ -88,4 +61,4 @@ USER nextjs
EXPOSE 3000

# Start the application
CMD ["/app/node_modules/.bin/next", "start"]
CMD ["next", "start"]
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2023-2026 Enrico Ros
Copyright (c) 2023-2024 Enrico Ros

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

@@ -1,212 +1,31 @@
<div align="center">
# BIG-AGI 🧠✨

<img width="256" height="256" alt="Big-AGI Logo" src="https://big-agi.com/assets/logo-bright-github.svg" />
Welcome to big-AGI, the AI suite for professionals who need function, form,
simplicity, and speed. Powered by the latest models from 12 vendors and
open-source servers, `big-AGI` offers best-in-class Chats,
[Beams](https://github.com/enricoros/big-AGI/issues/470),
and [Calls](https://github.com/enricoros/big-AGI/issues/354) with AI personas,
visualizations, coding, drawing, side-by-side chatting, and more -- all wrapped in a polished UX.

<h1><a href="https://big-agi.com">Big-AGI</a></h1>
Stay ahead of the curve with big-AGI. 🚀 Pros & Devs love big-AGI. 🤖

[](https://big-agi.com)
[](https://github.com/enricoros/big-AGI/pkgs/container/big-agi)
[](https://vercel.com/new/clone?repository-url=https://github.com/enricoros/big-agi)
[](https://discord.gg/MkH4qj2Jp9)
<br/>
[](https://github.com/enricoros/big-agi/commits)
[](https://github.com/enricoros/big-AGI/pkgs/container/big-agi)
[](https://github.com/enricoros/big-AGI/graphs/contributors)
[](https://opensource.org/licenses/MIT)
<br/>
[](https://big-agi.com)

[](https://github.com/enricoros/big-agi/issues/new?template=ai-triage.yml)
Or fork & run on Vercel

[//]: # ([](https://stats.uptimerobot.com/59MXcnmjrM))
[//]: # ([](https://github.com/enricoros/big-AGI/releases/latest))
[//]: # ()
[//]: # ([](#))
[//]: # ([](https://x.com/enricoros))
[](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI&env=OPENAI_API_KEY&envDescription=Backend%20API%20keys%2C%20optional%20and%20may%20be%20overridden%20by%20the%20UI.&envLink=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI%2Fblob%2Fmain%2Fdocs%2Fenvironment-variables.md&project-name=big-AGI)

</div>
## 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2) 👉 [installation](docs/installation.md) 👉 [documentation](docs/README.md)

<br/>
> Note: bigger better features (incl. Beam-2) are being cooked outside of `main`.

# Big-AGI Open 🧠
[//]: # (big-AGI is an open book; see the **[ready-to-ship and future ideas](https://github.com/users/enricoros/projects/4/views/2)** in our open roadmap)

This is the open-source foundation of **Big-AGI**, ___the multi-model AI workspace for experts___.
### What's New in 1.16.1 · May 13, 2024 (minor release, models support)

Big-AGI is the multi-model AI workspace for experts: Engineers architecting systems. Founders making decisions. Researchers validating hypotheses.
If you need to think broader, decide faster, and build with confidence, you need Big-AGI.
- Support for the new OpenAI GPT-4o 2024-05-13 model

It comes packed with **world-class features** like Beam, and is praised for its **best-in-class AI chat UX**.
**As an independent, non-VC-funded project, Pro subscriptions at $10.99/mo fund development for everyone, including the free and open-source tiers.**


[](https://big-agi.com/beam)
[](https://big-agi.com/inspector)

### What makes Big-AGI different:

**Intelligence**: with [Beam & Merge](https://big-agi.com/beam) for multi-model de-hallucination, native search, and bleeding-edge AI models like Opus 4.6, Nano Banana Pro, Kimi K2.5 or GPT 5.4 -
**Control**: with personas, data ownership, requests inspection, unlimited usage with API keys, and *no vendor lock-in* -
and **Speed**: with a local-first, over-powered, zero-latency, madly optimized web app.

<table>
  <tr>
    <td align="center" width="25%">
      <b>🧠 Intelligence</b><br/>
      <img src="https://img.shields.io/badge/Multi--Model-Trust-4285F4?style=for-the-badge" alt="Multi-Model"/>
    </td>
    <td align="center" width="25%">
      <b>✨ Experience</b><br/>
      <img src="https://img.shields.io/badge/Clean-UX-34A853?style=for-the-badge" alt="Clean UX"/>
    </td>
    <td align="center" width="25%">
      <b>⚡ Performance</b><br/>
      <img src="https://img.shields.io/badge/Zero-Latency-EA4335?style=for-the-badge" alt="Zero Latency"/>
    </td>
    <td align="center" width="25%">
      <b>🔒 Control</b><br/>
      <img src="https://img.shields.io/badge/No-Lock--in-FBBC04?style=for-the-badge" alt="No Lock-in"/>
    </td>
  </tr>
  <tr>
    <td align="center" valign="top">
      Beam & Merge<br/>
      No context junk<br/>
      Purest AI outputs
    </td>
    <td align="center" valign="top">
      Flow-state interface<br/>
      Highly customizable<br/>
      Best-in-class UX
    </td>
    <td align="center" valign="top">
      Local-first<br/>
      Highly parallel<br/>
      Madly optimized
    </td>
    <td align="center" valign="top">
      No vendor lock-in<br/>
      Your API keys<br/>
      AI Inspector
    </td>
  </tr>
</table>

### Who uses Big-AGI:
Loved by engineers, founders, researchers, self-hosters, and IT departments for its power, reliability, and transparency.

<img width="830" height="370" alt="image" src="https://github.com/user-attachments/assets/513c4f77-0970-4a56-b23b-1416c8246174" />

Choose Big-AGI because you don't need another clone or slop - you need an AI tool that scales with you.

### Show me a screenshot:
Sure - here is a real-world screengrab as I'm writing this, while running a Beam to extract SVG from an image with Sonnet 4.5, Opus 4.1, GPT 5.1, Gemini 2.5 Pro, Nano Banana, etc.
<img alt="Real-world screen capture as of Nov 15 2025, 2am" src="https://github.com/user-attachments/assets/853f4160-27cb-4ac9-826b-402f1e63d4af" />

## Get Started

| Tier | Best For | What You Get | Setup |
|------------------------------------------------------|-------------------|---------------------------------------------------------------|-------------|
| Big-AGI Open (self-host) | **IT** | First to get new models support. Maximum control and privacy. | 5-30 min |
| [big-agi.com](https://big-agi.com) Free | **Everyone** | Full core experience, improved Beam, new Personas, best UX. | **2 min**\* |
| **[big-agi.com](https://big-agi.com) Pro** $10.99/mo | **Professionals** | Everything + **Sync** across unlimited devices + 1GB storage | **2 min**\* |

\*: **Configuration requires your API keys**. *Big-AGI does not charge for model usage or limit your access*.
**Why Pro?** As an independent project, Pro subscriptions fund all development. Early subscribers shape the roadmap directly.

[](https://big-agi.com)

**Self-host and developers** (full control)
- Develop locally or self-host with Docker on your own infrastructure – [guide](docs/installation.md)
- Or fork & run on Vercel:
  [](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI&env=OPENAI_API_KEY&envDescription=Backend%20API%20keys%2C%20optional%20and%20may%20be%20overridden%20by%20the%20UI.&envLink=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI%2Fblob%2Fmain%2Fdocs%2Fenvironment-variables.md&project-name=big-AGI)

[//]: # (**For the latest Big-AGI:**)

[//]: # (- [**Big-AGI Open**](https://github.com/enricoros/big-AGI/tree/main) - Open Source, latest models and features (main branch))

[//]: # (- [**Big-AGI Pro**](https://big-agi.com) - Hosted with Cloud Sync)

---

## Our Philosophy

We're an independent, non-VC-funded project with a simple belief: **AI should elevate you, not replace you**.

This is why we built Big-AGI to be **local-first**, madly optimized to 0-latency, launched multi-model first to
defeat hallucinations, designed Beam around the **humans in the loop**, re-wrote frameworks and abstractions
so you **are not vendor locked-in**, and obsessed over a powerful UI that works, just works.

NOTE: this is a powerful tool - if you need a toy UI or clone, this ain't it.


---

## Release Notes

👉 **[See the Live Release Notes](https://big-agi.com/changes)**
- Open 2.0.4: **Hyper Params** **Opus 4.6**, **GPT-5.4**, **Gemini 3.1 Pro**, AWS Bedrock, parameter accuracy, Anthropic continuation/Fast mode
- Open 2.0.3: **Red Carpet** **Kimi K2.5**, **Gemini 3 Flash**, **GPT 5.2**, Google Drive, Inworld, Novita.ai, Speech/UX improvements
- Open 2.0.2: **Speex** multi-vendor speech synthesis, **Opus 4.5**, **Gemini 3 Pro**, **Nano Banana Pro**, **Grok 4.1**, **GPT-5.1**, **Kimi K2** + 280 fixes

### What's New in 2.0 · Oct 31, 2025 · Open

- **Big-AGI Open** is ready and more productive and faster than ever, with:
  - **Beam 2**: multi-modal, program-based, follow-ups, save presets
  - Top-notch AI models support including **agentic models** and **reasoning models**
  - **Image Generation** and editing with Nano Banana and gpt-image-1
  - **Web Search** with citations for supported models
  - **UI** & Mobile UI overhaul with peeking and side panels
  - And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized

<img width="830" height="385" alt="image" src="https://github.com/user-attachments/assets/ad52761d-7e3f-44d8-b41e-947ce8b4faa1" />

#### **Open** links: 👉 [changelog](https://big-agi.com/changes) 👉 [installation](docs/installation.md) 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2) 👉 [documentation](docs/README.md)
|
||||
|
||||
**For teams and institutions:** Need shared prompts, SSO, or managed deployments? Reach out at enrico@big-agi.com. We're actively collecting requirements from research groups and IT departments.
|
||||
|
||||
<details>
|
||||
<summary>5,000 Commits Milestone</summary>
|
||||
|
||||
Hit 5k commits last week. That's a lot of code.
|
||||
|
||||
Recent work has been intense:
|
||||
- Chain of thought reasoning across multiple LLMs: **OpenAI o3** and o1, **DeepSeek R1**, **Gemini 2.0 Flash Thinking**, and more
|
||||
- Beam is real - ~35% of our users run it daily to compare models
|
||||
- New AIX framework lets us scale features we couldn't before
|
||||
- UI is faster than ever. Like, terminal-fast
|
||||
|
||||
The new architecture is solid and the speed improvements are real.
|
||||
|
||||

|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>What's New in 1.16.1...1.16.13 · (patch releases)</summary>
|
||||
|
||||
- 1.16.13: Docker fix ([#840](https://github.com/enricoros/big-AGI/issues/840))
|
||||
- 1.16.12: Dockerfile update ([#840](https://github.com/enricoros/big-AGI/issues/840))
|
||||
- 1.16.11: v1 final release, documentation updates
|
||||
- 1.16.10: OpenRouter models support
|
||||
- 1.16.9: Docker Gemini fix, R1 models support
|
||||
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
|
||||
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
|
||||
- 1.16.6: Groq support for Llama 3.1 models
|
||||
- 1.16.5: GPT-4o Mini support
|
||||
- 1.16.4: 8192 tokens support for Claude 3.5 Sonnet
|
||||
- 1.16.3: Anthropic Claude 3.5 Sonnet model support
|
||||
- 1.16.2: Improve web downloads, as text, markdown, or HTML
|
||||
- 1.16.2: Proper support for Gemini models
|
||||
- 1.16.2: Added the latest Mistral model
|
||||
- 1.16.2: Tokenizer support for gpt-4o
|
||||
- 1.16.2: Updates to Beam
|
||||
- 1.16.1: Support for the new OpenAI GPT-4o 2024-05-13 model
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>What's New in 1.16.0 · May 9, 2024 · Crystal Clear</summary>
|
||||
### What's New in 1.16.0 · May 9, 2024 · Crystal Clear
|
||||
|
||||
- [Beam](https://big-agi.com/blog/beam-multi-model-ai-reasoning) core and UX improvements based on user feedback
|
||||
- Chat cost estimation 💰 (enable it in Labs / hover the token counter)
|
||||
@@ -217,20 +36,14 @@ The new architecture is solid and the speed improvements are real.
|
||||
- Models update: **Anthropic**, **Groq**, **Ollama**, **OpenAI**, **OpenRouter**, **Perplexity**
|
||||
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more [#517](https://github.com/enricoros/big-AGI/issues/517), [507](https://github.com/enricoros/big-AGI/pull/507)
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>3,000 Commits Milestone · April 7, 2024</summary>
|
||||
#### 3,000 Commits Milestone · April 7, 2024
|
||||
|
||||

|
||||
|
||||
- 🥇 Today we <b>celebrate commit 3000</b> in just over one year, and going stronger 🚀
|
||||
- 📢️ Thanks everyone for your support and words of love for Big-AGI, we are committed to creating the best AI experiences for everyone.
|
||||
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>What's New in 1.15.0 · April 1, 2024 · Beam</summary>
|
||||
### What's New in 1.15.0 · April 1, 2024 · Beam
|
||||
|
||||
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
|
||||
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
|
||||
@@ -240,8 +53,6 @@ The new architecture is solid and the speed improvements are real.
|
||||
- 1.15.1: Support for Gemini Pro 1.5 and OpenAI Turbo models
|
||||
- Beast release, over 430 commits, 10,000+ lines changed: [release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.15.0), and changes [v1.14.1...v1.15.0](https://github.com/enricoros/big-AGI/compare/v1.14.1...v1.15.0)
|
||||
|
||||
</details>

<details>

<summary>What's New in 1.14.1 · March 7, 2024 · Modelmorphic</summary>

@@ -249,7 +60,7 @@ The new architecture is solid and the speed improvements are real.

- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
- Performance optimizations: runs [much faster](https://x.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements

@@ -312,86 +123,92 @@ https://github.com/enricoros/big-AGI/assets/1590910/a6b8e172-0726-4b03-a5e5-10cf

</details>

For full details and former releases, check out the [archived versions changelog](docs/changelog.md).

## 👉 Key Features ✨

Delightful UX with the latest models and exclusive features like Beam for **multi-model AI validation**.

> [](https://big-agi.com/beam)

|  |  |  |  |  |
|---|---|---|---|---|
| **Chat**<br/>**Call**<br/>**Beam**<br/>**Draw**, ... | Local & Cloud<br/>Open & Closed<br/>Cheap & Heavy<br/>Google, Mistral, ... | Attachments<br/>Diagrams<br/>Multi-Chat<br/>Mobile-first UI | Stored Locally<br/>Easy self-Host<br/>Local actions<br/>Data = Gold | AI Personas<br/>Voice Modes<br/>Screen Capture<br/>Camera + OCR |

### AI Models & Vendors

Configure 100s of AI models from 20+ providers:

| **AI models** | _supported vendors_ |
|:--------------------|:--------------------|
| Opensource Servers | [LocalAI](https://localai.io/) · [Ollama](https://ollama.com/) |
| Local Servers | [LM Studio](https://lmstudio.ai/) (non-open) |
| Multimodal services | [Anthropic](https://anthropic.com) · [AWS Bedrock](https://aws.amazon.com/bedrock/) · [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) · [Google Gemini](https://ai.google.dev/) · [OpenAI](https://platform.openai.com/docs/overview) |
| LLM services | [Alibaba](https://www.alibabacloud.com/en/product/modelstudio) · [DeepSeek](https://deepseek.com) · [Groq](https://wow.groq.com/) · [Mistral](https://mistral.ai/) · [Moonshot](https://www.moonshot.cn/) · [OpenPipe](https://openpipe.ai/) · [OpenRouter](https://openrouter.ai/) · [Perplexity](https://www.perplexity.ai/) · [Together AI](https://www.together.ai/) · [xAI](https://x.ai/) · [Z.ai](https://z.ai/) |
| OpenAI-compatible | Any OpenAI-compatible endpoint - models, pricing, and capabilities are auto-detected |
| Image services | OpenAI · Google Gemini (Nano Banana) · LocalAI |
| Speech services | [ElevenLabs](https://elevenlabs.io) · [Inworld](https://inworld.ai) · [OpenAI TTS](https://platform.openai.com/docs/guides/text-to-speech) · LocalAI · Browser (Web Speech API) |

### Additional Integrations

Add extra functionality with these integrations:

| **More** | _integrations_ |
|:--------------|:---------------|
| Web Browse | [Browserless](https://www.browserless.io/) · [Puppeteer](https://pptr.dev/)-based |
| Web Search | [Google CSE](https://programmablesearchengine.google.com/) |
| Code Editors | [CodePen](https://codepen.io/pen/) · [StackBlitz](https://stackblitz.com/) · [JSFiddle](https://jsfiddle.net/) |
| Observability | [Helicone](https://www.helicone.ai) |

[//]: # (- [x] **Flow-state UX** for uncompromised productivity)

---

[//]: # (- [x] **AI Personas**: Tailor your AI interactions with customizable personas)

[//]: # (- [x] **Sleek UI/UX**: A smooth, intuitive, and mobile-responsive interface)

[//]: # (- [x] **Efficient Interaction**: Voice commands, OCR, and drag-and-drop file uploads)

[//]: # (- [x] **Privacy First**: Self-host and use your own API keys for full control)

[//]: # (- [x] **Advanced Tools**: Execute code, import PDFs, and summarize documents)

[//]: # (- [x] **Seamless Integrations**: Enhance functionality with various third-party services)

[//]: # (- [x] **Open Roadmap**: Contribute to the progress of big-AGI)

<br/>

## 🚀 Installation

To get started with big-AGI, follow our comprehensive [Installation Guide](docs/installation.md).
The guide covers the various installation options - whether you're spinning it up on your
local computer, deploying on Vercel or Cloudflare, or rolling it out through Docker.

Whether you're a developer, system integrator, or enterprise user, you'll find step-by-step instructions
to set up big-AGI quickly and easily.

[](docs/installation.md)

Or bring your API keys and jump straight into the free instance on [big-AGI.com](https://big-agi.com).

---

<br/>

## 👋 Community & Contributing

### Connect

[](https://discord.gg/MkH4qj2Jp9)

- [ ] 📢️ [**Chat with us** on Discord](https://discord.gg/MkH4qj2Jp9)
- [ ] ⭐ [**Give us a star** on GitHub](https://github.com/enricoros/big-agi) if Big-AGI is useful to you
- [ ] 🚀 **Do you like code?** You'll love this gem of a project! [_Pick up a task!_](https://github.com/users/enricoros/projects/4/views/4) - _easy_ to _pro_
- [ ] 💡 Got a feature suggestion? [_Add your roadmap ideas_](https://github.com/enricoros/big-agi/issues/new?&template=roadmap-request.md)
- [ ] ✨ [Deploy](docs/installation.md) your [fork](docs/customizations.md) for your friends and family, or [customize it for work](docs/customizations.md)

### Contribute

**🤖 AI-Powered Issue Assistance**

When you open an issue, our custom AI triage system (powered by [Claude Code](https://github.com/anthropics/claude-code-action) with Big-AGI architecture documentation) analyzes it, searches the codebase, and provides solutions - typically within 30 minutes. We've trained the system on our modules and subsystems, so it handles most issues effectively. Your feedback drives development!

[](https://github.com/enricoros/big-agi/issues/new?template=ai-triage.yml)
[](https://github.com/enricoros/big-agi/issues/new?&template=roadmap-request.md)
[](https://github.com/users/enricoros/projects/4/views/4)
[](docs/customizations.md)
[](https://github.com/users/enricoros/projects/4/views/2)

#### Contributors

<a href="https://github.com/enricoros/big-agi/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=enricoros/big-agi&max=48&columns=12" />
</a>

---

## License

MIT License · [Third-Party Notices](src/modules/3rdparty/THIRD_PARTY_NOTICES.md)

**2023-2026** · [Enrico Ros](https://www.enricoros.com) × [Token Fabrics](https://www.tokenfabrics.com)

@@ -1,39 +0,0 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';

import { appRouterCloud } from '~/server/trpc/trpc.router-cloud';
import { createTRPCFetchContext } from '~/server/trpc/trpc.server';
import { posthogServerSendException } from '~/server/posthog/posthog.server';

const handlerNodeRoutes = (req: Request) => fetchRequestHandler({
  endpoint: '/api/cloud',
  router: appRouterCloud,
  req,
  createContext: createTRPCFetchContext,
  onError: async function({ path, error, type, ctx }) {

    // -> DEV error logging
    if (process.env.NODE_ENV === 'development')
      console.error(`❌ tRPC-cloud failed on ${path ?? 'unk-path'}: ${error.message}`);

    // -> Capture node errors
    await posthogServerSendException(error, undefined, {
      domain: 'trpc-onerror',
      runtime: 'nodejs',
      endpoint: path ?? 'unknown',
      method: req.method,
      url: req.url,
      additionalProperties: {
        error_code: error.code,
        error_type: type,
      },
    });
  },
});

// NOTE: the following statement breaks the build on non-pro deployments, and conditionals don't work either
// so we resorted to raising the timeout from 10s to 60s in the vercel.json file instead
// export const maxDuration = 60;
export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';
export { handlerNodeRoutes as GET, handlerNodeRoutes as POST };

@@ -1,20 +0,0 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';

import { appRouterEdge } from '~/server/trpc/trpc.router-edge';
import { createTRPCFetchContext } from '~/server/trpc/trpc.server';

const handlerEdgeRoutes = (req: Request) => fetchRequestHandler({
  endpoint: '/api/edge',
  router: appRouterEdge,
  req,
  createContext: createTRPCFetchContext,
  onError:
    process.env.NODE_ENV === 'development'
      ? ({ path, error }) => console.error(`\n❌ tRPC-edge failed on ${path ?? 'unk-path'}: ${error.message}`)
      : undefined,
});

// NOTE: we don't set maxDuration explicitly here - however we set it in the Vercel project settings, raising to the limit of 300s
// export const maxDuration = 60;
export const runtime = 'edge';
export { handlerEdgeRoutes as GET, handlerEdgeRoutes as POST };

@@ -0,0 +1,2 @@
export const runtime = 'edge';
export { elevenLabsHandler as POST } from '~/modules/elevenlabs/elevenlabs.server';

@@ -0,0 +1,2 @@
export const runtime = 'edge';
export { llmStreamingRelayHandler as POST } from '~/modules/llms/server/llm.server.streaming';

@@ -0,0 +1,19 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';

import { appRouterEdge } from '~/server/api/trpc.router-edge';
import { createTRPCFetchContext } from '~/server/api/trpc.server';

const handlerEdgeRoutes = (req: Request) =>
  fetchRequestHandler({
    router: appRouterEdge,
    endpoint: '/api/trpc-edge',
    req,
    createContext: createTRPCFetchContext,
    onError:
      process.env.NODE_ENV === 'development'
        ? ({ path, error }) => console.error(`❌ tRPC-edge failed on ${path ?? '<no-path>'}: ${error.message}`)
        : undefined,
  });

export const runtime = 'edge';
export { handlerEdgeRoutes as GET, handlerEdgeRoutes as POST };

@@ -0,0 +1,23 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';

import { appRouterNode } from '~/server/api/trpc.router-node';
import { createTRPCFetchContext } from '~/server/api/trpc.server';

const handlerNodeRoutes = (req: Request) =>
  fetchRequestHandler({
    router: appRouterNode,
    endpoint: '/api/trpc-node',
    req,
    createContext: createTRPCFetchContext,
    onError:
      process.env.NODE_ENV === 'development'
        ? ({ path, error }) => console.error(`❌ tRPC-node failed on ${path ?? '<no-path>'}: ${error.message}`)
        : undefined,
  });

export const runtime = 'nodejs';
// NOTE: the following statement breaks the build on non-pro deployments, and conditionals don't work either
// so we resorted to raising the timeout from 10s to 25s in the vercel.json file instead
// export const maxDuration = 25;
export const dynamic = 'force-dynamic';
export { handlerNodeRoutes as GET, handlerNodeRoutes as POST };
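
For orientation, a sketch of where these handlers are presumed to be mounted under the Next.js App Router - the `trpc-node` path appears verbatim in the Cloudflare note of the changelog further below, while the `trpc-edge` path is inferred from its endpoint string:

```bash
# Assumed file layout for the two tRPC route handlers above (App Router conventions):
#   app/api/trpc-edge/[trpc]/route.ts  -> serves '/api/trpc-edge' on the Edge runtime
#   app/api/trpc-node/[trpc]/route.ts  -> serves '/api/trpc-node' on the Node.js runtime
find app/api -name route.ts   # lists the mounted handlers in a checkout
```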

@@ -1,6 +1,8 @@
# Very simple docker-compose file to run the app on http://localhost:3000 (or http://127.0.0.1:3000).
#
# For more examples, such as running big-AGI alongside a web browsing service, see the `docs/docker` folder.

version: '3.9'

services:
  big-agi:

@@ -9,3 +11,4 @@ services:

      - "3000:3000"
    env_file:
      - .env
    command: [ "next", "start", "-p", "3000" ]
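
A typical way to use this compose file, assuming an `.env` file with your API keys sits next to it:

```bash
docker compose up -d            # start big-AGI on http://localhost:3000
docker compose logs -f big-agi  # follow the container logs
docker compose down             # stop and remove the container
```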

@@ -1,74 +0,0 @@
---
unlisted: true
---

# AIX dispatch server - API features comparison

This is updated as of 2024-07-09, and includes the latest features and capabilities of the three major AI APIs: Anthropic, Gemini, and OpenAI.
The comparison covers a wide range of features, including function calling, vision, system instructions, etc.

| Feature Category | Specific Feature | Anthropic | Gemini | OpenAI |
|---|---|---|---|---|
| **Message Structure** | | | | |
| | Role types | user, assistant | user, model | user, assistant, system, tool |
| | Named participants | No | No | Yes |
| | Content array | Yes | Yes | Yes |
| **Content Types and Multimodal Support** | | | | |
| | Text generation | Yes | Yes | Yes |
| | Image understanding | Yes | Yes | Yes |
| | Audio processing | No | **Yes** | No |
| | Video processing | No | **Yes** | No |
| **Image Handling** | | | | |
| | Supported formats | JPEG, PNG, GIF, WebP | JPEG, PNG, WebP, HEIC, HEIF | PNG, JPEG, WebP, non-animated GIF |
| | Max image size | 5MB per image | (20MB per prompt) | 20MB per image |
| | Image detail level | N/A | N/A | **Low, high, auto** |
| | Image resolution | max: 1568x1568 | min: 768x768, max: 3072x3072 | min: 512x512, max: 2048x2048 |
| | Token calculation for images | (width * height)/750; max 1,600 | 258 tokens | 85 + 170 * {patches} |
| | Image retention | Deleted after processing | Not specified | Deleted after processing |
| **Audio and Video Handling** | | | | |
| | Audio formats | N/A | WAV, MP3, AIFF, AAC, OGG, FLAC | N/A |
| | Video formats | N/A | MP4, MPEG, MOV, AVI, MPG, WebM, WMV, 3GPP | N/A |
| **System Instructions and Tool Use** | | | | |
| | System instructions | Yes (array of text blocks) | Yes (parts array) | Yes (as system message) |
| **Function/Tool Handling** | | | | |
| | Parallel tool calls | No | No | **Yes** |
| | Tool Declaration | Defined in `tools` array | Defined in `tools` array | Defined in `tools` array |
| | FC name restrictions | Yes | Yes (max 63 chars) | Yes (max 64 chars) |
| | FC declaration | name, description, input_schema | name, description, parameters | name, description, parameters |
| | FC options structure | JSON Schema for input | Object with properties | JSON Schema for parameters |
| | FC Force invocation | Via `tool_choice` parameter | Via `toolConfig` parameter | Via `tool_choice` parameter |
| | FC Model invocation | Model generates a `tool_use` block with predicted parameters | Generates a `functionCall` part with predicted parameters | Generates a message.`tool_calls` item with predicted arguments |
| | FC Execution | Client-side | Client-side | Client-side |
| | FC Result injection | Client appends a `user` message with a `tool_result` content block | Client appends a `function` message with `functionResponse` part | Client sends a new `tool` message with `tool_call_id` and `content` |
| | Built-in Code execution | No | **Yes** | No |
| | Tool use with vision | Yes | Yes | Yes |
| **Generation Configuration** | | | | |
| | temperature | Yes | Yes | Yes |
| | max_tokens | Yes | Yes | Yes |
| | stop_sequences | Yes | Yes | Yes |
| | top_k | Yes | Yes | **No** |
| | top_p | Yes | Yes | Yes |
| | seed | No | No | **Yes** |
| | Multiple candidates | No | No | Yes (with 'n' parameter, breaks streaming?) |
| **Streaming and Response Structure** | | | | |
| | Streaming support | Yes | Yes | Yes |
| | Streaming initiation | stream=true | streamGenerateContent path | stream=true |
| | Streaming event types | **Multiple specific types** | Not specified | Single delta type |
| | Response container | content (array) | candidates (array) | choices (array) |
| **Usage Metrics and Error Handling** | | | | |
| | Token counts | Yes | Yes | Yes |
| | Detailed token breakdown | input, output | prompt, cached, candidates, total | prompt, completion, total |
| | Usage in stream | No | No | **Optional** |
| | Error handling in response | Not specified | Not specified | **Yes (undocumented)** |
| | Error handling in stream | Not specified | Not specified | **Yes (undocumented)** |
| **Advanced Features** | | | | |
| | JSON mode | **Partial (via structured prompts)** | **Yes (responseMimeType)** | **Yes** |
| | Output consistency techniques | **Yes (multiple methods)** | Not specified | Not specified |
| | Logprobs | No | No | **Yes (disabled in schema)** |
| | System fingerprint | No | No | **Yes** |
| | Semantic caching | No | **Yes** | No |
| | Assistant prefill | **Yes** | No | No |
| | Preferred formatting | **XML tags, JSON** | Not specified | Markdown |
| **Safety and Compliance** | | | | |
| | Safety settings in request | **Stop sequences** | **Detailed category-based** | **Moderation API** |
| | Safety feedback in response | Yes | Yes | Not specified |
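
As a worked example of the image-token row above: under the listed formulas, a 1024×1024 image costs Anthropic roughly 1024 × 1024 / 750 ≈ 1,398 tokens (below its 1,600-token cap), a flat 258 tokens on Gemini, and 85 + 170 × {patches} tokens on OpenAI, where the patch count depends on how the image is tiled at the chosen detail level.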

@@ -1,81 +1,59 @@
# big-AGI Documentation

Find all the information you need to get started, configure, and effectively use big-AGI.

👉 **[Changelog](https://big-agi.com/changes)** - See what's new

## Getting Started

Essential guides:

- **[FAQ](help-faq.md)**: Common questions and answers
- **[Enabling Microphone](help-feature-microphone.md)**: Configure speech recognition in your browser
- **[Data Ownership](help-data-ownership.md)**: How your data is stored and managed
- **[Live File](help-feature-livefile.md)**: Live file attachment feature

## AI Services

How to set up AI models and features in big-AGI.

> 👉 The following applies to users of big-AGI.com, as the public instance is empty and requires user configuration.

- **Cloud AI Services**:
  - Easy API key configuration:
    [Alibaba](https://bailian.console.alibabacloud.com/?apiKey=1#/api-key),
    [Anthropic](https://console.anthropic.com/settings/keys),
    [AWS Bedrock](https://console.aws.amazon.com/bedrock/),
    [Deepseek](https://platform.deepseek.com/api_keys),
    [Google Gemini](https://aistudio.google.com/app/apikey),
    [Groq](https://console.groq.com/keys),
    [Mistral](https://console.mistral.ai/api-keys/),
    [Moonshot](https://platform.moonshot.cn/console/api-keys),
    [OpenAI](https://platform.openai.com/api-keys),
    [OpenPipe](https://app.openpipe.ai/settings),
    [Perplexity](https://www.perplexity.ai/settings/api),
    [TogetherAI](https://api.together.xyz/settings/api-keys),
    [xAI](https://x.ai/api),
    [Z.ai](https://z.ai/)
  - **[Azure OpenAI](config-azure-openai.md)** guide
  - **[OpenRouter](config-openrouter.md)** guide
  - **OpenAI-compatible endpoints**: Any provider with an OpenAI-compatible API works out of the box - models, pricing, and capabilities are auto-detected

- **Local AI Integrations**:
  - [LocalAI](config-local-localai.md), [LM Studio](config-local-lmstudio.md), [Ollama](config-local-ollama.md)

- **Enhanced AI Features**:
  - **[Web Browsing](config-feature-browse.md)**: Enable web page download through third-party services or your own cloud
  - **Web Search**: Google Search API (see '[Environment Variables](environment-variables.md)')
  - **Image Generation**: GPT Image (gpt-image-1), Nano Banana, DALL·E 3 and 2
  - **Voice Synthesis**: ElevenLabs, Inworld, OpenAI TTS, LocalAI, or browser Web Speech API
  - **[Google Drive](config-feature-google-drive.md)**: Attach files from Google Drive

## Deployment & Customization

> 👉 The following applies to developers and experts who deploy their own big-AGI instance.

For deploying a custom big-AGI instance:

- **[Installation Guide](installation.md)**, including:
  - Set up your own big-AGI instance
  - Source build or pre-built options
  - Local, cloud, or on-premises deployment

- **Advanced Setup**:
  - **[Source Code Customization](customizations.md)**: Modify the source code
  - **[Access Control](deploy-authentication.md)**: Optional, add basic user authentication
  - **[Reverse Proxy](deploy-reverse-proxy.md)**: Optional, enables custom domains and SSL
  - **[Docker Deployment](deploy-docker.md)**: Deploy with Docker containers
  - **[Kubernetes](deploy-k8s.md)**: Deploy on Kubernetes clusters
  - **[Analytics](deploy-analytics.md)**: Set up usage analytics
  - **[Environment Variables](environment-variables.md)**: Pre-configures models and services

## Community & Support

Join our community or get support:

- Check the [changelog](https://big-agi.com/changes) for the latest updates
- Visit our [GitHub repository](https://github.com/enricoros/big-AGI) for source code and issue tracking
- Join our [Discord](https://discord.gg/MkH4qj2Jp9) for discussions and help

Let's build something great.

@@ -1,44 +1,18 @@
## Archived Versions - Changelog

This is a high-level changelog that calls out some of the major features, batched
by release.

- For the live changelog, see [big-agi.com/changes](https://big-agi.com/changes)
- For the live roadmap, please see [the GitHub project](https://github.com/users/enricoros/projects/4/views/2)

> NOTE: with the release of 2.0.0 we switched to [big-agi.com/changes](https://big-agi.com/changes) for the
> continuously updated changelog.

### What's New in 2 · Oct 31, 2025 · Open

- **Big-AGI Open** is ready and more productive and faster than ever, with:
  - **Beam 2**: multi-modal, program-based, follow-ups, save presets
  - Top-notch AI models support including **agentic models** and **reasoning models**
  - **Image Generation** and editing with Nano Banana and gpt-image-1
  - **Web Search** with citations for supported models
  - **UI** & Mobile UI overhaul with peeking and side panels
  - And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized

### What's New in 1.16.1...1.16.13 · (patch releases)

- 1.16.13: Docker fix (#840)
- 1.16.12: Dockerfile update (#840)
- 1.16.11: v1 final release, documentation updates
- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
- 1.16.6: Groq support for Llama 3.1 models
- 1.16.5: GPT-4o Mini support
- 1.16.4: 8192 tokens support for Claude 3.5 Sonnet
- 1.16.3: Anthropic Claude 3.5 Sonnet model support
- 1.16.2: Improve web downloads, as text, markdown, or HTML
- 1.16.2: Proper support for Gemini models
- 1.16.2: Added the latest Mistral model
- 1.16.2: Tokenizer support for gpt-4o
- 1.16.2: Updates to Beam
- 1.16.1: Support for the new OpenAI GPT-4o 2024-05-13 model

### What's New in 1.16.0 · May 9, 2024 · Crystal Clear

@@ -61,7 +35,7 @@ by release.

### What's New in 1.15.0 · April 1, 2024 · Beam

- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning): find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
- Message **Starring ⭐**: star important messages within chats, to attach them later. [#476](https://github.com/enricoros/big-AGI/issues/476)
- Enhanced the default Persona
- Fixes to Gemini models and SVGs, improvements to UI and icons

@@ -73,7 +47,7 @@ by release.

- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
- Performance optimizations: runs [much faster](https://x.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements
- [Release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.14.0), and changes [v1.13.1...v1.14.0](https://github.com/enricoros/big-AGI/compare/v1.13.1...v1.14.0) (233 commits, 8,000+ lines changed)

@@ -153,7 +127,7 @@ https://github.com/enricoros/big-AGI/assets/1590910/a6b8e172-0726-4b03-a5e5-10cf

- **Overheat LLMs**: Push the creativity with higher LLM temperatures. [#256](https://github.com/enricoros/big-agi/issues/256)
- **Model Options Shortcut**: Quick adjust with `Ctrl+Shift+O`
- Optimized Voice Input and Performance
- Latest Ollama and Oobabooga models
- For developers: **Password Protection**: HTTP Basic Auth. [Learn How](https://github.com/enricoros/big-agi/blob/main/docs/deploy-authentication.md)

### What's New in 1.6.0 - Nov 28, 2023 · Surf's Up

@@ -185,7 +159,7 @@ For Developers:

  first request to get the configuration. See
  https://github.com/enricoros/big-agi/blob/main/src/modules/backend/backend.router.ts.
- CloudFlare developers: please change the deployment command to
  `rm app/api/trpc-node/[trpc]/route.ts && npx @cloudflare/next-on-pages@1`,
  as we transitioned to the App router in NextJS 14. The documentation in
  [docs/deploy-cloudflare.md](../docs/deploy-cloudflare.md) is updated

@@ -202,6 +176,7 @@ For Developers:

- **Camera OCR** - real-world AI - take a picture of a text, and chat with it
- **Anthropic models** support, e.g. Claude
- **Backup/Restore** - save chats, and restore them later
- **[Local model support with Oobabooga server](../docs/config-local-oobabooga)** - run your own LLMs!
- **Flatten conversations** - conversations summarizer with 4 modes
- **Fork conversations** - create a new chat, to try with different endings
- New commands: /s to add a System message, and /a for an Assistant message

@@ -14,7 +14,7 @@ If you have an `API Endpoint` and `API Key`, you can configure big-AGI as follow

1. Launch the `big-AGI` application
2. Go to the **Models** settings
3. Add a Vendor and select **Azure OpenAI**
   - Enter the Endpoint (e.g., 'https://your-resource-name.openai.azure.com')
   - Enter the API Key (e.g., 'fd5...........................ba')

The deployed models are now available in the application. If you don't have a configured

@@ -23,36 +23,6 @@ Azure OpenAI service instance, continue with the next section.

In addition to using the UI, configuration can also be done using
[environment variables](environment-variables.md).

## Server Configuration

For server deployments, set these environment variables:

```bash
AZURE_OPENAI_API_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_API_KEY=your-api-key
```

This enables Azure OpenAI for all users without requiring individual API keys. For more details, see [environment-variables.md](environment-variables.md).

## Azure OpenAI API Versions

Azure OpenAI supports both the traditional deployment-based API and the next-generation v1 API:

### Next-Generation v1 API (Default)

- **Enabled by default** for GPT-5-like models (GPT-5, GPT-6, o3, o4, etc.)
- Uses the direct `/openai/v1/responses` endpoint without deployment IDs
- Optimized for advanced reasoning models and new features
- Can be disabled by setting `AZURE_OPENAI_DISABLE_V1=true`

### Traditional Deployment-Based API

- Uses `/openai/deployments/{deployment-name}/...` endpoints
- Required for older models and when the v1 API is disabled
- Needs a deployment ID for all API calls
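
For orientation, the two styles produce differently shaped request URLs. A minimal sketch, with placeholder resource, deployment, and API-version values:

```bash
# Illustrative URL shapes only - substitute your own resource/deployment names
# Next-generation v1 API (no deployment ID in the path):
#   https://your-resource-name.openai.azure.com/openai/v1/responses
# Traditional deployment-based API (deployment ID and api-version required):
#   https://your-resource-name.openai.azure.com/openai/deployments/your-deployment-name/chat/completions?api-version=<api-version>
AZURE_OPENAI_DISABLE_V1=true   # set this to force the traditional routing
```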

### Known Limitations

- **Web Search Tool**: Azure OpenAI does not support the `web_search_preview` tool that's available in OpenAI's API
- Models with web search capabilities will have this feature automatically disabled on Azure

## Setting Up Azure

### Step 1: Azure Account & Subscription

@@ -64,7 +34,18 @@ Azure OpenAI supports both traditional deployment-based API and the next-generat

- Fill in the required fields and click on **Create**
- Note down the **Subscription ID** (e.g., `12345678-1234-1234-1234-123456789012`)

### Step 2: Create Azure OpenAI Resource

For more information, see [Azure: Create and deploy OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)

@@ -74,32 +55,31 @@ For more information, see [Azure: Create and deploy OpenAI](https://learn.micros

- Select the subscription
- Select a resource group or create a new one
- Select the region. **Important**: The region determines which models are available.
  > Popular regions like **East US**, **West Europe**, and **Australia East** typically have the best model availability. For the latest model availability by region, see [Azure OpenAI Model Availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)
- Name the service (e.g., `your-openai-api-1234`)
- Select a pricing tier (e.g., `S0` for standard)
- Select: "All networks, including the internet, can access this resource."
- Click on **Review + create** and then **Create**

After creating the resource, you can access the API Keys and Endpoints:

1. Click on **Go to resource** (or navigate to your Azure OpenAI resource)
2. In the left sidebar, under **Resource Management**, click on **Keys and Endpoint**
3. Copy the required information:
   - **Endpoint**: e.g., 'https://your-resource-name.openai.azure.com/'
   - **Key**: Copy either KEY 1 or KEY 2 (both work identically)

### Step 3: Deploy Models

By default, Azure OpenAI resource instances don't have models available. You need to deploy the models you want to use.

1. In your Azure OpenAI resource, click on **Model deployments** in the left sidebar
2. Click on **Create new deployment**
3. Fill in the deployment details:
   - **Select a model**: Choose from available models
   - **Model version**: Select the latest version or a specific one
   - **Deployment name**: Give it a meaningful name
4. Click **Deploy**

Repeat as necessary for each model you want to deploy.

@@ -1,55 +0,0 @@
# Google Drive Integration

Attach files from Google Drive directly in the chat composer.

## Setup

### 1. Enable APIs

In [Google Cloud Console](https://console.cloud.google.com/):

1. Go to **APIs & Services > Library**
2. Enable **Google Drive API** and **Google Picker API**

### 2. Configure OAuth

1. Go to **APIs & Services > OAuth consent screen**
2. Create the consent screen (External or Internal)
3. Add the scope: `https://www.googleapis.com/auth/drive.file`
4. Add test users if in testing mode

### 3. Create Credentials

1. Go to **APIs & Services > Credentials**
2. Create an **OAuth client ID** (Web application)
3. Add JavaScript origins:
   - `http://localhost:3000` (dev)
   - `https://your-domain.com` (prod)

### 4. Set Environment Variable

```bash
NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID=your-client-id.apps.googleusercontent.com
```

## Usage

- Click the **Drive** button in the attachment menu

## Supported Files

| Type            | Export Format       |
|-----------------|---------------------|
| Regular files   | Downloaded directly |
| Google Docs     | Markdown (.md)      |
| Google Sheets   | CSV (.csv)          |
| Google Slides   | PDF (.pdf)          |
| Google Drawings | SVG (.svg)          |

## Troubleshooting

**Picker won't open**: Check that `NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID` is set and the APIs are enabled.

**OAuth errors**: Verify your domain is in the authorized JavaScript origins. Add yourself as a test user if the app is in testing mode.

**Download fails**: Check file permissions and that the Drive API is enabled.

@@ -41,8 +41,6 @@ In addition to using the UI, configuration can also be done using

### Integration: Models Gallery

> Note: The Gallery Admin feature described below may have been removed or renamed in recent versions of big-AGI.

If the running LocalAI instance is configured with a [Model Gallery](https://localai.io/models/):

- Go to Models > LocalAI

@@ -56,7 +54,7 @@ If the running LocalAI instance is configured with a [Model Gallery](https://loc

At the time of writing, LocalAI does not publish the model `context window size`.
Every model is assumed to be capable of chatting, with a context window of 4096 tokens.
Please update the [src/modules/llms/server/models.mappings.ts](../src/modules/llms/server/models.mappings.ts)
file with the mapping information between LocalAI model IDs and names/descriptions/tokens, etc.

# 🤝 Support

@@ -81,8 +81,7 @@ Then, edit the nginx configuration file `/etc/nginx/sites-enabled/default` and a

    proxy_buffering off;
    proxy_cache off;

    # Longer timeouts (1hr)
    proxy_read_timeout 3600;
    proxy_connect_timeout 3600;
    proxy_send_timeout 3600;
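
After editing, validate and reload nginx; a typical sequence on a systemd-managed server:

```bash
sudo nginx -t                  # check the edited configuration for syntax errors
sudo systemctl reload nginx    # apply the changes without dropping connections
```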

@@ -0,0 +1,61 @@
# Local LLM Integration with `text-web-ui` :llama:

Integrate local Large Language Models (LLMs) with
[oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui),
a specialized interface that includes a custom variant of the OpenAI API for a smooth integration process.

_Last updated on Dec 7, 2023_

### Components

The implementation of local LLMs involves the following components:

* **text-generation-webui**: A Python application with a Gradio web UI for operating Large Language Models.
* **Local Large Language Models "LLMs"**: Use large language models on your personal computer with consumer-grade GPUs or CPUs.
* **big-AGI**: An LLM UI that offers features such as Personas, OCR, Voice Support, Code Execution, AGI functions, and more.

## Instructions

This guide assumes that **big-AGI** is already installed on your system. Note that the text-generation-webui IP address must be accessible from the server running **big-AGI**.

### Text-web-ui Installation & Configuration:

1. Install [text-generation-webui](https://github.com/oobabooga/text-generation-webui#Installation):
   - Follow the instructions on the official page (basically clone the repo and run a script) [~10 minutes]
   - Stop the Web UI, as we need to modify the startup flags to enable the OpenAI API
2. Enable the **openai extension**
   - Edit `CMD_FLAGS.txt`
   - Make sure that `--listen --api` is present and uncommented (see the sketch after this list)
3. Restart text-generation-webui
   - Double-click on "start"
   - You should see something like:
     ```
     2023-12-07 21:51:21 INFO:Loading the extension "openai"...
     2023-12-07 21:51:21 INFO:OpenAI-compatible API URL:

     http://0.0.0.0:5000
     ...
     INFO: Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
     Running on local URL: http://0.0.0.0:7860
     ```
   - This shows that:
     - The Web UI is running on port 7860: http://127.0.0.1:7860
     - **The OpenAI API is running on port 5000: http://127.0.0.1:5000**
4. Load your first model
   - Open the text-generation-webui at [127.0.0.1:7860](http://127.0.0.1:7860/)
   - Switch to the **Model** tab
   - Download, for instance, `TheBloke/Llama-2-7B-Chat-GPTQ`
   - Select the model once it's loaded
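
A minimal `CMD_FLAGS.txt` for this setup, as referenced in step 2 above (only these two flags are required here; any other flags already in your file can stay):

```bash
# CMD_FLAGS.txt - startup flags for text-generation-webui
--listen --api
```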

### Integrating text-web-ui with big-AGI:

1. Go to Models > Add a model source of type: **Oobabooga**
   - Enter the address: `http://127.0.0.1:5000`
   - If running remotely, replace 127.0.0.1 with the IP of the machine. Make sure to use the **IP:Port** format
2. Load the models
   - The active model must be selected and LOADED in the text-generation-webui, as it doesn't support model switching or parallel requests.
3. Select the model & Chat

Enjoy the privacy and flexibility of local LLMs with `big-AGI` and `text-generation-webui`!

@@ -1,7 +1,8 @@
# OpenRouter Configuration

[OpenRouter](https://openrouter.ai) is a standalone, premium service
that provides access to a wide range of AI models from multiple providers through a single API.
This document details the process of integrating OpenRouter with big-AGI.

### 1. OpenRouter Account Setup and API Key Generation

@@ -19,7 +20,7 @@ This document details the process of integrating OpenRouter with big-AGI.

3. Input the API key into the **OpenRouter API Key** field, and load the Models.
4. Models from all supported providers will now be accessible and selectable in the application.

In addition to using the UI, configuration can also be done using
[environment variables](environment-variables.md).

@@ -29,5 +30,5 @@ In addition to using the UI, configuration can also be done using

OpenRouter independently manages its service and pricing and is not affiliated with big-AGI.
For more detailed information, please visit [this page](https://openrouter.ai/docs#models).

Please note that running large models can be costly and may rapidly consume credits.
Check model pricing on the OpenRouter website before use.

@@ -31,14 +31,17 @@ At time of writing, big-AGI has only 2 operations that run on Node.js Functions:

browsing (fetching web pages) and sharing. They both can exceed 10 seconds, especially
when fetching large pages or waiting for websites to be completed.

We provide `vercel_PRODUCTION.json` to raise the duration to 25 seconds (from a default of 10); to use it,
make sure to rename it to `vercel.json` before the build.

From the Vercel Project > Settings > General > Build & Development Settings,
you can for instance set the build command to:

```bash
mv vercel_PRODUCTION.json vercel.json; next build
```

### Change the Personas (v1.x only)

Edit the `src/data.ts` file to customize personas. This file houses the default personas. You can add, remove, or modify these to meet your project's needs.

@@ -49,44 +52,19 @@ Edit the `src/data.ts` file to customize personas. This file houses the default

Adapt the UI to match your project's aesthetic, incorporate new features, or exclude unnecessary ones.

- [ ] Adjust `src/common/app.theme.ts` for theme changes: colors, spacing, button appearance, animations, etc
- [ ] Modify `src/common/app.release.ts` to alter the application's name
- [ ] Update `src/common/app.nav.ts` to revise the navigation bar

### Add a Message of the Day

You can display a temporary announcement banner at the top of the app using the `NEXT_PUBLIC_MOTD` environment variable.

- Set this variable in your deployment environment
- The message supports template variables:
  - `{{app_build_hash}}`: Current git commit hash
  - `{{app_build_pkgver}}`: Package version
  - `{{app_build_time}}`: Build timestamp as date
  - `{{app_deployment_type}}`: Deployment type (local, docker, vercel, etc.)
- Users can dismiss the message (until the next page refresh)
- Use it for version announcements, maintenance notices, or feature highlights

Example: `NEXT_PUBLIC_MOTD=🚀 New features available in {{app_build_pkgver}}! Try the improved Beam.`

## Testing & Deployment

Test your application thoroughly using local development (refer to README.md for local build instructions). Deploy using your preferred hosting service. big-AGI supports deployment on platforms like Vercel, Docker, or any Node.js-compatible service, especially those supporting NextJS's "Edge Runtime."

- [deploy-cloudflare.md](deploy-cloudflare.md): for Cloudflare Pages deployment (limited support)
- [deploy-docker.md](deploy-docker.md): for Docker deployment instructions and examples
- [deploy-k8s.md](deploy-k8s.md): for Kubernetes deployment instructions and examples

## Debugging

The application includes a client-side logging system. You can view recent logs via the UI (Settings > Tools > Logs).

For deeper debugging during development:

1. **Debug Page**: Access the `/info/debug` page for an overview of the application's environment, configuration, API status, and environment variables available to the client.
2. **Conditional Breakpoints**: To automatically pause execution in your browser's developer tools when critical errors (`error`, `critical`, `DEV` levels) are logged to the console, set the following environment variable in your local `.env.local` file and restart your development server:
   ```bash
   NEXT_PUBLIC_DEBUG_BREAKS=true
   ```
   This allows you to inspect the application state at the exact moment an important error occurs. This feature only works in development mode (`npm run dev`) and requires the environment variable to be explicitly set to `true`.

<br/>

@@ -2,9 +2,8 @@

The open-source big-AGI project provides support for the following analytics services:

- **Vercel Analytics**: automatic when deployed to Vercel
- **Google Analytics 4**: manual setup required
- **PostHog Analytics**: manual setup required

The following is a quick overview of the analytics options for the deployers of this open-source project.
big-AGI is deployed at scale and in enterprises through various channels (custom builds, Docker, Vercel, Cloudflare, etc.),

@@ -12,6 +11,32 @@ and this guide is for its customization.

## Service Configuration

### Vercel Analytics

- Why: understand coarse traction, and identify deployment issues - all without tracking individual users
- What: top pages, top referrers, country of origin, operating system, browser, and page speed metrics

Vercel Analytics and Speed Insights are local API endpoints deployed to your domain, so everything stays within your
domain. Furthermore, the Vercel Analytics service is privacy-friendly, and does not track individual users.

This service is available to system administrators when deploying to Vercel, and is automatically enabled there.
The code that activates Vercel Analytics is located in the `src/pages/_app.tsx` file:

```tsx
const MyApp = ({ Component, emotionCache, pageProps }: MyAppProps) => <>
  ...
  {isVercelFromFrontend && <VercelAnalytics debug={false} />}
  {isVercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
  ...
</>;
```

When big-AGI is served on Vercel hosts, the `process.env.NEXT_PUBLIC_VERCEL_URL` environment variable is truthy, and
analytics will be sent by default to the Vercel Analytics service, which is deployed by Vercel if configured from the
Vercel project dashboard.

In summary, to turn it on: activate the `Analytics` service in the Vercel project dashboard.

### Google Analytics 4

- Why: user engagement and retention, performance insights, personalization, content optimization

@@ -26,55 +51,13 @@ server/container will be able to report analytics to your Google Analytics 4 pro

As of Feb 27, 2024, this feature is in development.
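
When available, enabling it should follow the same build-time pattern as the other `NEXT_PUBLIC_` variables; a sketch with a hypothetical variable name and placeholder measurement ID (check [Environment Variables](environment-variables.md) for the actual name):

```bash
# Hypothetical variable name and placeholder ID - verify against environment-variables.md
NEXT_PUBLIC_GA4_MEASUREMENT_ID=G-XXXXXXXXXX npm run build
```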

### PostHog Analytics

- Why: feature usage tracking, user journeys, conversion optimization, product analytics
- What: page views, page leave events, user interactions, and deployment context

PostHog provides comprehensive product analytics with privacy controls. It helps understand how users interact with big-AGI's features, identify opportunities for improvement, and optimize the user experience.

To enable PostHog, set the `NEXT_PUBLIC_POSTHOG_KEY` environment variable at build time. PostHog is configured with tracking optimization and privacy in mind:

- Uses a proxy endpoint (`/a/ph`) to avoid ad blockers
- Respects user opt-out preferences via local storage
- Tracks only essential information without PII
- Adds deployment context for better segmentation

The implementation follows PostHog's best practices for Next.js applications and includes manual page view tracking for proper single-page application support.
|
||||
|
||||
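For Docker builds, a hedged sketch (this assumes the project's `Dockerfile` declares and forwards a matching build argument, which you should verify; the key is a placeholder):

```bash
# pass the PostHog key at image build time, then run the container
docker build --build-arg NEXT_PUBLIC_POSTHOG_KEY=phc_placeholder -t big-agi:custom .
docker run -d -p 3000:3000 big-agi:custom
```
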
### Vercel Analytics

- Why: understand coarse traction, and identify deployment issues - all without tracking individual users
- What: top pages, top referrers, country of origin, operating system, browser, and page speed metrics

Vercel Analytics and Speed Insights are local API endpoints deployed to your domain, so everything stays within your
domain. Furthermore, the Vercel Analytics service is privacy-friendly, and does not track individual users.

This service is available to system administrators when deploying to Vercel, and it is automatically enabled there.
The code that activates Vercel Analytics is located in the `pages/_app.tsx` file:

```tsx
const MyApp = ({ Component, emotionCache, pageProps }: MyAppProps) => <>
  ...
  {Is.Deployment.VercelFromFrontend && <VercelAnalytics debug={false} />}
  {Is.Deployment.VercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
  ...
</>;
```

When big-AGI is served on Vercel hosts, the `process.env.NEXT_PUBLIC_VERCEL_URL` environment variable is truthy, and
analytics will be sent by default to the Vercel Analytics service, which is deployed by Vercel if configured from the
Vercel project dashboard.

In summary, to turn it on, activate the `Analytics` service in the Vercel project dashboard.

## Configurations

| Scope | Default | Description / Instructions |
|-------|---------|-----------------------------|
| Your **Source** builds of big-AGI | None | **Google Analytics**: set environment variable at build time · **PostHog**: set environment variable at build time · **Vercel**: enable Vercel Analytics from the dashboard |
| Your **Docker** builds of big-AGI | None | (**Vercel**: n/a) · **Google Analytics**: set environment variable at `docker build` time · **PostHog**: set environment variable at `docker build` time. |
| [get.big-agi.com](https://get.big-agi.com) (**Big-AGI 1.x Legacy**) | Vercel + Google + PostHog | The main website ([privacy policy](https://big-agi.com/privacy)) hosted for free for anyone. |
| [prebuilt Docker packages](https://github.com/enricoros/big-AGI/pkgs/container/big-agi) (**Big-AGI 1.x**, 'latest' tag) | Google Analytics | **Vercel**: n/a · **Google Analytics**: set to the big-agi.com Google Analytics for analytics and improvements · **PostHog**: n/a |

| Scope | Default | Description / Instructions |
|-------|---------|-----------------------------|
| Your source builds of big-AGI | None | **Vercel**: enable Vercel Analytics from the dashboard. · **Google Analytics**: set environment variable at build time. |
| Your docker builds of big-AGI | None | **Vercel**: n/a. · **Google Analytics**: set environment variable at `docker build` time. |
| [big-agi.com](https://big-agi.com) | Vercel + Google | The main website ([privacy policy](https://big-agi.com/privacy)) hosted for free for anyone. |
| [official Docker packages](https://github.com/enricoros/big-agi/pkgs/container/big-agi) | Google Analytics | **Vercel**: n/a · **Google Analytics**: set to the big-agi.com Google Analytics for analytics and improvements. |

Note: this information is updated as of March 3, 2025 and can change at any time.
Note: this information is updated as of Feb 27, 2024 and can change at any time.

@@ -19,7 +19,7 @@ To enable it in `big-AGI`, you **must manually build the application**:
- Build `big-AGI` with HTTP authentication enabled:
  - Clone the repository
  - Rename `middleware_BASIC_AUTH.ts` to `middleware.ts`
  - Build: usual simple build procedure (e.g. [Deploy manually](installation.md#Local-Production-build) or [Deploying with Docker](deploy-docker.md))
  - Build: usual simple build procedure (e.g. [Deploy manually](../README.md#-deploy-manually) or [Deploying with Docker](deploy-docker.md))

- Configure the following [environment variables](environment-variables.md) before launching `big-AGI`:
  ```dotenv
  # both values must be set; these are the two variables documented in the Backend section below
  HTTP_BASIC_AUTH_USERNAME=...
  HTTP_BASIC_AUTH_PASSWORD=...
  ```

+10 -12
@@ -1,20 +1,18 @@
---
unlisted: true
---

# Deploying a Next.js App on Cloudflare Pages

> WARNING: Cloudflare Pages only supports Edge Runtime functions, not the full Node.js runtime.
> WARNING: Cloudflare Pages does not support traditional NodeJS runtimes, but only Edge Runtime functions.
>
> The cloud router in this project requires a Node.js runtime for Supabase SDK, authentication,
> sync, and other server-side features that cannot run on Cloudflare's edge runtime.
> In this project we use Prisma connected to serverless Postgres, which at the moment cannot run on
> edge functions, so we cannot deploy this project on Cloudflare Pages.
>
> Workaround: Step 3.4. has been added below, to DELETE the Node.js cloud router - which means that some
> Workaround: Step 3.4. has been added below, to DELETE the NodeJS traditional runtime - which means that some
> parts of this application will not work.
> - [Side effects](https://github.com/enricoros/big-agi/blob/main/src/modules/trade/server/trade.router.ts):
>   Sharing functionality, import from ChatGPT share, and post to Paste.GG will not work
> - Cloud features (sync, auth, payments) will not be available
> - [Side effects](https://github.com/enricoros/big-agi/blob/main/src/apps/chat/trade/server/trade.router.ts#L19):
>   Sharing functionality to DB, and import from ChatGPT share, and post to Paste.GG will not work
> - See [Issue 174](https://github.com/enricoros/big-agi/issues/174).
>
> Longer term: follow [prisma/prisma: Support Edge Function deployments](https://github.com/prisma/prisma/issues/21394)
> and convert the Node runtime to Edge runtime once Prisma supports it.

This guide provides steps to deploy your Next.js app on Cloudflare Pages.
It is based on the [official Cloudflare developer documentation](https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/),
@@ -36,7 +34,7 @@ Fork the repository to your personal GitHub account.
2. On this page, set your **Project name**, **Production branch** (e.g., main), and your Build settings
3. Choose `Next.js` from the **Framework preset** dropdown menu
4. Set a custom **Build Command**:
   - `rm app/api/cloud/[trpc]/route.ts && npx @cloudflare/next-on-pages@1`
   - `rm app/api/trpc-node/[trpc]/route.ts && npx @cloudflare/next-on-pages@1`
   - see the tradeoffs for this deletion on the notice at the top
5. Keep the **Build output directory** as default
6. Click the **Save and Deploy** button

@@ -31,12 +31,6 @@ file.

### Official Images: [ghcr.io/enricoros/big-agi](https://github.com/enricoros/big-agi/pkgs/container/big-agi)

#### Available Tags

- **`:latest`** / **`:stable`** - Latest stable release (recommended)
- **`:development`** - Main branch (bleeding edge)
- **`:v2.0.0`** - Specific versions

#### Run using *docker* 🚀

```bash
# quick start: run the latest stable image on port 3000
docker run -d --name big-agi -p 3000:3000 ghcr.io/enricoros/big-agi:latest
```
@@ -65,17 +59,6 @@ To make local services running on your host machine accessible to a Docker conta

<br/>

### Reverse Proxy Configuration

A reverse proxy is a server that sits in front of big-AGI's container and forwards web
requests to it. It is often used to run multiple web applications, expose them to the internet,
and increase security.

If you're deploying big-AGI behind a reverse proxy, you may want to see
our [Reverse Proxy Deployment Guide](deploy-reverse-proxy.md) for more information.

<br/>

### More Information

The [`Dockerfile`](../Dockerfile) describes how to create a Docker image. It establishes a Node.js environment,

@@ -1,85 +0,0 @@
# Deploy `big-AGI` with Kubernetes ☸️

In this tutorial, we will guide you through the process of deploying big-AGI
in a Kubernetes environment using the kubectl command-line tool.

## First Deployment

### Step 1: Clone the big-AGI repository

```bash
$ git clone https://github.com/enricoros/big-agi
$ cd ./big-agi/docs/k8s
```

### Step 2: Create the namespace

```bash
$ kubectl create namespace ns-big-agi
```

### Step 3: Fill in the key information into env-secret.yaml

All variables are optional. Note that Kubernetes Secrets only Base64-encode
values (encoding, not encryption), so please don't commit the file after
filling in the keys, to avoid leaking sensitive information.

We provide an empty `env-secret.yaml` file as a template.
You can fill in the necessary information using a text editor.

```bash
$ nano env-secret.yaml
```

### Step 4: Deploying Kubernetes Resources

```bash
$ kubectl apply -f big-agi-deployment.yaml -f env-secret.yaml
```

### Step 5: Verifying the Resource Statuses

```bash
$ kubectl -n ns-big-agi get svc,pod,deployment
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/svc-big-agi   ClusterIP   10.0.198.118   <none>        3000/TCP   63m

NAME                                    READY   STATUS    RESTARTS   AGE
pod/deployment-big-agi-xxxxxxxx-yyyyy   1/1     Running   0          39m

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deployment-big-agi   1/1     1            1           63m
```

### Step 6: Testing the Service

You can test the service by port-forwarding the service to your local machine:

```bash
$ kubectl -n ns-big-agi port-forward service/svc-big-agi 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
```

Now you can access the service at `http://localhost:3000`, and you should see the big-AGI homepage.

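A quick check that the app responds through the forwarded port (run in a second terminal while the port-forward is active):

```bash
# expect an HTTP 200 response from the big-AGI homepage
curl -I http://localhost:3000
```
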
## Updating big-AGI

To update big-AGI to the latest version:

1. Pull the latest changes from the repository:
   ```bash
   $ git pull origin main
   ```

2. Apply the updated deployment:
   ```bash
   $ kubectl apply -f big-agi-deployment.yaml
   ```

This will trigger a rolling update of the deployment with the latest image.
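
Note that if the manifest itself is unchanged (for example, when tracking the `:latest` tag), re-applying it can be a no-op; forcing a rollout makes the pods pick up the image again:

```bash
# restart the deployment and wait for the new pods to become ready
kubectl -n ns-big-agi rollout restart deployment/deployment-big-agi
kubectl -n ns-big-agi rollout status deployment/deployment-big-agi
```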

**Note**: If you're deploying big-AGI behind a reverse proxy, you may need to configure
your proxy to support streaming. See our [Reverse Proxy Deployment Guide](deploy-reverse-proxy.md) for more information.

Note: For production use, consider setting up an Ingress Controller or Load Balancer instead of using port-forward.

@@ -1,58 +0,0 @@
# Advanced: Deploying big-AGI behind a Reverse Proxy

Note: if you don't have a reverse proxy set up, you can skip this guide.

If you're deploying big-AGI behind a reverse proxy, you may want to configure your proxy to support streaming output.
This guide provides instructions on how to configure your reverse proxy to support streaming output from big-AGI.

This is for advanced deployments, and you should have a basic understanding of how reverse proxies work.

## Nginx Configuration

If you're using Nginx as your reverse proxy, add the following configuration to your server block:

```nginx
server {
    listen 80;
    server_name your-domain.com;

    location / {
        # ...your specific proxy_pass configuration, example below...
        proxy_pass http://localhost:3000; # Assuming big-AGI is running on port 3000
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        # ...

        # Important: Disable buffering for the streaming responses (SSE)
        chunked_transfer_encoding on; # Turn on chunked transfer encoding
        proxy_buffering off;          # Turn off proxy buffering
        proxy_cache off;              # Turn off caching
        tcp_nodelay on;               # Send small packets immediately (disables Nagle's algorithm)
        tcp_nopush on;                # Optimize the amount of data sent per packet

        # Important: Longer timeouts (5 min)
        keepalive_timeout 300;
        proxy_connect_timeout 300;
        proxy_read_timeout 300;
        proxy_send_timeout 300;
    }
}
```

This configuration disables caching and buffering, enables chunked transfer encoding, and adjusts TCP settings to optimize for streaming content.
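
To verify that responses actually stream through the proxy without being buffered, a quick smoke test from a shell can help (the URL is a placeholder for your deployment):

```bash
# -N disables curl's client-side buffering, so chunks print as they arrive
curl -N -H 'Accept: text/event-stream' https://your-domain.com/
```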

## Troubleshooting

If you're experiencing issues with streaming not working, especially when deploying behind a reverse proxy,
ensure that your proxy is configured to support streaming output as described above.

## Additional Resources

- For Docker deployments, see our [Docker Deployment Guide](deploy-docker.md)
- For Kubernetes deployments, see our [Kubernetes Deployment Guide](deploy-k8s.md)
- For general installation instructions, see our [Installation Guide](installation.md)

If you continue to experience issues, please reach out to our [community support channels](../README.md#-get-involved).

@@ -19,6 +19,7 @@ services:
      - .env
    environment:
      - PUPPETEER_WSS_ENDPOINT=ws://browserless:3000
    command: [ "next", "start", "-p", "3000" ]
    depends_on:
      - browserless

@@ -0,0 +1,14 @@
# Why big-AGI?
Placeholder for a document that demonstrates the productivity and unique features of Big-AGI.

## Exclusive features
- [x] Call AGI
- [x] Continuous Voice mode
- [x] Diagram generation
- [ ] ...

## Productivity Features
- [x] Multi-window to never wait
- [x] Multi-Chat to explore different solutions
- [x] Rendering of graphs, charts, mindmaps
- [ ] ...
@@ -3,7 +3,7 @@
This document provides an explanation of the environment variables used in the big-AGI application.

**All variables are optional**; _UI options_ take precedence over _backend environment variables_,
which take precedence over _defaults_. This file is kept in sync with [`../src/server/env.server.ts`](../src/server/env.server.ts).
which take precedence over _defaults_. This file is kept in sync with [`../src/server/env.mjs`](../src/server/env.mjs).

### Setting Environment Variables

@@ -23,57 +23,45 @@ MDB_URI=
```dotenv
OPENAI_API_KEY=
OPENAI_API_HOST=
OPENAI_API_ORG_ID=
ALIBABA_API_HOST=
ALIBABA_API_KEY=
AZURE_OPENAI_API_ENDPOINT=
AZURE_OPENAI_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_API_HOST=
BEDROCK_BEARER_TOKEN=
BEDROCK_ACCESS_KEY_ID=
BEDROCK_SECRET_ACCESS_KEY=
BEDROCK_SESSION_TOKEN=
BEDROCK_REGION=
DEEPSEEK_API_KEY=
GEMINI_API_KEY=
GROQ_API_KEY=
LOCALAI_API_HOST=
LOCALAI_API_KEY=
MISTRAL_API_KEY=
MOONSHOT_API_KEY=
OLLAMA_API_HOST=
OPENPIPE_API_KEY=
OPENROUTER_API_KEY=
PERPLEXITY_API_KEY=
TOGETHERAI_API_KEY=
XAI_API_KEY=

# Model Observability: Helicone
HELICONE_API_KEY=

# Browse
PUPPETEER_WSS_ENDPOINT=

# Search
GOOGLE_CLOUD_API_KEY=
GOOGLE_CSE_ID=

# Text-To-Speech: ElevenLabs
# Text-To-Speech
ELEVENLABS_API_KEY=
ELEVENLABS_API_HOST=
ELEVENLABS_VOICE_ID=
# Text-To-Image
PRODIA_API_KEY=
# Google Custom Search
GOOGLE_CLOUD_API_KEY=
GOOGLE_CSE_ID=
# Browse
PUPPETEER_WSS_ENDPOINT=

# Backend Analytics
BACKEND_ANALYTICS=

# Backend HTTP Basic Authentication (see `deploy-authentication.md` for turning on authentication)
HTTP_BASIC_AUTH_USERNAME=
HTTP_BASIC_AUTH_PASSWORD=


# Frontend variables
NEXT_PUBLIC_MOTD=
# Frontend variables
NEXT_PUBLIC_GA4_MEASUREMENT_ID=
NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID=
NEXT_PUBLIC_PLANTUML_SERVER_URL=
NEXT_PUBLIC_POSTHOG_KEY=
```

## Backend Variables

@@ -92,38 +80,24 @@ For Database configuration see [deploy-database.md](deploy-database.md).

The following variables, when set, will enable the corresponding LLMs on the server-side, without
requiring the user to enter an API key.

| Variable | Description | Required |
|----------|-------------|----------|
| `OPENAI_API_KEY` | API key for OpenAI | Recommended |
| `OPENAI_API_HOST` | Changes the backend host for the OpenAI vendor, to enable platforms such as Helicone and CloudFlare AI Gateway | Optional |
| `OPENAI_API_ORG_ID` | Sets the "OpenAI-Organization" header field to support organization users | Optional |
| `ALIBABA_API_HOST` | The Alibaba AI OpenAI-compatible endpoint | Optional |
| `ALIBABA_API_KEY` | The API key for Alibaba AI | Optional |
| `AZURE_OPENAI_API_ENDPOINT` | Azure OpenAI endpoint - host only, without the path | Optional, but if set `AZURE_OPENAI_API_KEY` must also be set |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `AZURE_OPENAI_DISABLE_V1` | Disables the next-generation v1 API for GPT-5-like models (set to 'true' to disable) | Optional, defaults to enabled |
| `AZURE_OPENAI_API_VERSION` | API version for traditional deployment-based endpoints | Optional, defaults to '2025-04-01-preview' |
| `AZURE_DEPLOYMENTS_API_VERSION` | API version for the deployments listing endpoint | Optional, defaults to '2023-03-15-preview' |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, for proxies or custom endpoints | Optional |
| `BEDROCK_BEARER_TOKEN` | Bedrock long-term API key (`ABSK...`). Takes priority over IAM credentials. Short-term keys only work for runtime, not model listing | Optional |
| `BEDROCK_ACCESS_KEY_ID` | AWS IAM Access Key ID for Bedrock (Claude models via AWS) | Optional, but if set `BEDROCK_SECRET_ACCESS_KEY` must also be set |
| `BEDROCK_SECRET_ACCESS_KEY` | AWS IAM Secret Access Key for Bedrock | Optional, but if set `BEDROCK_ACCESS_KEY_ID` must also be set |
| `BEDROCK_SESSION_TOKEN` | AWS Session Token for temporary/STS credentials | Optional |
| `BEDROCK_REGION` | AWS region for Bedrock (e.g., `us-east-1`, `us-west-2`, `eu-west-1`) | Optional, defaults to `us-east-1` |
| `DEEPSEEK_API_KEY` | The API key for Deepseek AI | Optional |
| `GEMINI_API_KEY` | The API key for Google AI's Gemini | Optional |
| `GROQ_API_KEY` | The API key for Groq Cloud | Optional |
| `LOCALAI_API_HOST` | Sets the URL of the LocalAI server, or defaults to http://127.0.0.1:8080 | Optional |
| `LOCALAI_API_KEY` | The (Optional) API key for LocalAI | Optional |
| `MISTRAL_API_KEY` | The API key for Mistral | Optional |
| `MOONSHOT_API_KEY` | The API key for Moonshot AI | Optional |
| `OLLAMA_API_HOST` | Changes the backend host for the Ollama vendor. See [config-local-ollama.md](config-local-ollama.md) | |
| `OPENPIPE_API_KEY` | The API key for OpenPipe | Optional |
| `OPENROUTER_API_KEY` | The API key for OpenRouter | Optional |
| `PERPLEXITY_API_KEY` | The API key for Perplexity | Optional |
| `TOGETHERAI_API_KEY` | The API key for Together AI | Optional |
| `XAI_API_KEY` | The API key for xAI | Optional |

| Variable | Description | Required |
|----------|-------------|----------|
| `OPENAI_API_KEY` | API key for OpenAI | Recommended |
| `OPENAI_API_HOST` | Changes the backend host for the OpenAI vendor, to enable platforms such as Helicone and CloudFlare AI Gateway | Optional |
| `OPENAI_API_ORG_ID` | Sets the "OpenAI-Organization" header field to support organization users | Optional |
| `AZURE_OPENAI_API_ENDPOINT` | Azure OpenAI endpoint - host only, without the path | Optional, but if set `AZURE_OPENAI_API_KEY` must also be set |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, to enable platforms such as [config-aws-bedrock.md](config-aws-bedrock.md) | Optional |
| `GEMINI_API_KEY` | The API key for Google AI's Gemini | Optional |
| `GROQ_API_KEY` | The API key for Groq Cloud | Optional |
| `LOCALAI_API_HOST` | Sets the URL of the LocalAI server, or defaults to http://127.0.0.1:8080 | Optional |
| `LOCALAI_API_KEY` | The (Optional) API key for LocalAI | Optional |
| `MISTRAL_API_KEY` | The API key for Mistral | Optional |
| `OLLAMA_API_HOST` | Changes the backend host for the Ollama vendor. See [config-local-ollama.md](config-local-ollama.md) | |
| `OPENROUTER_API_KEY` | The API key for OpenRouter | Optional |
| `PERPLEXITY_API_KEY` | The API key for Perplexity | Optional |
| `TOGETHERAI_API_KEY` | The API key for Together AI | Optional |

### LLM Observability: Helicone

@@ -143,17 +117,19 @@ Enable the app to Talk, Draw, and Google things up.

| Variable | Description |
|----------|-------------|
| **Text-To-Speech** | ElevenLabs, Inworld, OpenAI TTS, LocalAI, and browser Web Speech API are supported |
| **Text-To-Speech** | [ElevenLabs](https://elevenlabs.io/) is a high quality speech synthesis service |
| `ELEVENLABS_API_KEY` | ElevenLabs API Key - used for calls, etc. |
| `ELEVENLABS_API_HOST` | Custom host for ElevenLabs |
| `ELEVENLABS_VOICE_ID` | Default voice ID for ElevenLabs |
| | *Note: OpenAI TTS and LocalAI TTS reuse credentials from your configured LLM services (no separate env vars needed)* |
| **Text-To-Image** | [Prodia](https://prodia.com/) is a reliable image generation service |
| `PRODIA_API_KEY` | Prodia API Key - used with '/imagine ...' |
| **Google Custom Search** | [Google Programmable Search Engine](https://programmablesearchengine.google.com/about/) produces links to pages |
| `GOOGLE_CLOUD_API_KEY` | Google Cloud API Key, used with the '/react' command - [Link to GCP](https://console.cloud.google.com/apis/credentials) |
| `GOOGLE_CSE_ID` | Google Custom/Programmable Search Engine ID - [Link to PSE](https://programmablesearchengine.google.com/) |
| **Browse** | |
| `PUPPETEER_WSS_ENDPOINT` | Puppeteer WebSocket endpoint - used for browsing (page downloading), etc. |
| **Backend** | |
| `BACKEND_ANALYTICS` | Semicolon-separated list of analytics flags (see backend.analytics.ts). Flags: `domain` logs the responding domain. |
| `HTTP_BASIC_AUTH_USERNAME` | See the [Authentication](deploy-authentication.md) guide. Username for HTTP Basic Authentication. |
| `HTTP_BASIC_AUTH_PASSWORD` | Password for HTTP Basic Authentication. |

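For example, enabling the only flag documented above (a minimal sketch):

```bash
# semicolon-separated flags; 'domain' logs the responding domain
BACKEND_ANALYTICS="domain"
```
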
@@ -161,14 +137,10 @@ Enable the app to Talk, Draw, and Google things up.

The values of these variables are passed to the frontend (Web UI) - make sure they do not contain secrets.

| Variable | Description |
|----------|-------------|
| `NEXT_PUBLIC_DEBUG_BREAKS` | (optional, development) When set to 'true', enables automatic debugger breaks on DEV/error/critical logs in development builds |
| `NEXT_PUBLIC_MOTD` | Message of the Day - displays a dismissible banner at the top of the app (see [customizations](customizations.md) for the template variables). Example: 🔔 Welcome to our deployment! Version {{app_build_pkgver}} built on {{app_build_time}}. |
| `NEXT_PUBLIC_GA4_MEASUREMENT_ID` | (optional) The measurement ID for Google Analytics 4. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_GOOGLE_DRIVE_CLIENT_ID` | (optional) Google OAuth Client ID for Drive Picker. Can reuse `AUTH_GOOGLE_ID`. See [Google Drive](config-feature-google-drive.md) |
| `NEXT_PUBLIC_PLANTUML_SERVER_URL` | The URL of the PlantUML server, used for rendering UML diagrams. Allows using custom local servers. |
| `NEXT_PUBLIC_POSTHOG_KEY` | (optional) Key for PostHog analytics. (see [deploy-analytics](deploy-analytics.md)) |

| Variable | Description |
|----------|-------------|
| `NEXT_PUBLIC_GA4_MEASUREMENT_ID` | The measurement ID for Google Analytics 4. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_PLANTUML_SERVER_URL` | The URL of the PlantUML server, used for rendering UML diagrams. (code in RenderCode.tsx) |

> Important: these variables must be set at build time, which is required by Next.js to pass them to the frontend.
> This is in contrast to the backend variables, which can be set when starting the local server/container.

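As a sketch of the difference (the image tag and values below are illustrative placeholders):

```bash
# backend variables can be provided when the server/container starts
docker run -d -p 3000:3000 -e OPENAI_API_KEY=sk-placeholder ghcr.io/enricoros/big-agi:latest

# frontend NEXT_PUBLIC_* variables must instead be present when `next build` runs
NEXT_PUBLIC_PLANTUML_SERVER_URL=https://plantuml.example.com npm run build
```
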
@@ -1,46 +0,0 @@
---
unlisted: true
---

# Big-AGI Advanced Tips & Tricks

> 🚨 This file is not meant for publication, and it's just been created as a handbook with tips
> and tricks to make Big-AGI more efficient and productive. 🚨

Welcome to the advanced tips and tricks guide for Big-AGI. This document will help you make the most of the platform's existing features.

---

## Hidden Gems

- **Shift + Double-Click** on a chat message to **edit** it.
- **Shift + Trash Icon** to **delete** chats and messages without confirmation.
  - also applies elsewhere: delete Attachments, etc.
- **Shift + Click** on **New Chat** to create an incognito chat.
- Drag a big-AGI saved chat into Big-AGI to load (or attach) it.

## Not-so-obvious Shortcuts

- When sending a message:
  - Enter inserts a newline
  - **Shift + Enter** to send the message.
  - **Ctrl + Enter** to **Beam** the message.
  - **Alt/Option + Enter** to send the message without triggering an answer.
- When editing a message:
  - **Ctrl + Enter** to **Save** the changes.
  - **Shift + Ctrl + Enter** to **Save & Regenerate**.
- Scroll between messages:
  - **Ctrl + Up/Down** to scroll between **messages** and/or **Beams**.

## Worth the Effort:

- [LiveFile](help-feature-livefile.md) works on **Chrome**: Pair and synchronize your documents and code blocks with files on your local system: refresh, save, update them.

## Best User Hacks:

-

---

Note: this document is just at the beginning. It's here so we can capture
the best tips over time.

@@ -1,105 +0,0 @@
# Big-AGI Data Ownership Guide

Big-AGI is a **client-first** web application, which means it prioritizes speed and data ownership compared to cloud apps.
Your *API keys*, *chat history*, and *settings* live in your
browser's [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage), not
on cloud servers.

You can use Big-AGI in two ways:

1. Run it yourself (open-source)
2. Use big-agi.com (hosted service)

This guide explains how the open-source version handles your data. You can verify everything in [the source code](https://github.com/enricoros/big-agi).

## Client-Side Storage

Within Big-AGI, almost all chat/keys data is handled client-side in your browser using two
standard browser storage mechanisms:

- **Local Storage**: API keys, settings, and configurations ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage))
- **IndexedDB**: Chat history and larger files ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API))

The Big-AGI backend mainly passes requests to AI services (OpenAI, Anthropic, etc.). It doesn't store your data, except for the chat-sharing function if used.

You can see your data in your browser's local storage and IndexedDB - try it yourself:

1. In Chrome: Open DevTools (press F12 on Windows, ⌘ + ⌥ + I on Mac)
2. Click 'Application' > 'Local Storage'
3. See your settings and API keys

![big-AGI local storage](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/docs/pixels/big-AGI-data-local-860.png)

### Sync for Authenticated Users

Users with accounts on big-agi.com who opt into Sync (a Pro feature) have their entity data - such as conversations and personas - replicated to the server for multi-device access.
Server-side data is isolated per-user using Row Level Security (RLS), ensuring that no other user can access your synced data.
Sync is entirely optional; without it, all data remains local to your browser.

### What This Means For You

Storing data in your browser means:

- Your data stays on **one device/browser only**
- Clearing browser data **erases your chats** - make backups
- Anyone using your browser can see your chats and keys
- Running your own server needs technical skills

### Local Device Identifier

Big-AGI generates a _device identifier_ that combines timestamp and random components, stored only on your device. This identifier:

- Is used only for the **optional sync functionality** between your devices
- Helps maintain data consistency when using Big-AGI across multiple devices
- Remains completely local unless you explicitly enable sync
- Is not used for tracking, analytics, or telemetry
- Can be deleted anytime by clearing local storage
- Is fully transparent - see the implementation in `src/common/stores/store-client.ts`

## How Data Flows

AI interactions in Big-AGI, such as chats, AI titles, text-to-speech, and browsing, flow through three components:

1. **Browser** (client/installed App) - Stores your keys & data locally
2. **Backend** (routing server) - Passes requests to AI services
3. **AI Services** - Where the actual AI processing happens

### Self-Deployed Version: Your Infrastructure

You run the server. Your data only leaves when making AI requests.
The keys and chats are under your control and pass through your code, and are sent to
the upstream AI services on a per-request basis.

![big-AGI self-hosted data flow](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/docs/pixels/big-AGI-data-flow-selfhosted-860.png)

### Web Version: Using big-agi.com

Your keys and chats pass through the hosted Big-AGI edge network on a per-request basis,
and are then sent to the upstream AI services.

![big-AGI web data flow](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/docs/pixels/big-AGI-data-flow-web-860.png)

## Security Best Practices

**Basic Security**:

- **Never share API keys**
- **Don't use shared computers**
- Use private browsing for one-off sessions
- Use trusted networks
- Back up your data

**When Running Your Own Server**:

- Use [environment variables](environment-variables.md) for API keys
- Run on trusted infrastructure
- Keep your installation updated

## TL;DR

Your API keys and chats stay in your browser. The server only passes requests to AI services.

Use big-agi.com for convenience, or [run it yourself](installation.md) for full control.

Need help? Join our [Discord](https://discord.gg/MkH4qj2Jp9) or open a [GitHub issue](https://github.com/enricoros/big-agi/issues).

@@ -1,28 +0,0 @@
# Frequently Asked Questions

Quick answers to common questions about Big-AGI. For detailed documentation, see our [Website Docs](https://big-agi.com/docs).

### Versions

<details open>
<summary><b>How do I check my Big-AGI version?</b></summary>

You can see the version in the _News_ section of the app, as per the image below.

![big-AGI version in News](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/docs/pixels/faq-software-version.png)
</details>

<details open>
<summary><b>How do I verify my Vercel deployment version?</b></summary>

You can go to the **Deployments** section of your Vercel project, and at a quick glance see
the latest deployment status, time, and a link to the source code.

![Vercel deployment version](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/docs/pixels/faq-vercel-version.png)

Each deployment links directly to its source code commit.
</details>

---

Missing something? [Open an issue](https://github.com/enricoros/big-agi/issues/new) or [join our Discord](https://discord.gg/MkH4qj2Jp9).

@@ -1,167 +0,0 @@
# LiveFile: Synchronize Your Documents with Local Files

## Introduction

**LiveFile** is a powerful feature in big-AGI that allows you to **pair and synchronize
your documents and code blocks** with files on your local system.

This feature enables a **two-way connection between big-AGI and your local files on disk**,
saving you time and effort.

With LiveFile, you can:

- **Pair** documents and code blocks with local files.
- **Monitor** changes in local files and update content in big-AGI.
- **Refresh** chat attachments with the latest content.
- **Save** edits made in big-AGI back to your local files.
- **Store** AI-generated code and content.

---

## Requirements

- **Supported Browsers:**
  - **Google Chrome** (desktop)
  - **Microsoft Edge** (desktop)
- **Operating Systems:**
  - **Desktop platforms only**
  - **Note:** Mobile devices (iOS and Android) are **not supported** due to browser limitations.
- **File Types:**
  - Designed for **text-based files** (e.g., `.txt`, `.md`, `.js`, `.py`).
- **Performance:**
  - Can handle **dozens of files efficiently**.
- **Limitations:**
  - **File Size Limit**:
    - Supports text files up to **10 MB**.
  - **Pairing Persistence:**
    - LiveFile connections **do not persist across sessions**.
    - After reloading the page, you will need to re-pair your files.
  - **Saving Overwrites:**
    - Saving changes in big-AGI will **overwrite the entire file**.
    - Use external tools for version control or incremental backups.

---

## Enabling LiveFile

LiveFile can be enabled automatically or manually in your Big-AGI workflow.

### Automatic Pairing

When you:

- **Attach**, **drop**, or **paste** a file into a chat message,

LiveFile is **automatically enabled** for that attachment. This means you can start
monitoring and reloading changes without any additional setup.

### Manual Pairing

For existing attachments or code blocks that:

- **Do not have LiveFile enabled** (e.g., created on other devices),
- **Are AI-generated code snippets without an associated file**,

You can manually pair them with a local file.

#### Pairing Attachments

1. **Select the Attachment:**
   - Click on the attachment in the chat to view it in the previewer.

2. **Initiate Pairing:**
   - Click on **"Pair File"** (🔗).
   - If you have open LiveFiles, they will be listed for easy selection.
   - Alternatively, you can select a new file from your local system.

3. **Grant Permissions**
   - When prompted, allow big-AGI to access the file.

#### Pairing Code Blocks

1. **Access Code Block Options:**
   - Click on the code block to reveal the header with options.

2. **Initiate Pairing:**
   - Click the **"Pair File"** button (🔗).
   - Select from your open LiveFiles or choose a new file.

3. **Confirm Pairing:**
   - Grant permission when prompted.

---

## Using LiveFile

### Monitoring Changes

- **Automatic Monitoring:**
  - LiveFile watches for changes in your paired local files.
  - If the file is modified outside of big-AGI, you'll be shown the changes in the LiveFile bar.
  - There is also a **"Replace with File"** option to manually load the latest content and see the changes.

- **Refreshing Content:**
  - Click **"Replace with File"** (🔄) to load the latest content from the paired file into big-AGI.

### Saving Edits Back to Paired Files

- **Editing Attachments or Code Blocks:**
  - Modify the content directly within big-AGI.
  - Attachments: Click on the attachment to open the previewer and click on "Edit" to make changes.
  - Code Blocks: Select "Edit" on the chat message to update code blocks.

- **Saving Changes:**
  - Click **"Save to File"** (💾) to overwrite the local file with your changes.
  - **Note:** This action overwrites the entire file. Ensure this is what you want before proceeding.

---

## Best Practices

- **Monitor External Changes:**
  - Refresh content in big-AGI if the local file has been modified outside the application.

- **Use a Version Control System:**
  - For critical files, consider using Git or other version control systems to track and monitor changes, authorship, and history.

---

## Troubleshooting

- **LiveFile Options Not Visible:**
  - Ensure you are using a **supported desktop browser**.
  - Check that you have the latest version of big-AGI.

- **Permission Issues:**
  - Confirm that you granted big-AGI permission to access your files.
  - Check your browser's settings to ensure file access is allowed.

---

## Technical Details

LiveFile uses the [File System Access API](https://developer.mozilla.org/en-US/docs/Web/API/File_System_Access_API) to
interact with your local files securely. It leverages the [browser-fs-access](https://github.com/GoogleChromeLabs/browser-fs-access) library,
an open-source project by Google Chrome Labs, which provides an easy interface to the File System Access API with fallbacks for broader browser support.

- **Security:**
  - Access to files requires explicit user permission.

- **Performance:**
  - Designed to handle dozens of files efficiently (tested on hundreds).
  - Works with the Big-AGI attachment system to recursively add directories.

- **Browser Support:**
  - Fully supported on **Google Chrome** and **Microsoft Edge** desktop versions.

---

## Another Big-AGI First!

You can significantly boost your productivity and streamline your workflow within big-AGI
by understanding how to utilize LiveFile's features fully.

This feature is in Beta, as there are a few limitations and improvements to be made.
Join us in enjoying and enhancing this feature on [big-AGI.com](https://big-agi.com), or
[GitHub](https://github.com/enricoros/big-AGI) for support and [Discord](https://discord.gg/MkH4qj2Jp9)
to share the love.

@@ -1,141 +0,0 @@
# Enabling Microphone Access for Speech Recognition

This guide explains how to enable microphone access for speech recognition in various browsers and mobile devices.
Ensuring microphone access is essential for using voice features in applications like big-AGI.

## Desktop Browsers

### Google Chrome (All Platforms, recommended)

1. Open the website (e.g., big-AGI) in Chrome.
2. Click the **lock icon** in the address bar.
3. In the dropdown, find **"Microphone"**.
   - Set it to **"Allow"**.
4. If "Microphone" isn't listed:
   - Click on **"Site settings"**.
   - Find **"Microphone"** in the permissions list.
   - Change the setting to **"Allow"**.
5. **Refresh** the page.

### Safari (macOS)

**[Watch the video tutorial: How to enable Speech Recognition in Safari](https://vimeo.com/1010342201)**

If you're seeing a "Speech Recognition permission denied" error, follow these steps:

1. Open **System Settings**.
   - Go to **Privacy & Security** > **Speech Recognition**.
   - Enable Safari in the list of allowed applications.
   - Quit & Open Safari.
2. Click **Safari** in the top menu bar.
   - Select **Settings**.
   - Go to the **Websites** tab.
   - Select **Microphone** from the sidebar.
   - Find big-AGI (or localhost for developers) in the list and set it to **Allow**.
   - Close the Settings window.
3. **Refresh** the page.

This quick and simple fix should get essential voice input working in big-AGI on your Mac.

### Microsoft Edge (Windows)

1. Open the website in Edge.
2. Click the **lock icon** in the address bar.
3. Click **"Permissions for this site"**.
4. Find **"Microphone"**.
   - Set it to **"Allow"**.
5. **Refresh** the page.

### Firefox (All Platforms)

> **Note:** The Speech Recognition API is **not supported** in Firefox. If you're using Firefox, please switch to a supported browser to use speech recognition
> features.

## Mobile Devices

### Android (Chrome)

1. Open the website in Chrome.
2. Tap the **lock icon** in the address bar.
3. Tap **"Permissions"**.
4. Find **"Microphone"**.
   - Set it to **"Allow"**.
5. **Refresh** the page.

### iOS (Safari)

1. Open the **Settings** app on your device.
2. Scroll down and tap **"Safari"**.
3. Tap **"Microphone"**.
4. Ensure **"Ask"** or **"Allow"** is selected.
5. Return to Safari and open the website.
6. If prompted, allow microphone access.
7. **Refresh** the page.

### iOS (Chrome)

> **Note:** Chrome on iOS uses Safari's engine due to system limitations. Microphone permissions are managed through iOS settings.

1. Open the **Settings** app.
2. Scroll down and tap **"Chrome"**.
3. Ensure **"Microphone"** is toggled **on**.
4. Open Chrome and navigate to the website.
5. If prompted, allow microphone access.
6. **Refresh** the page.

## Troubleshooting

If you're still experiencing issues after enabling microphone access:

**Check System Permissions (macOS):**

- Open **System Settings**.
- Go to **"Privacy & Security"**.
- Select the **"Privacy"** tab.
- Click **"Microphone"** in the sidebar.
- Ensure your browser (e.g., Chrome, Safari) is checked.
- You may need to unlock the settings by clicking the lock icon at the bottom.

**Check Microphone Access (Windows):**

- Open **Settings**.
- Go to **"Privacy"** > **"Microphone"**.
- Ensure **"Allow apps to access your microphone"** is **on**.
- Scroll down and make sure your browser is allowed.

**Close Other Applications:**

- Close any applications that might be using the microphone.

**Restart the Browser:**

- Close all browser windows and reopen.

**Update Your Browser:**

- Ensure you're using the latest version.

**Check for Browser Extensions:**

- Disable extensions that might block access to the microphone.

For persistent issues, consult your browser's official support resources or contact big-AGI support.

## Technical Details

Big-AGI uses the [Web Speech API (SpeechRecognition)](https://developer.mozilla.org/en-US/docs/Web/API/SpeechRecognition)
to transcribe spoken words into text. This API provides real-time transcription with live previews and works on most
modern mobile and desktop browsers.

**Note on Browser Support:**

| Browser | Support Level | Notes |
|---------|---------------|-------|
| Google Chrome | ✅ Recommended | Fully supported on desktop and Android. Preferred for best experience. |
| Safari | ✅ Supported | Requires macOS/iOS 14 or later. |
| Microsoft Edge | ✅ Supported | Fully supported on desktop. |
| Firefox | ❌ Not Supported | SpeechRecognition API not available. |

**Recommendation:**
For the best experience with speech recognition features, we strongly recommend using Google Chrome.
Ensure your browser is up to date to benefit from the latest features and security updates.
+8 -39
@@ -7,7 +7,7 @@ process for your own instance of big-AGI and related products.

**Try big-AGI** - You don't need to install anything if you want to play with big-AGI
and have your own API keys for various model services. You can access our free instance on [big-AGI.com](https://big-agi.com).
The free instance runs the latest `main` branch from this repository.
The free instance runs the latest `main-stable` branch from this repository.

## 🧩 Build-your-own

@@ -72,8 +72,9 @@ Create your GitHub fork, create a Vercel project over that fork, and deploy it.

### Deploy on Cloudflare

> Note: Cloudflare Pages deployment has limitations due to Edge Runtime constraints.
> See the [Cloudflare guide](deploy-cloudflare.md) for details and known issues.
Deploy on Cloudflare's global network by installing big-AGI on
Cloudflare Pages. Check out the [Cloudflare Installation Guide](deploy-cloudflare.md)
for step-by-step instructions.

### Docker Deployments

@@ -98,42 +99,10 @@ or follow the steps below for a quick start.
```
Access your big-AGI instance at `http://localhost:3000`.

If you deploy big-AGI behind a reverse proxy, you may want to check out the [Reverse Proxy Configuration Guide](deploy-reverse-proxy.md).
### Midori AI Subsystem for Docker Deployment

### Kubernetes Deployment

Deploy big-AGI on a Kubernetes cluster for enhanced scalability and management. Follow these steps for a Kubernetes deployment:

1. Clone the big-AGI repository:
   ```bash
   git clone https://github.com/enricoros/big-AGI.git
   cd big-AGI
   ```

2. Configure the environment variables:
   ```bash
   cp docs/k8s/env-secret.yaml env-secret.yaml
   vim env-secret.yaml # Edit the file to set your environment variables
   ```

3. Apply the Kubernetes configurations:
   ```bash
   kubectl create namespace ns-big-agi
   kubectl apply -f docs/k8s/big-agi-deployment.yaml -f env-secret.yaml
   ```

4. Verify the deployment:
   ```bash
   kubectl -n ns-big-agi get svc,pod,deployment
   ```

5. Access the big-AGI application:
   ```bash
   kubectl -n ns-big-agi port-forward service/svc-big-agi 3000:3000
   ```
   Your big-AGI instance is now accessible at `http://localhost:3000`.

For more detailed instructions on Kubernetes deployment, including updating and troubleshooting, refer to our [Kubernetes Deployment Guide](deploy-k8s.md).
Follow the instructions found on [Midori AI Subsystem Site](https://io.midori-ai.xyz/subsystem/manager/)
for your host OS. After completing the setup process, install the Big-AGI docker backend to the Midori AI Subsystem.

## Enterprise-Grade Installation

@@ -145,6 +114,6 @@ Enjoy all the features of big-AGI without the hassle of infrastructure managemen

Join our vibrant community of developers, researchers, and AI enthusiasts. Share your projects, get help, and collaborate with others.

- [Discord Community](https://discord.gg/MkH4qj2Jp9)
- [X (Twitter)](https://x.com/enricoros)
- [Twitter](https://twitter.com/yourusername)

For any questions or inquiries, please don't hesitate to [reach out to our team](mailto:hello@big-agi.com).

@@ -1,52 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-big-agi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: big-agi
  name: deployment-big-agi
  namespace: ns-big-agi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: big-agi
  strategy: {}
  template:
    metadata:
      labels:
        app: big-agi
    spec:
      containers:
        - image: ghcr.io/enricoros/big-agi:latest
          name: big-agi
          ports:
            - containerPort: 3000
          args:
            - next
            - start
            - -p
            - "3000"
          envFrom:
            - secretRef:
                name: env
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: big-agi
  name: svc-big-agi
  namespace: ns-big-agi
spec:
  ports:
    - name: "http"
      port: 3000
      targetPort: 3000
  selector:
    app: big-agi
@@ -1,49 +0,0 @@
---
apiVersion: v1
kind: Secret
metadata:
  name: env
  namespace: ns-big-agi
type: Opaque
stringData:
  # IMPORTANT: This file contains sensitive information. Do not commit changes to version control.
  # All variables are optional. Fill in only the ones you need.
  #
  # For the latest information on all the environment variables, see /docs/environment-variables.md
  #

  # LLMs
  OPENAI_API_KEY: ""
  OPENAI_API_HOST: ""
  OPENAI_API_ORG_ID: ""
  ALIBABA_API_HOST: ""
  ALIBABA_API_KEY: ""
  AZURE_OPENAI_API_ENDPOINT: ""
  AZURE_OPENAI_API_KEY: ""
  ANTHROPIC_API_KEY: ""
  ANTHROPIC_API_HOST: ""
  DEEPSEEK_API_KEY: ""
  GEMINI_API_KEY: ""
  GROQ_API_KEY: ""
  LOCALAI_API_HOST: ""
  LOCALAI_API_KEY: ""
  MISTRAL_API_KEY: ""
  MOONSHOT_API_KEY: ""
  OLLAMA_API_HOST: ""
  OPENPIPE_API_KEY: ""
  OPENROUTER_API_KEY: ""
  PERPLEXITY_API_KEY: ""
  TOGETHERAI_API_KEY: ""
  XAI_API_KEY: ""

  # Browse
  PUPPETEER_WSS_ENDPOINT: ""

  # Search
  GOOGLE_CLOUD_API_KEY: ""
  GOOGLE_CSE_ID: ""

  # Text-To-Speech: Eleven Labs
  ELEVENLABS_API_KEY: ""
  ELEVENLABS_API_HOST: ""
  ELEVENLABS_VOICE_ID: ""
Binary file not shown. (Before: 55 KiB)
Binary file not shown. (Before: 62 KiB)
Binary file not shown. (Before: 234 KiB)
@@ -1,47 +0,0 @@
---
unlisted: true
---

# ReAct: question answering with Reasoning and Actions

## What is ReAct?

[ReAct](https://arxiv.org/abs/2210.03629) (Reason+Act) is a classic AI question-answering technique
that combines reasoning with actions to provide informed answers.

Within Big-AGI, users can invoke ReAct to ask complex questions that require multiple steps to answer.

| Mode | Activation | Information Sources | Reasoning Visibility | When to Use |
|------|------------|---------------------|----------------------|-------------|
| Chat | Just type and send | **Pre-trained knowledge only** | Only shows final response | Quick answers, general knowledge queries |
| ReAct | Type "/react" before the question | **Web loads, Web searches, Wikipedia, calculations** | Shows step-by-step thought process | Complex, multi-step, or research-based questions |

Example of ReAct in action, taking a question about current events, googling results, opening a page, and summarizing the information:

https://github.com/user-attachments/assets/c3480428-9ab8-4257-a869-2541bf44a062

The following tools are implemented in Big-AGI:

- **browse**: loads web pages (URLs) and extracts information, using a correctly configured `Tools > Browsing` API
- **search**: searches the web to produce page URLs, using a correctly configured `Tools > Google Search` ([Google Programmable Search Engine](https://programmablesearchengine.google.com/about/)) API
- **wikipedia**: looks up information on Wikipedia pages
- **calculate**: performs mathematical calculations by executing typescript code
  - warning: (!) unsafe and dangerous, do not use for untrusted code/LLMs

## How to Use ReAct in Big-AGI

1. **Invoking ReAct**: Type "/react" followed by your question in the chat.
2. **What to Expect**:

   - An ephemeral space will show the AI's thought process and actions, showing all the steps taken.
   - The final answer will appear in the main chat.

3. **Available Actions**: Web searches, Wikipedia lookups, calculations, and optionally web browsing.

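For example, try `/react What is the current population of Tokyo, and how many times larger is it than that of Dublin?` (an illustrative question that needs a web search plus a calculation before the final answer appears in the chat).
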
## Good to know:
|
||||
|
||||
- **ReAct operates in isolation** from the main chat history.
|
||||
- It **will take longer than standard responses** due to multiple steps.
|
||||
- Web searches and browsing may have privacy implications, and require **tool configuration** in the UI.
|
||||
- Errors or limitations in accessing external resources may affect results.
|
||||
- ReAct does not use the [Tool or Function Calling](https://platform.openai.com/docs/guides/function-calling) feature of AI models, rather uses the old school approach of parsing and executing actions.
|
||||
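To make the last point concrete, here is a minimal sketch of such a parse-and-execute loop. It is illustrative only: the actual prompt format, tool names, and function signatures in Big-AGI's ReAct agent differ.

```typescript
// Hypothetical parse-and-execute loop: ask the model, extract an "Action: tool[input]"
// line from the reply, run the tool, and feed the observation back into the prompt.
type ToolFn = (input: string) => Promise<string>;

async function reactLoop(
  question: string,
  llm: (prompt: string) => Promise<string>,
  tools: Record<string, ToolFn>, // e.g. browse, search, wikipedia, calculate
  maxSteps = 8,
): Promise<string> {
  let transcript = `Question: ${question}`;
  for (let step = 0; step < maxSteps; step++) {
    const reply = await llm(transcript);
    const answer = reply.match(/Answer:\s*([\s\S]*)/);
    if (answer) return answer[1].trim(); // the model declared a final answer
    const action = reply.match(/Action:\s*(\w+)\[([\s\S]*?)\]/);
    if (!action || !tools[action[1]]) throw new Error(`Unparsable or unknown action in: ${reply}`);
    const observation = await tools[action[1]](action[2]);
    transcript += `\n${reply}\nObservation: ${observation}`;
  }
  throw new Error('ReAct: too many steps without a final answer');
}
```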
@@ -1,17 +0,0 @@
import { defineConfig } from "eslint/config";
import path from "node:path";
import { fileURLToPath } from "node:url";
import js from "@eslint/js";
import { FlatCompat } from "@eslint/eslintrc";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);
const compat = new FlatCompat({
    baseDirectory: __dirname,
    recommendedConfig: js.configs.recommended,
    allConfig: js.configs.all
});

export default defineConfig([{
    extends: compat.extends("next/core-web-vitals"),
}]);
@@ -1,39 +0,0 @@
## Knowledge Base

Architecture and system documentation is available in the `/kb/` knowledge base, for use by AI agents and developers.

**Structure:**
- `/kb/KB.md` - Already in context: this text
- `/kb/vision-inlined.md` - Already in context (next section): long-term vision and north stars
- `/kb/modules/` - Core business logic (e.g. AIX)
- `/kb/systems/` - Infrastructure (routing, startup)

### Modules Documentation

#### AIX - AI Communication Framework
- **[AIX.md](modules/AIX.md)** - AIX streaming architecture documentation
- **[AIX-callers-analysis.md](modules/AIX-callers-analysis.md)** - Analysis of AIX entry points, call chains, common and different rendering, error handling, etc.

#### CSF - Client-Side Fetch
- **[CSF.md](systems/client-side-fetch.md)** - Direct browser-to-API communication for LLM requests

### Systems Documentation

#### Core Platform Systems
- **[app-routing.md](systems/app-routing.md)** - Next.js routing, provider stack, and display state hierarchy
- **[LLM-parameters-system.md](systems/LLM-parameters-system.md)** - Language model parameter flow across the system
- **[LLM-vendor-integration.md](modules/LLM-vendor-integration.md)** - Adding new LLM providers

### KB Guidelines

#### Writing Style

- **Direct and factual** - No marketing language
- **Present tense** - "AIX handles streaming" not "AIX will handle"
- **Active voice** - "The system processes" not "Processing is done by"
- **Concrete examples** - Show actual code/config when helpful, briefly

#### Maintenance

- Remove outdated knowledge base information when detected
- Keep cross-references current when files move
@@ -1,145 +0,0 @@
# AIX Chat Generation Calls Analysis

This document analyzes all AIX function callers and their patterns for message removal, placeholder handling, and error management.

## AIX Function Architecture

### Three-Tier Call Hierarchy

**Core AIX Functions** (Direct tRPC API callers):
- `aixChatGenerateContent_DMessage_FromConversation` - 9 callers (conversation streaming)
- `aixChatGenerateContent_DMessage_orThrow` - 6 callers (direct request/response)
- `aixChatGenerateText_Simple` - 12 callers (text-only utilities)

**Utility Layer** (Hooks & Functions):
- Conversation management, persona processing, content generation utilities

**UI Layer** (React Components):
- User-facing interfaces with rich error states and fallback mechanisms

## Core Function Callers Analysis

### Conversation-Based Callers (`_FromConversation`)

| **Caller** | **Context** | **Message Removal** | **Placeholder** | **Error Handling** |
|------------|-------------|---------------------|-----------------|--------------------|
| **Chat Persona** | `'conversation'` | `messageWasInterruptedAtStart()` → `removeMessage()` | None | Error fragments |
| **XE Chat Generate** | `'conversation'` | `messageWasInterruptedAtStart()` → `removeMessage()` | `'...'` placeholder | Error fragments via messageEditor |
| **Beam Scatter** | `'beam-scatter'` | `messageWasInterruptedAtStart()` → empty message | `SCATTER_PLACEHOLDER` | Ray status update |
| **Beam Gather** | `'beam-gather'` | `messageWasInterruptedAtStart()` → clear fragments | `GATHER_PLACEHOLDER` | Re-throw errors |
| **Beam Follow-up** | `'beam-followup'` | `messageWasInterruptedAtStart()` → remove message | `FOLLOWUP_PLACEHOLDER` | Status updates |
| **ScratchChat** | `'scratch-chat'` | `aborted && !fragments` → array removal | `SCRATCH_CHAT_PLACEHOLDER` | Error fragments |
| **Telephone** | `'call'` | None | None | Basic handling |
| **ReAct Agent** | `'chat-react-turn'` | None | None | Append errors |
| **Variform** | `'_DEV_'` | None | None | Throw errors |

### Direct Request Callers (`aixChatGenerateContent_DMessage`)

| **Caller** | **Context** | **Message Removal** | **Error Handling** |
|------------|-------------|---------------------|--------------------|
| **Auto Follow-ups** | `'chat-followup-*'` | `fragmentDelete()` on failure | `fragmentReplace()` with error |
| **Gen CR Diffs** | `'aifn-gen-cr-diffs'` | None | State-based handling |
| **Code Fixup** | `'fixup-code'` | None | Throw errors |
| **Attachment Prompts** | `'chat-attachment-prompts'` | None | Throw errors |

### Text-Only Utilities (`aixChatGenerateText_Simple`)

| **Utility** | **Purpose** | **Error Strategy** | **Called By** |
|-------------|-------------|--------------------|---------------|
| **conversationTitle** | Auto-generate chat titles | Try/catch with fallback | UI components |
| **conversationSummary** | Generate summaries | Try/catch with fallback | Chat drawer |
| **useStreamChatText** | Generic text streaming | Error state management | FlattenerModal |
| **useLLMChain** | Multi-step processing | Step-by-step handling | Persona creation |
| **imaginePromptFromText** | Text → image prompts | Simple propagation | Image generation |
| **aifnBeamGenerateBriefing** | Beam summaries | Null return on error | Beam completion |
| **useAifnPersonaGenIdentity** | Extract persona identity | Query error handling | Persona flows |
| **DiagramsModal** | Generate diagrams | Component error state | Manual generation |

## Message Removal Patterns

### 1. Complete Message Removal
- **Chat Persona**: `messageWasInterruptedAtStart()` → `messageEditor.removeMessage()`
- **ScratchChat**: `outcome === 'aborted' && !fragments?.length` → array removal
- **Trigger**: Message aborted before any content was generated

### 2. Fragment-Level Management
- **Beam Gather**: Clear fragments array but keep message structure
- **Auto Follow-ups**: Delete specific placeholder fragments on failure
- **Purpose**: Maintain message structure while removing failed content

### 3. Empty Message Replacement
- **Beam Scatter**: Replace with `createDMessageEmpty()` but preserve ray structure
- **Purpose**: Keep UI structure intact while indicating failure

### 4. No Removal Strategy
- **Text-only functions**: Use fallback values, error states, or null returns
- **Simple callers**: Propagate errors upstream for handling

## Error Handling by Layer

### UI Layer (Components)
- **Pattern**: Rich error states with user-facing messages
- **Examples**: DiagramsModal, FlattenerModal
- **Features**: Retry mechanisms, fallback UI, loading states

### Utility Layer (Hooks/Functions)
- **Pattern**: Graceful degradation with fallbacks
- **Examples**: conversationTitle, conversationSummary
- **Features**: Silent failures, default values, try/catch blocks

### Core Layer (Direct API)
- **Pattern**: Minimal handling, error propagation
- **Examples**: Code Fixup, Attachment Prompts
- **Features**: Assumes upstream error handling

## Key Implementation Details

### Message Removal Detection
```typescript
// Core detection logic
function messageWasInterruptedAtStart(message: Pick<DMessage, 'generator' | 'fragments'>): boolean {
  return message.generator?.tokenStopReason === 'client-abort' && message.fragments.length === 0;
}
```

### Placeholder Management
- **Initialization**: `createPlaceholderVoidFragment(placeholderText)`
- **Replacement**: During streaming updates or on completion
- **Cleanup**: Delete on error to avoid stale content (see the sketch below)
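A minimal sketch of that lifecycle, with a generic editor interface standing in for the real message-editor API (only `createPlaceholderVoidFragment`, `fragmentReplace`, and `fragmentDelete` are names from the codebase; everything else here is illustrative):

```typescript
// Illustrative placeholder lifecycle: initialize, replace while streaming, clean up on error.
async function streamWithPlaceholder(
  editor: { append(f: unknown): void; replace(prev: unknown, next: unknown): void; remove(f: unknown): void },
  makePlaceholder: (text: string) => unknown, // stands in for createPlaceholderVoidFragment()
  stream: AsyncIterable<unknown>,
): Promise<void> {
  let current = makePlaceholder('…');   // 1. initialize with a descriptive placeholder
  editor.append(current);
  try {
    for await (const next of stream) {
      editor.replace(current, next);    // 2. replace during streaming updates
      current = next;
    }
  } catch (error) {
    editor.remove(current);             // 3. clean up on error to prevent stale content
    throw error;
  }
}
```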
### Context Patterns
- **Production**: `'conversation'`, `'beam-scatter'`, `'scratch-chat'`
- **Features**: `'chat-followup-*'`, `'fixup-code'`, `'ai-diagram'`
- **Development**: `'_DEV_'`

## Best Practices

### Message Removal
- Use `messageWasInterruptedAtStart()` for consistent detection
- Only remove messages with no content that were client-aborted
- Consider UI context when choosing a removal vs. clearing strategy

### Error Handling
- **Fragment-level**: Use `messageEditor.fragmentReplace()` with error fragments
- **Message-level**: Use `messageEditor.removeMessage()` or array removal
- **Status-level**: Update component state for UI feedback

### Placeholder Management
- Initialize with descriptive placeholders using `createPlaceholderVoidFragment()`
- Replace during streaming updates
- Clean up on error to prevent stale content

## Architectural Insights

1. **Layered Error Handling**: Sophistication increases closer to the UI
2. **Context Specialization**: Different contexts for different use cases
3. **Streaming vs Non-Streaming**: Conversation functions stream, utilities typically don't
4. **Message vs Fragment Management**: Different strategies for different UI needs

The most sophisticated handling is in the **Beam modules** and **Chat Persona**, with comprehensive removal logic, while simpler callers rely on upstream error handling.

## Code References

- **Core function**: `src/modules/aix/client/aix.client.ts:aixChatGenerateContent_DMessage_FromConversation`
- **Removal check**: `src/common/stores/chat/chat.message.ts:388:messageWasInterruptedAtStart()`
- **Placeholder creation**: `src/common/stores/chat/chat.fragments.ts:createPlaceholderVoidFragment()`
@@ -1,190 +0,0 @@
# AIX

AIX is a client/server library for integrating advanced AI capabilities into web applications.

## Overview

AIX provides real-time, type-safe communication between a TypeScript application and AI providers.

Built with tRPC, it manages the lifecycle of AI-generated content from request to rendering, supporting both streaming and non-streaming AI providers.

## Features

- Content Generation
- Multi-Modal streaming/non-streaming
- Throttled batching and error handling
- Server-side timeout/retry
- Function Calling and Code Execution
- Complex AI Workflows (future)
- Embeddings / Information Retrieval / Image Manipulation (future)

## AIX Providers support

| Service    | Chat | Function Calling | Multi-Modal Input | Cont. (1) | Streaming | Idiosyncratic |
|------------|------|------------------|-------------------|-----------|-----------|---------------|
| Alibaba    | ✅ | ✅ | | ✅ | Yes + 📦 | |
| Anthropic  | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | |
| Azure      | ✅ | ✅ | | ✅ | Yes + 📦 | |
| Deepseek   | ✅ | ❌ (rejected) | | ✅ | Yes + 📦 | |
| Gemini     | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | Code ex.: ✅ |
| Groq       | ✅ | ✅ + Parallel | | ✅ | Yes + 📦 | |
| LM Studio  | ✅ | ❌ (not working) | | ❌ | Yes + 📦 | |
| Local AI   | ✅ | ✅ | | ❌ | Yes + 📦 | |
| Mistral    | ✅ | ✅ | | ✅ | Yes + 📦 | |
| OpenAI     | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | |
| OpenPipe   | ✅ | ✅ | Img: ✅ | ✅ | Yes + 📦 | |
| OpenRouter | ✅ | ❌ (inconsistent) | | ✅ | Yes + 📦 | |
| Perplexity | ✅ | ❌ (rejected) | | ✅ | Yes + 📦 | |
| TogetherAI | ✅ | ✅ | | ✅ | Yes + 📦 | |
| xAI        | | | | | | |
| Z.ai       | ✅ | ✅ | Img: ✅ | ✅ | Yes + 📦 | Thinking mode |
| Ollama (2) | ❌ (broken) | ? | | | | |

Notes:

- 1: Continuation marks: a. sends reason=max-tokens (streaming/non-streaming), b. TBA
- 2: Ollama has not been ported to AIX yet due to the custom APIs.

## 1. System Architecture

The subsystem comprises three main components:

1. **Client (e.g. Next.js Frontend)**

   - Initiates requests
   - Renders AI-generated content in real-time
   - Reconstructs streamed data

2. **Server (e.g. Next.js Backend)**

   - Acts as an intermediary between client and AI providers
   - Handles request preparation, dispatching, and response processing
   - Streams responses back to the client

3. **Upstream AI Providers**

   - Generate AI content based on requests

### ChatGenerate workflow:

1. Request Initialization: AIX Client prepares and sends request (systemInstruction, messages=AixWire_Parts[], etc.) to AIX Server
2. Dispatch Preparation: AIX Server prepares for upstream communication
3. AI Provider Interaction: AIX Server communicates with AI Provider (streaming or non-streaming)
4. Data Decoding, Transformation and Transmission: AIX Server sends AixWire_Particles to AIX Client
5. Client-side Processing: Client's ContentReassembler processes AixWire_Particles into a list (likely a single) of multi-fragment (DMessageContentFragment[]) messages (a simplified sketch follows)
6. Completion: AIX Server sends 'done' control message, AIX Client finalizes data update
7. Error Handling: AIX Server sends specific error messages when necessary
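A hypothetical sketch of step 5. The real AixWire_Particles union and ContentReassembler are much richer (tool invocations, images, docs, control messages), but the accumulate-then-finalize flow is the same; all names below are illustrative:

```typescript
// Illustrative particle reassembly on the client; not the actual wire types.
type Particle =
  | { t: 'start' }
  | { t: 'text'; value: string }
  | { t: 'error'; message: string }
  | { t: 'done' };

function reassemble(particles: Particle[]): string[] {
  const fragments: string[] = [];
  for (const p of particles) {
    if (p.t === 'text') {
      // append to the open text fragment, or start a new one
      if (fragments.length) fragments[fragments.length - 1] += p.value;
      else fragments.push(p.value);
    } else if (p.t === 'error') fragments.push(`[error] ${p.message}`);
    else if (p.t === 'done') break; // step 6: finalize the data update
  }
  return fragments;
}
```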
## 2. Files and Folders

AIX is organized into the following files and folders:

1. Client-Side (`/client/`):

   - `aix.client.ts`: Main client-side entry point for AIX operations.
   - `aix.client.chatGenerateRequest.ts`: Handles conversion of chat messages to AIX-compatible format (AixWire_Content, AixWire_Parts, etc.).

2. Server-Side (`/server/`):

   - API (`/server/api/`) - Client to Server communication:
     - `aix.router.ts`: Defines the tRPC router for AIX operations.
     - `aix.wiretypes.ts`: Contains Zod schemas for types and calls incoming from the client (AixWire_Parts, AixWire_Content, AixWire_Tooling, AixWire_API, ...), and outgoing (AixWire_Particles)

   - Dispatch (`/server/dispatch/`) - Server to AI Provider communication:
     - `/server/dispatch/chatGenerate/`: Content Generation with chat-style inputs:
       - `./adapters/`: Adapters for creating API requests for different AI protocols (Anthropic, Bedrock, Gemini, OpenAI Chat Completions, OpenAI Responses, xAI Responses).
       - `./parsers/`: Parsers for parsing streaming/non-streaming responses from different AI protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, OpenAI Responses).
       - `chatGenerate.dispatch.ts`: Creates a pipeline to execute Chat Generation to a specific provider.
       - `ChatGenerateTransmitter.ts`: Used to serialize and transmit AixWire_Particles to the client.
     - `/server/dispatch/wiretypes/`: AI provider Wire Types:
       - Type definitions for different AI providers/protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, xAI).
     - `stream.demuxers.ts`: Handles demuxing of different stream formats.

## 3. Architecture Diagram

```mermaid
sequenceDiagram
    participant AIX Client
    participant AIX Server
    participant PartTransmitter
    participant AI Provider
    AIX Client ->> AIX Client: Initialize ContentReassembler
    AIX Client ->> AIX Client: Convert DMessage*Part to AixWire_Parts
    AIX Client ->> AIX Server: Send messages (arrays of AixWire_Parts)
    AIX Server ->> AIX Server: Prepare Dispatch (Upstream request, demux, parsing)

    alt Dispatch Preparation Error
        AIX Server ->> AIX Client: Send `dispatch-prepare` error message
    else Dispatch Fetch
        AIX Server ->> AI Provider: Send AI-provider specific stream/non-stream request
        AIX Server ->> AIX Client: Send 'start' control message
        AIX Server ->> PartTransmitter: Initialize part particle serialization

        alt Streaming AI Provider
            loop Until stream end or error
                AI Provider ->> AIX Server: Stream response chunk
                AIX Server ->> AIX Server: Demux chunk into DispatchEvents
                loop For each AI-provider specific DispatchEvent
                    AIX Server ->> AIX Server: Parse DispatchEvent
                    AIX Server ->> PartTransmitter: (Parser) Calls serialization functions
                    PartTransmitter ->> PartTransmitter: Generate and throttle AixWire_PartParticles
                    PartTransmitter -->> AIX Server: Yield AixWire_PartParticle
                end
                AIX Server ->> AIX Client: Send accumulated AixWire_PartParticles
            end
            AIX Server ->> PartTransmitter: Request any remaining particles
            PartTransmitter -->> AIX Server: Yield any final AixWire_PartParticles
            AIX Server ->> AIX Client: Send final AixWire_PartParticles (if any)
        else Non-Streaming AI Provider
            AI Provider ->> AIX Server: Send AI-provider specific complete response
            alt AI-provider specific full-response parser
                AIX Server ->> AIX Server: Parse full response
                AIX Server ->> PartTransmitter: Call particle serialization functions
                PartTransmitter ->> PartTransmitter: Generate AixWire_PartParticle
                PartTransmitter -->> AIX Server: Yield ALL AixWire_PartParticle
            end
            AIX Server ->> AIX Client: Send all AixWire_PartParticles
        end
        AIX Server ->> AIX Client: Send 'done' control message
        loop For each received batch of particles
            AIX Client ->> AIX Client: ContentReassembler processes particles into DMessage*Part
            alt DMessageTextPart
                AIX Client ->> AIX Client: Update UI with text content
            else DMessageImageRefPart
                AIX Client ->> AIX Client: Load and display image
            else DMessageToolInvocationPart
                AIX Client ->> AIX Client: Process tool invocation (dev only)
            else DMessageToolResponsePart
                AIX Client ->> AIX Client: Process tool response (dev only)
            else DMessageErrorPart
                AIX Client ->> AIX Client: Display error message
            else DMessageDocPart
                AIX Client ->> AIX Client: Process and display document
            else DVoidPlaceholderPart
                AIX Client ->> AIX Client: Handle placeholder (non-submitted)
            end
        end
        AIX Client ->> AIX Client: Finalize data update
    end

    alt Error Handling
        AIX Server ->> AIX Client: Send 'error' specific control messages
    end

    note over AIX Server, AI Provider: Server-side Timeout/Retry mechanism
    loop Retry on timeout (server-side)
        AIX Server ->> AI Provider: Retry request
    end

    note over AIX Client: Client-side Timeout mechanism
    AIX Client ->> AIX Client: Timeout if no response received within set time
```

---

### 2025-03-14 Update

AIX is used in production in Big-AGI and is stable and performant.
The code is tightly coupled with the tRPC framework and the rest of our codebase,
so it is not recommended to use it outside of our ecosystem.

For a great TypeScript alternative, we recommend the Vercel AI SDK.
@@ -1,126 +0,0 @@
# LLM Vendor Integration Guide

How to add support for new LLM providers in Big-AGI. There are two integration paths, and
the dynamic backend path is strongly preferred for new vendors.

## Integration Paths

### Path 1: Dynamic Backend (preferred)

For any provider with an **OpenAI-compatible API** (which is nearly all new providers).

**Surface area**: 1-2 files, no UI changes, no registry changes.

A dynamic backend provides:
- Hostname-based auto-detection when the user adds the provider's API URL
- Automatic model list parsing with vendor-specific metadata (pricing, context windows, capabilities)
- Zero UI code - uses the existing "Custom OpenAI-compatible" service setup

**Files touched**:
- `src/modules/llms/server/openai/models/{vendor}.models.ts` (required) - model definitions + hostname heuristic
- `src/modules/llms/server/openai/wiretypes/{vendor}.wiretypes.ts` (optional) - Zod schemas for vendor-specific wire format
- `src/modules/llms/server/listModels.dispatch.ts` - add heuristic to the detection chain (2 lines)

**What the model file must export**:
```typescript
// 1. Hostname heuristic - returns true when the user's API URL matches this vendor
export function vendorHeuristic(hostname: string): boolean {
  return hostname.includes('.vendor-domain.com');
}

// 2. Model converter - transforms vendor's /v1/models response to ModelDescriptionSchema[]
export function vendorModelsToModelDescriptions(wireModels: unknown): ModelDescriptionSchema[] {
  // Parse wire format, map to ModelDescriptionSchema with:
  // - id, label, description
  // - contextWindow, maxCompletionTokens
  // - interfaces (Chat, Vision, Fn, Reasoning, etc.)
  // - chatPrice (input/output per token)
  // - parameterSpecs (temperature, etc.)
}
```

**Existing examples**: `novita.models.ts`, `chutesai.models.ts`, `fireworksai.models.ts`

A vendor icon MUST also be provided, matching the other icons in `src/common/components/icons/vendors/`.
Make sure all the information is available in case we want to promote the vendor to a full registered vendor in the future.

### Path 2: Registered Vendor (heavyweight, discouraged for new providers)

Full first-class integration with dedicated UI, its own dialect, and a registry entry. Reserved for
providers with **non-OpenAI protocols** (Anthropic, Gemini, Ollama) or providers with enough
user demand to warrant a dedicated setup flow.

**Surface area**: 5+ files across 3 directories.

**Files touched**:
- `src/modules/llms/vendors/{vendor}/{vendor}.vendor.ts` - IModelVendor implementation
- `src/modules/llms/vendors/{vendor}/{VendorName}ServiceSetup.tsx` - React UI setup component
- `src/modules/llms/vendors/vendors.registry.ts` - registry entry + ModelVendorId union
- `src/modules/llms/server/openai/models/{vendor}.models.ts` - model definitions
- `src/modules/llms/server/listModels.dispatch.ts` - dispatch case
- Possibly a server protocol adapter, if not OpenAI-compatible
- Possibly more files, e.g. wires, etc.
- See existing providers and the commits that added them for the full scope

**When to use this path**: Only when the provider has a meaningfully different API protocol
(not OpenAI-compatible), or when there is significant user demand AND the provider offers
unique capabilities that benefit from dedicated UI (e.g., Ollama's local model management).

When using this path, please add links to the upstream documentation. Make sure all constants
are correctly handled everywhere, especially for provider-based switches.

## Decision Criteria

| Question | Dynamic | Registered |
|----------|---------|------------|
| OpenAI-compatible API? | Yes - use dynamic | Only if not OAI-compatible |
| Needs custom auth UI? | No - uses generic fields | Yes - custom setup form |
| Unique protocol? | No | Yes (Anthropic, Gemini, Ollama) |
| User demand level | Any | High + sustained |
| Maintenance burden | Minimal | Significant (5+ files) |

## For External Contributors / Vendor Requests

When vendors or community members request integration via GitHub issues:

1. **Point them to the dynamic backend path** - it's faster to implement, review, and maintain
2. **Requirements for a dynamic backend PR**:
   - Model file with heuristic + converter exporting `ModelDescriptionSchema[]`
   - Wire types if the vendor's `/v1/models` response has non-standard fields
   - Vendor icon (SVG preferred) in `src/common/components/icons/vendors/`
   - Two-line addition to the heuristic chain in `listModels.dispatch.ts`
3. **Do not accept**: New registered vendors for OpenAI-compatible providers. The maintenance
   cost of a full vendor (UI component, registry entry, dispatch case) is not justified when
   dynamic detection achieves the same result with a fraction of the code.

## Architecture Notes

### How Dynamic Detection Works

In `listModels.dispatch.ts`, the `case 'openai':` handler:
1. Fetches `/v1/models` from the user-provided API host
2. Runs the hostname through a chain of heuristics (in order)
3. The first matching heuristic's converter is used to parse the models
4. Falls back to stock OpenAI parsing if no heuristic matches (see the sketch below)
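A sketch of that chain under assumed names; only `ModelDescriptionSchema` and the heuristic/converter pattern come from this guide, the plumbing names are illustrative:

```typescript
// Illustrative heuristic chain; the real code in listModels.dispatch.ts differs.
type ModelDescription = unknown; // stands in for ModelDescriptionSchema

interface VendorDetector {
  heuristic: (hostname: string) => boolean;             // e.g. vendorHeuristic()
  convert: (wireModels: unknown) => ModelDescription[]; // e.g. vendorModelsToModelDescriptions()
}

function detectAndParseModels(
  hostname: string,
  wireModels: unknown,
  chain: VendorDetector[],
  stockOpenAIParser: (wireModels: unknown) => ModelDescription[],
): ModelDescription[] {
  for (const detector of chain)            // 2. run heuristics in order
    if (detector.heuristic(hostname))
      return detector.convert(wireModels); // 3. first match wins
  return stockOpenAIParser(wireModels);    // 4. fall back to stock OpenAI parsing
}
```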
### Hostname Security

Hostname matching uses `llmsHostnameMatches()` from `openai.access.ts`, which parses the
URL properly to prevent DNS spoofing. Always use `.includes()` on the parsed hostname,
never on the raw URL string.

### Key Types

- `ModelDescriptionSchema` (`llm.server.types.ts`) - output type for all model converters
- `DModelInterfaceV1` (`llms.types.ts`) - capability flags (Chat, Vision, Fn, Reasoning, etc.)
- `IModelVendor` (`vendors/IModelVendor.ts`) - interface for registered vendors only
- `ManualMappings` / `KnownModel` (`models.mappings.ts`) - server-side model patches

### File Locations

- Dynamic backends: `src/modules/llms/server/openai/models/`
- Wire types: `src/modules/llms/server/openai/wiretypes/`
- Dispatch: `src/modules/llms/server/listModels.dispatch.ts`
- Registered vendors: `src/modules/llms/vendors/*/`
- Vendor icons: `src/common/components/icons/vendors/`
- Type definitions: `src/modules/llms/server/llm.server.types.ts`
@@ -1,120 +0,0 @@
# LLM Parameters System

This document describes how parameters flow through Big-AGI's LLM parameters system, from definition to API invocation.

## System Overview

The LLM parameters system operates across five layers that transform parameters from global definitions to vendor-specific API calls. Each layer serves a specific purpose in the parameter resolution pipeline.

## Parameter Flow Architecture

### Layer 1: Parameter Registry
**File**: `src/common/stores/llms/llms.parameters.ts`

The `DModelParameterRegistry` defines all available parameters with their constraints and metadata. Each parameter includes type information, validation rules, and default behavior.

**Default Value System**: The registry supports multiple default mechanisms:
- `nullable` - Parameters that can be explicitly null to skip API transmission
- `initialValue` - The parameter's base default (e.g., `llmVndOaiRestoreMarkdown: true`)

### Layer 2: Model Specifications
**File**: `src/modules/llms/server/llm.server.types.ts`

Models declare which parameters they support through `parameterSpecs` arrays. Each spec can override registry defaults:

```typescript
parameterSpecs: [
  { paramId: 'llmVndAntThinkingBudget', initialValue: 1024 },          // Override default
  { paramId: 'llmVndGeminiThinkingBudget', rangeOverride: [0, 8192] }, // Custom range
]
```

**Parameter Visibility**: The `hidden` flag removes parameters from the UI while keeping them functional. Models can also mark parameters as `required`.

### Layer 3: Client Configuration

The system provides two UI configurators with different scopes:

#### Full Model Configuration Dialog
**File**: `src/modules/llms/models-modal/LLMParametersEditor.tsx`
Shows all non-hidden parameters from the model's `parameterSpecs`. Used in the models modal for complete configuration.

#### ChatPanel Quick Controls
**File**: `src/apps/chat/components/layout-panel/ChatPanelModelParameters.tsx`
Shows only parameters that are:
- In the model's `parameterSpecs`
- Listed in the `_interestingParameters` array
- Not marked as `hidden`

**Value Resolution**: Both UIs use `getAllModelParameterValues()` to merge:
1. **Fallback values** - Implicit parameters get their `LLMImplicitParametersRuntimeFallback` values
2. **Initial values** - The model's `initialParameters` (populated during model creation)
3. **User values** - The user's `userParameters` (highest priority, see the sketch below)
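A sketch of that precedence: the spread order implements the priorities, while the real `getAllModelParameterValues()` additionally applies the registry's nullable and required-parameter rules.

```typescript
// Later spreads win: fallbacks < model initial values < user overrides.
function mergeParameterValues(
  runtimeFallbacks: Record<string, unknown>,  // 1. implicit-parameter fallbacks
  initialParameters: Record<string, unknown>, // 2. model defaults (from parameterSpecs/registry)
  userParameters: Record<string, unknown>,    // 3. user overrides (highest priority)
): Record<string, unknown> {
  return { ...runtimeFallbacks, ...initialParameters, ...userParameters };
}
```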
### Layer 4: AIX Translation
**File**: `src/modules/aix/client/aix.client.ts`

The AIX client transforms DLLM parameters to the wire protocol format. This layer handles parameter precedence rules and name transformations.

**Client Options**: The system supports parameter overrides through `llmOptionsOverride` and complete replacement via `llmUserParametersReplacement`.

### Layer 5: Vendor Adaptation
**Files**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`

Server-side adapters translate AIX parameters to vendor APIs. Each vendor may interpret parameters differently (a sketch follows):

- **OpenAI**: `vndEffort` -> `reasoning_effort`
- **Perplexity**: Reuses the OpenAI parameter format
- **OpenAI Responses API**: Maps to a structured reasoning config with additional logic
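For instance, a simplified OpenAI-style adapter might perform the rename like this; `reasoning_effort` is the OpenAI wire field, while the AIX-side parameter shape here is simplified:

```typescript
// Omit unset parameters instead of sending undefined values on the wire.
function toOpenAIChatBodyParams(params: { vndEffort?: 'minimal' | 'low' | 'medium' | 'high'; temperature?: number | null }) {
  return {
    ...(params.temperature != null && { temperature: params.temperature }),
    ...(params.vndEffort && { reasoning_effort: params.vndEffort }), // vndEffort -> reasoning_effort
  };
}
```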
## Parameter Initialization Process

When a model is loaded:

1. **Model Creation**: `_createDLLMFromModelDescription()` creates the DLLM with empty `initialParameters`
2. **Initial Value Application**: `applyModelParameterSpecsInitialValues()` populates initial values from:
   - Model spec `initialValue` (highest priority)
   - Registry `initialValue` (fallback)
3. **Runtime Resolution**: `getAllModelParameterValues()` creates the final parameter set:
   - Required fallbacks (for missing required parameters)
   - Initial parameters (model defaults)
   - User parameters (user overrides)

## Special Parameter Behaviors

**Hidden Parameters**: Parameters like `llmRef` are marked `hidden: true` in the registry and never appear in the UI, but remain functional for system use.

**Nullable Parameters**: Parameters with a `nullable` configuration can be explicitly set to `null` to prevent transmission to the API, distinct from being undefined.

**Range Overrides**: Models can override parameter ranges (e.g., different Gemini models support different thinking budget ranges).

**Parameter Interactions**: The UI implements business logic such as disabling web search when reasoning effort is 'minimal'.

## Type Safety Mechanisms

The system maintains type safety through:
- The `DModelParameterId` union from registry keys
- `DModelParameterValue<T>` conditional types for values
- `DModelParameterSpecAny` interfaces for specifications
- Runtime validation via Zod schemas at API boundaries

## Model Variant Pattern

Some vendors use model variants to enable features, for instance:
- **Anthropic**: Creates separate `idVariant: 'thinking'` entries, forcing the values of hidden parameters
- **Google/OpenAI**: Parameters live directly on the base models

## Migration and Compatibility

The architecture supports parameter evolution:
- **Precedence Rules**: Newer parameters take priority during AIX translation
- **Graceful Degradation**: Unknown parameters log warnings but don't break functionality

## Key Implementation Files

- **Registry**: `src/common/stores/llms/llms.parameters.ts`
- **Specifications**: `src/modules/llms/server/llm.server.types.ts`
- **UI Controls**: `src/modules/llms/models-modal/LLMParametersEditor.tsx`
- **AIX Translation**: `src/modules/aix/client/aix.client.ts`
- **Wire Types**: `src/modules/aix/server/api/aix.wiretypes.ts`
- **Vendor Adapters**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`
@@ -1,151 +0,0 @@
# Big-AGI Routing & Display States

This document describes the routing architecture and display state hierarchy in Big-AGI, from top-level providers down to component-level states.

## Overview

Big-AGI uses the Next.js Pages Router with a provider stack that determines what users see based on application state and configuration.

## Quick Reference: Route Configurations

| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/` | Main chat app | Default application |
| `/call` | Voice interface | Voice-to-voice AI conversations |
| `/personas` | Persona management | Create and manage AI personas |
| ... | | |

## Decision Flow Diagram

The routing decisions follow a hierarchy from system-level provider configuration down to component-level states.

```mermaid
flowchart TD
    Start([Navigate to Route]) --> Root[_app.tsx]

    Root --> Theme[ProviderTheming]
    Theme --> Error[ErrorBoundary]
    Error --> Bootstrap[ProviderBootstrapLogic]

    Bootstrap --> BootCheck{Bootstrap Checks}
    BootCheck -->|News| News[↗️ /news]
    BootCheck -->|Continue| Router{Router}

    Router -->|/| Chat[Chat App]
    Router -->|/personas,/call,/beam...| OtherApps[Other Apps]
    Router -->|/news| NewsApp[News App]

    Chat --> ChatStates{Chat States}

    ChatStates -->|No Models| ZeroModels[🟡 Setup Models]
    ChatStates -->|No Conv| ZeroConv[🟡 Select Chat]
    ChatStates -->|No Msgs| PersonaGrid[Choose Persona]
    ChatStates -->|Ready| Active[🟢 Active Chat]

    Active --> Features[Features:<br/>• Chat Bar<br/>• Beam Mode<br/>• Attachments]

    style ZeroModels fill:#fff4cc
    style ZeroConv fill:#fff4cc
    style Active fill:#ccffcc
    style Chat fill:#f0f8ff
    style OtherApps fill:#f0f8ff
    style NewsApp fill:#f0f8ff
```

## Display State Hierarchy

```
_app.tsx (Root)
├── ProviderTheming            ← Always Applied
├── ErrorBoundary              ← Always Applied
├── ProviderBootstrapLogic     ← Always Applied
│   ├── Tiktoken preload & Model auto-config
│   ├── Storage maintenance & cleanup
│   └── News Redirect (if conditions met)
│
└── Page Component
    ├── AppChat (/)            → Default app
    │   ├── CMLZeroModels         → If no models configured
    │   ├── CMLZeroConversation   → If no conversation selected
    │   └── PersonaGrid           → If conversation empty
    │
    └── Other Apps             → Personas, Call, Draw, News, Beam
```

## Provider Stack

| Provider | Purpose | Key Functions |
|----------|---------|---------------|
| **ProviderTheming** | UI theme management | Theme switching, CSS variables |
| **ErrorBoundary** | Error handling | Catches and displays errors gracefully |
| **ProviderBootstrapLogic** | App initialization | • Tiktoken preload<br>• Model auto-config<br>• Storage cleanup<br>• News redirect logic |

For the detailed initialization sequence and provider functions, see [app-startup-sequence.md](app-startup-sequence.md), if present.

## Application Routes

### Primary Apps
- `/` → AppChat (default)
- `/call` → Voice call interface
- `/beam` → Multi-model reasoning
- `/draw` → Image generation
- `/personas` → Personas app
- `/news` → News/updates

### Zero States

#### Chat App Zero States

**CMLZeroModels**
- **Location**: `/src/apps/chat/components/messages-list/CMLZeroModels.tsx`
- **Triggered**: No LLM sources configured
- **Shows**: Welcome screen with "Setup Models" button

**CMLZeroConversation**
- **Location**: `/src/apps/chat/components/messages-list/CMLZeroConversation.tsx`
- **Triggered**: No conversation selected
- **Shows**: "Select/create conversation" prompt

**PersonaGrid**
- **App**: Chat (when the conversation is empty)
- **Triggered**: Conversation exists but has no messages
- **Shows**: Persona selector interface

#### Feature-Specific Zero States

**Beam Tutorial**
- **Feature**: Beam (multi-model reasoning)
- **Component**: `ExplainerCarousel`
- **Triggered**: First-time Beam usage
- **Shows**: Interactive feature walkthrough

## Common Scenarios

### New User First Visit
1. Navigates to `/` → Provider stack loads
2. Bootstrap runs → No news redirect (first visit)
3. Chat loads → **CMLZeroModels** (no models configured)
4. User clicks "Setup Models" → Configuration flow

### Returning User with Saved State
1. Navigates to `/` → Provider stack loads
2. IndexedDB restores state → Previous conversation loaded
3. Chat loads → **Active chat interface** (bypasses all zero states)
4. All messages and context are preserved from the last session

### Shared Chat Viewer
1. Navigates to `/link/chat/[id]` → Full provider stack
2. Views read-only chat → May see an "Import" option
3. If importing → Checks for duplicates, creates a new local conversation

## Storage System

Big-AGI uses a local-first architecture:
- **Zustand** for reactive state management
- **IndexedDB** for persistent storage via the Zustand persist middleware
- **Version-based migrations** for data structure upgrades

Key stores (see the sketch below):
- `app-chats`: Conversations and messages (IndexedDB)
- `app-llms`: Model configurations (IndexedDB)
- `app-ui`: UI preferences (localStorage)
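A minimal sketch of that persistence pattern, using `zustand/middleware` and `idb-keyval` (both in the dependencies); the store name and shape here are illustrative, and Big-AGI's own IndexedDB adapters are more involved:

```typescript
import { create } from 'zustand';
import { persist, createJSONStorage } from 'zustand/middleware';
import { del as idbDel, get as idbGet, set as idbSet } from 'idb-keyval';

// Minimal local-first store persisted to IndexedDB, with a version for migrations.
export const useExampleStore = create(persist(
  () => ({ items: [] as string[] }),
  {
    name: 'app-example', // e.g. 'app-chats' and 'app-llms' persist to IndexedDB
    version: 1,
    storage: createJSONStorage(() => ({
      getItem: async (key: string) => (await idbGet(key)) ?? null,
      setItem: (key: string, value: string) => idbSet(key, value),
      removeItem: (key: string) => idbDel(key),
    })),
    migrate: (persisted, _fromVersion) => persisted as { items: string[] }, // upgrade old shapes here
  },
));
```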
@@ -1,13 +0,0 @@
# CSF - Client-Side Fetch

Client-Side Fetch (CSF) enables direct browser-to-API communication, bypassing the server for LLM requests. When enabled, the browser makes requests directly to vendor APIs (e.g., `api.openai.com`, `api.groq.com`) instead of routing through the Next.js server. This reduces latency, decreases server load, and is particularly useful for local models, where the browser can communicate directly with Ollama or LM Studio.

## Implementation

CSF is implemented as an opt-in setting stored as `csf: boolean` in each vendor's service settings. The vendor interface exposes `csfAvailable?: (setup) => boolean` to determine whether CSF can be enabled (typically checking that an API key or host is configured). The actual execution happens in `aix.client.direct-chatGenerate.ts`, which is dynamically imported when CSF is active and makes direct fetch calls using the same wire protocols as the server.
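A sketch of that shape; the `csfAvailable` signature is from this document, while the export names and the server-path call are hypothetical:

```typescript
// Illustrative only: route a request either directly to the vendor API or through the server.
interface VendorCSFSupport<TSetup> {
  csfAvailable?: (setup: TSetup) => boolean; // e.g. true when an API key / host is configured
}

declare function serverChatGenerate(request: unknown): Promise<unknown>; // usual tRPC round-trip

async function chatGenerate(csfActive: boolean, request: unknown): Promise<unknown> {
  if (csfActive) {
    // the dynamic import keeps the direct-fetch code out of the default bundle
    const direct = await import('./aix.client.direct-chatGenerate');
    return direct.directChatGenerate(request); // hypothetical export name
  }
  return serverChatGenerate(request);
}
```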
All 20+ supported vendors (OpenAI, Anthropic, Gemini, Ollama, LocalAI, Deepseek, Groq, Mistral, xAI, OpenRouter, Perplexity, Together AI, Alibaba, Moonshot, OpenPipe, LM Studio, Z.ai, Azure, Bedrock) support CSF. Cloud vendors require CORS support from the API provider (all tested vendors return `access-control-allow-origin: *`). Local vendors (Ollama, LocalAI, LM Studio) require CORS to be enabled on the local server.

## UI

The CSF toggle appears in each vendor's setup panel under "Advanced" settings, labeled "Direct Connection". It becomes visible when the prerequisites are met (API key present for cloud vendors, host configured for local vendors). The setting is managed through the `useModelServiceClientSideFetch` hook, which provides `csfAvailable`, `csfActive`, `csfToggle`, and `csfReset` for UI consumption.
@@ -1,3 +0,0 @@
## Strategic Vision

If provided, the following influences the long-term vision, product, and architectural goals/north stars for Big-AGI.
@@ -0,0 +1,7 @@
import { clerkMiddleware } from '@clerk/nextjs/server';

export default clerkMiddleware();

export const config = {
  matcher: ['/((?!.+\\.[\\w]+$|_next).*)', '/', '/(api|trpc)(.*)'],
};
@@ -0,0 +1,70 @@
// Non-default build types
const buildType =
  process.env.BIG_AGI_BUILD === 'standalone' ? 'standalone'
    : process.env.BIG_AGI_BUILD === 'static' ? 'export'
      : undefined;

buildType && console.log(` 🧠 big-AGI: building for ${buildType}...\n`);

/** @type {import('next').NextConfig} */
let nextConfig = {
  reactStrictMode: true,

  // [exports] https://nextjs.org/docs/advanced-features/static-html-export
  ...buildType && {
    output: buildType,
    distDir: 'dist',

    // disable image optimization for exports
    images: { unoptimized: true },

    // Optional: Change links `/me` -> `/me/` and emit `/me.html` -> `/me/index.html`
    // trailingSlash: true,
  },

  // [puppeteer] https://github.com/puppeteer/puppeteer/issues/11052
  experimental: {
    serverComponentsExternalPackages: ['puppeteer-core'],
  },

  webpack: (config, _options) => {
    // @mui/joy: anything material gets redirected to Joy
    config.resolve.alias['@mui/material'] = '@mui/joy';

    // @dqbd/tiktoken: enable asynchronous WebAssembly
    config.experiments = {
      asyncWebAssembly: true,
      layers: true,
    };

    // prevent too many small chunks (40kb min) on 'client' packs (not 'server' or 'edge-server')
    if (typeof config.optimization.splitChunks === 'object' && config.optimization.splitChunks.minSize)
      config.optimization.splitChunks.minSize = 40 * 1024;

    return config;
  },

  // Note: disabled to check whether the project becomes slower with this
  // modularizeImports: {
  //   '@mui/icons-material': {
  //     transform: '@mui/icons-material/{{member}}',
  //   },
  // },

  // Uncomment the following to leave console messages in production
  // compiler: {
  //   removeConsole: false,
  // },
};

// Validate environment variables, if set at build time. They will actually be read and used at runtime.
// This is the reason both this file and the server/env.mjs file have this extension.
await import('./src/server/env.mjs');

// conditionally enable the nextjs bundle analyzer
if (process.env.ANALYZE_BUNDLE) {
  const { default: withBundleAnalyzer } = await import('@next/bundle-analyzer');
  nextConfig = withBundleAnalyzer({ openAnalyzer: true })(nextConfig);
}

export default nextConfig;
-160
@@ -1,160 +0,0 @@
import type { NextConfig } from 'next';
import type { WebpackConfigContext } from 'next/dist/server/config-shared';
import { execSync } from 'node:child_process';
import { readFileSync } from 'node:fs';

// Build information: from CI, or git commit hash
let buildHash = process.env.NEXT_PUBLIC_BUILD_HASH || process.env.GITHUB_SHA || process.env.VERCEL_GIT_COMMIT_SHA; // Docker or custom, GitHub Actions, Vercel
try {
  // fallback to local git commit hash
  if (!buildHash)
    buildHash = execSync('git rev-parse --short HEAD').toString().trim();
} catch {
  // final fallback
  buildHash = '2-dev';
}
// The following are used by/available to Release.buildInfo(...)
process.env.NEXT_PUBLIC_BUILD_HASH = (buildHash || '').slice(0, 10);
process.env.NEXT_PUBLIC_BUILD_PKGVER = JSON.parse('' + readFileSync(new URL('./package.json', import.meta.url))).version;
process.env.NEXT_PUBLIC_BUILD_TIMESTAMP = new Date().toISOString();
process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE = process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE || (process.env.VERCEL_ENV ? `vercel-${process.env.VERCEL_ENV}` : 'local'); // Docker or custom, Vercel
console.log(` 🧠 \x1b[1mbig-AGI\x1b[0m v${process.env.NEXT_PUBLIC_BUILD_PKGVER} (@${process.env.NEXT_PUBLIC_BUILD_HASH})`);

// Non-default build types
const buildType =
  process.env.BIG_AGI_BUILD === 'standalone' ? 'standalone' as const
    : process.env.BIG_AGI_BUILD === 'static' ? 'export' as const
      : undefined;

buildType && console.log(` 🧠 big-AGI: building for ${buildType}...\n`);

/** @type {import('next').NextConfig} */
let nextConfig: NextConfig = {
  reactStrictMode: !process.env.NO_STRICT_MODE, // default: enabled

  // [exports] https://nextjs.org/docs/advanced-features/static-html-export
  ...(buildType && {
    output: buildType,
    distDir: 'dist',

    // disable image optimization for exports
    images: { unoptimized: true },

    // Optional: Change links `/me` -> `/me/` and emit `/me.html` -> `/me/index.html`
    // trailingSlash: true,
  }),

  // [puppeteer] https://github.com/puppeteer/puppeteer/issues/11052
  // NOTE: we may not be needing this anymore, as we use '@cloudflare/puppeteer'
  serverExternalPackages: ['puppeteer-core'],

  webpack: (config: any, { isServer, webpack /*, dev, nextRuntime*/ }: WebpackConfigContext) => {
    // @mui/joy: anything material gets redirected to Joy
    config.resolve.alias['@mui/material'] = '@mui/joy';

    // @dqbd/tiktoken: enable asynchronous WebAssembly
    config.experiments = {
      asyncWebAssembly: true,
      layers: true,
    };

    // client-side bundling
    if (!isServer) {
      /**
       * AIX client-side
       * We replace certain server-only modules with client-side mocks, to reuse the exact same imports
       * while avoiding importing server-only code which would break the build or break at runtime.
       */
      const serverToClientMocks: ReadonlyArray<[RegExp, string]> = [
        [/\/posthog\.server/, '/posthog.client-mock'],
        [/\/env\.server/, '/env.client-mock'],
      ];
      config.plugins = [
        ...config.plugins,
        ...serverToClientMocks.map(([pattern, replacement]) =>
          new webpack.NormalModuleReplacementPlugin(pattern, (resource: any) => {
            // console.log(' 🧠 [WEBPACK REPLACEMENT]:', resource.request, '->', resource.request.replace(pattern, replacement));
            resource.request = resource.request.replace(pattern, replacement);
          }),
        ),
      ];

      // cosmetic: fix warnings for (absent!) top-level awaits in the browser (https://github.com/vercel/next.js/issues/64792)
      config.output.environment = { ...config.output.environment, asyncFunction: true };
    }

    // prevent too many small chunks (40kb min) on 'client' packs (not 'server' or 'edge-server')
    // noinspection JSUnresolvedReference
    if (typeof config.optimization.splitChunks === 'object' && config.optimization.splitChunks.minSize) {
      // noinspection JSUnresolvedReference
      config.optimization.splitChunks.minSize = 40 * 1024;
    }

    return config;
  },

  // Optional Analytics > PostHog
  skipTrailingSlashRedirect: true, // required to support PostHog trailing slash API requests
  async rewrites() {
    return [
      {
        source: '/a/ph/static/:path*',
        destination: 'https://us-assets.i.posthog.com/static/:path*',
      },
      {
        source: '/a/ph/:path*',
        destination: 'https://us.i.posthog.com/:path*',
      },
      {
        source: '/a/ph/decide',
        destination: 'https://us.i.posthog.com/decide',
      },
      {
        source: '/a/ph/flags',
        destination: 'https://us.i.posthog.com/flags',
      },
    ];
  },

  // Note: disabled to check whether the project becomes slower with this
  // modularizeImports: {
  //   '@mui/icons-material': {
  //     transform: '@mui/icons-material/{{member}}',
  //   },
  // },

  // Uncomment the following to leave console messages in production
  // compiler: {
  //   removeConsole: false,
  // },
};

// Validate environment variables at build time, if required. Server env vars will actually be read and used at runtime (cloud/edge).
import { env as validateEnv } from '~/server/env.server';
void validateEnv; // Triggers env validation - throws if required vars are missing

// PostHog error reporting with source maps for production builds
import { withPostHogConfig } from '@posthog/nextjs-config';
if (process.env.POSTHOG_API_KEY && process.env.POSTHOG_ENV_ID) {
  console.log(' 🧠 \x1b[1mbig-AGI\x1b[0m: building with PostHog issue reporting and source maps...');
  nextConfig = withPostHogConfig(nextConfig, {
    personalApiKey: process.env.POSTHOG_API_KEY,
    envId: process.env.POSTHOG_ENV_ID,
    host: 'https://us.i.posthog.com', // backtrace upload host
    logLevel: 'error', // lowered, too noisy
    sourcemaps: {
      enabled: process.env.NODE_ENV === 'production',
      project: 'big-agi',
      version: process.env.NEXT_PUBLIC_BUILD_HASH,
      deleteAfterUpload: false, // false: leave them in the tree, which would also help debugging of open-source installs
    },
  });
}

// conditionally enable the nextjs bundle analyzer
import withBundleAnalyzer from '@next/bundle-analyzer';
if (process.env.ANALYZE_BUNDLE) {
  nextConfig = withBundleAnalyzer({ openAnalyzer: true })(nextConfig) as NextConfig;
}

export default nextConfig;
Generated
+3621 -7695
File diff suppressed because it is too large
+63
-76
@@ -1,104 +1,91 @@
|
||||
{
|
||||
"name": "big-agi",
|
||||
"version": "2.0.4",
|
||||
"version": "1.16.0",
|
||||
"private": true,
|
||||
"author": "Enrico Ros <enrico@big-agi.com> (https://www.enricoros.com)",
|
||||
"homepage": "https://big-agi.com",
|
||||
"author": "Enrico Ros <enrico.ros@gmail.com>",
|
||||
"repository": "https://github.com/enricoros/big-agi",
|
||||
"scripts": {
|
||||
"dev": "next dev --turbopack",
|
||||
"dev-debug": "cross-env NODE_OPTIONS='--inspect' next dev",
|
||||
"dev-https": "next dev --experimental-https",
|
||||
"dev": "next dev",
|
||||
"build": "next build",
|
||||
"start": "next start",
|
||||
"lint": "next lint",
|
||||
"postinstall": "prisma generate --no-hints",
|
||||
"gen:icon-sprites": "node tools/develop/gen-icon-sprites/generate-llm-sprites.ts",
|
||||
"postinstall": "prisma generate",
|
||||
"db:push": "prisma db push",
|
||||
"db:studio": "prisma studio",
|
||||
"vercel:env:pull": "npx vercel env pull .env.development.local",
|
||||
"sharp:win32_x64": "npm install --os=win32 --cpu=x64 sharp"
|
||||
"vercel:env:pull": "npx vercel env pull .env.development.local"
|
||||
},
|
||||
"prisma": {
|
||||
"schema": "src/server/prisma/schema.prisma"
|
||||
},
|
||||
"dependencies": {
|
||||
"@dnd-kit/core": "^6.3.1",
|
||||
"@dnd-kit/modifiers": "^9.0.0",
|
||||
"@dnd-kit/sortable": "^10.0.0",
|
||||
"@dnd-kit/utilities": "^3.2.2",
|
||||
"@emotion/cache": "^11.14.0",
|
||||
"@emotion/react": "^11.14.0",
|
||||
"@clerk/nextjs": "^5.0.8",
|
||||
"@emotion/cache": "^11.11.0",
|
||||
"@emotion/react": "^11.11.4",
|
||||
"@emotion/server": "^11.11.0",
|
||||
"@emotion/styled": "^11.14.1",
|
||||
"@googleworkspace/drive-picker-react": "^0.2.0",
|
||||
"@mui/icons-material": "^5.18.0",
|
||||
"@mui/joy": "^5.0.0-beta.52",
|
||||
"@next/bundle-analyzer": "~15.1.12",
|
||||
"@prisma/client": "~5.22.0",
|
||||
"@tanstack/react-query": "5.90.21",
|
||||
"@tanstack/react-virtual": "^3.13.22",
|
||||
"@trpc/client": "11.5.1",
|
||||
"@trpc/next": "11.5.1",
|
"@trpc/react-query": "11.5.1",
"@trpc/server": "11.5.1",
"@vercel/analytics": "^1.6.1",
"@vercel/speed-insights": "^1.3.1",
"aws4fetch": "^1.0.20",
"browser-fs-access": "^0.38.0",
"cheerio": "^1.1.2",
"csv-stringify": "^6.6.0",
"dexie": "~4.0.11",
"dexie-react-hooks": "~1.1.7",
"diff": "^8.0.3",
"eventemitter3": "^5.0.4",
"idb-keyval": "^6.2.2",
"mammoth": "^1.11.0",
"nanoid": "^5.1.6",
"next": "~15.1.12",
"@emotion/styled": "^11.11.5",
"@mui/icons-material": "^5.15.17",
"@mui/joy": "^5.0.0-beta.36",
"@mui/material": "^5.15.17",
"@next/bundle-analyzer": "^14.2.3",
"@next/third-parties": "^14.2.3",
"@prisma/client": "^5.13.0",
"@sanity/diff-match-patch": "^3.1.1",
"@t3-oss/env-nextjs": "^0.10.1",
"@tanstack/react-query": "~4.36.1",
"@trpc/client": "10.44.1",
"@trpc/next": "10.44.1",
"@trpc/react-query": "10.44.1",
"@trpc/server": "10.44.1",
"@vercel/analytics": "^1.2.2",
"@vercel/speed-insights": "^1.0.10",
"browser-fs-access": "^0.35.0",
"eventsource-parser": "^1.1.2",
"idb-keyval": "^6.2.1",
"next": "~14.1.4",
"nprogress": "^0.2.0",
"pdfjs-dist": "5.4.54",
"posthog-js": "^1.360.2",
"posthog-node": "^5.28.2",
"prismjs": "^1.30.0",
"puppeteer-core": "^24.39.1",
"pdfjs-dist": "4.2.67",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"react": "^18.3.1",
"react-beautiful-dnd": "^13.1.1",
"react-csv": "^2.2.2",
"react-dom": "^18.3.1",
"react-hook-form": "^7.71.2",
"react-markdown": "^10.1.0",
"react-player": "^3.4.0",
"react-resizable-panels": "^3.0.6",
"react-timeago": "^8.3.0",
"rehype-katex": "^7.0.1",
"remark-gfm": "^4.0.1",
"remark-mark-highlight": "^0.1.1",
"react-katex": "^3.0.1",
"react-markdown": "^9.0.1",
"react-player": "^2.16.0",
"react-resizable-panels": "^2.0.19",
"react-timeago": "^7.2.0",
"rehype-katex": "^7.0.0",
"remark-gfm": "^4.0.0",
"remark-math": "^6.0.0",
"sharp": "^0.34.5",
"superjson": "^2.2.6",
"tesseract.js": "^7.0.0",
"tiktoken": "^1.0.22",
"turndown": "^7.2.2",
"zod": "^4.3.6",
"zustand": "5.0.7"
"sharp": "^0.33.3",
"superjson": "^2.2.1",
"tesseract.js": "^5.1.0",
"tiktoken": "^1.0.14",
"uuid": "^9.0.1",
"zod": "^3.23.8",
"zustand": "^4.5.2"
},
"devDependencies": {
"@posthog/nextjs-config": "~1.6.4",
"@types/node": "^25.5.0",
"@cloudflare/puppeteer": "0.0.5",
"@types/node": "^20.12.11",
"@types/nprogress": "^0.2.3",
"@types/prismjs": "^1.26.6",
"@types/react": "^19.2.14",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.4",
"@types/react": "^18.3.1",
"@types/react-beautiful-dnd": "^13.1.8",
"@types/react-csv": "^1.1.10",
"@types/react-dom": "^19.2.3",
"@types/turndown": "^5.0.6",
"cross-env": "^10.1.0",
"eslint": "^9.39.2",
"eslint-config-next": "~15.1.12",
"prettier": "^3.8.1",
"prisma": "~5.22.0",
"tsx": "^4.21.0",
"typescript": "^5.9.3"
"@types/react-dom": "^18.3.0",
"@types/react-katex": "^3.0.4",
"@types/react-timeago": "^4.1.7",
"@types/uuid": "^9.0.8",
"eslint": "^8.57.0",
"eslint-config-next": "^14.2.3",
"prettier": "^3.2.5",
"prisma": "^5.13.0",
"typescript": "^5.4.5"
},
"engines": {
"node": "^24.0.0 || ^22.0.0 || ^20.0.0"
"node": "^20.0.0 || ^18.0.0"
}
}
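The dependency lists above mix three pinning styles: exact versions for the tRPC packages ("11.5.1"), tilde ranges for next and dexie (patch updates only), and caret ranges for most of the rest (any minor or patch within the same major). A minimal TypeScript sketch of what each range admits, using the npm 'semver' package purely for illustration (it is not a dependency of this repository):

import semver from 'semver';

// Exact pin: only that one version matches (the tRPC packages above).
console.log(semver.satisfies('11.5.1', '11.5.1')); // true
console.log(semver.satisfies('11.5.2', '11.5.1')); // false

// Tilde: patch-level updates only ("next": "~15.1.12" above).
console.log(semver.satisfies('15.1.13', '~15.1.12')); // true
console.log(semver.satisfies('15.2.0', '~15.1.12'));  // false

// Caret: minor and patch updates within the same major ("zod": "^4.3.6" above).
console.log(semver.satisfies('4.9.0', '^4.3.6')); // true
console.log(semver.satisfies('5.0.0', '^4.3.6')); // false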
+27 -40
@@ -1,44 +1,31 @@
import * as React from 'react';
import Head from 'next/head';
import dynamic from 'next/dynamic';
import { MyAppProps } from 'next/app';
import { Analytics as VercelAnalytics } from '@vercel/analytics/next';
import { SpeedInsights as VercelSpeedInsights } from '@vercel/speed-insights/next';

import { Brand } from '~/common/app.config';
import { apiQuery } from '~/common/util/trpc.client';

// [server-client-safe] dynamic imports to avoid webpack bundling issues with next/navigation
const VercelAnalytics = dynamic(() => import('@vercel/analytics/next').then(mod => mod.Analytics), { ssr: false });
const VercelSpeedInsights = dynamic(() => import('@vercel/speed-insights/next').then(mod => mod.SpeedInsights), { ssr: false });

import 'katex/dist/katex.min.css';
import '~/common/styles/CodePrism.css';
import '~/common/styles/GithubMarkdown.css';
import '~/common/styles/NProgress.css';
import '~/common/styles/agi.effects.css';
import '~/common/styles/app.styles.css';

import { ErrorBoundary } from '~/common/components/ErrorBoundary';
import { Is } from '~/common/util/pwaUtils';
import { OverlaysInsert } from '~/common/layout/overlays/OverlaysInsert';
import { ProviderAuth } from '~/common/providers/ProviderAuth';
import { ProviderBackendCapabilities } from '~/common/providers/ProviderBackendCapabilities';
import { ProviderBootstrapLogic } from '~/common/providers/ProviderBootstrapLogic';
import { ProviderSingleTab } from '~/common/providers/ProviderSingleTab';
import { ProviderSnacks } from '~/common/providers/ProviderSnacks';
import { ProviderTRPCQuerySettings } from '~/common/providers/ProviderTRPCQuerySettings';
import { ProviderTheming } from '~/common/providers/ProviderTheming';
import { SnackbarInsert } from '~/common/components/snackbar/SnackbarInsert';
import { hasGoogleAnalytics, OptionalGoogleAnalytics } from '~/common/components/3rdparty/GoogleAnalytics';
import { hasPostHogAnalytics, OptionalPostHogAnalytics } from '~/common/components/3rdparty/PostHogAnalytics';
import { hasGoogleAnalytics, OptionalGoogleAnalytics } from '~/common/components/GoogleAnalytics';
import { isVercelFromFrontend } from '~/common/util/pwaUtils';

const Big_AGI_App = ({ Component, emotionCache, pageProps }: MyAppProps) => {

  // We are using a nextjs per-page layout pattern to bring the (Optima) layout creation to a shared place.
  // This reduces the flicker and the time spent switching between apps, and seems to have no impact on
  // the build. This is a good trade-off for now.
  const getLayout = Component.getLayout ?? ((page: any) => page);

  return <>
const MyApp = ({ Component, emotionCache, pageProps }: MyAppProps) =>
  <>

    <Head>
      <title>{Brand.Title.Common}</title>
@@ -46,27 +33,27 @@ const Big_AGI_App = ({ Component, emotionCache, pageProps }: MyAppProps) => {
    </Head>

    <ProviderTheming emotionCache={emotionCache}>
      <ProviderSingleTab>
        <ProviderBackendCapabilities>
          {/* ^ Backend capabilities & SSR boundary */}
          <ErrorBoundary outer>
            <ProviderBootstrapLogic>
              <SnackbarInsert />
              {getLayout(<Component {...pageProps} />)}
              <OverlaysInsert />
            </ProviderBootstrapLogic>
          </ErrorBoundary>
        </ProviderBackendCapabilities>
      </ProviderSingleTab>
      <ProviderAuth>
        <ProviderSingleTab>
          <ProviderTRPCQuerySettings>
            <ProviderBackendCapabilities>
              {/* ^ SSR boundary */}
              <ProviderBootstrapLogic>
                <ProviderSnacks>
                  <Component {...pageProps} />
                </ProviderSnacks>
              </ProviderBootstrapLogic>
            </ProviderBackendCapabilities>
          </ProviderTRPCQuerySettings>
        </ProviderSingleTab>
      </ProviderAuth>
    </ProviderTheming>

    {isVercelFromFrontend && <VercelAnalytics debug={false} />}
    {isVercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
    {hasGoogleAnalytics && <OptionalGoogleAnalytics />}
    {hasPostHogAnalytics && <OptionalPostHogAnalytics />}
    {Is.Deployment.VercelFromFrontend && <VercelAnalytics debug={false} />}
    {Is.Deployment.VercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}

  </>;
};

// Initializes React Query and tRPC, and enables the tRPC React Query hooks (apiQuery).
export default apiQuery.withTRPC(Big_AGI_App);
// enables the React Query API invocation
export default apiQuery.withTRPC(MyApp);
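One notable change in the _app.tsx hunk above: the Vercel Analytics components move from static imports to next/dynamic with ssr: false, so they stay out of the server bundle and mount only after hydration. A minimal sketch of that pattern, with a hypothetical HeavyWidget component standing in (not part of this repository):

import * as React from 'react';
import dynamic from 'next/dynamic';

// Client-only dynamic import: the module never enters the server bundle,
// and loads in the browser after hydration. 'HeavyWidget' is hypothetical.
const HeavyWidget = dynamic(
  () => import('../components/HeavyWidget').then(mod => mod.HeavyWidget),
  { ssr: false, loading: () => <span>Loading…</span> },
);

export function ExamplePage() {
  // On the server this slot renders only the loading fallback;
  // the real component appears once the client chunk arrives.
  return <HeavyWidget />;
}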
+6 -25
@@ -2,7 +2,7 @@ import * as React from 'react';
import { AppType, MyAppProps } from 'next/app';
import { default as Document, DocumentContext, DocumentProps, Head, Html, Main, NextScript } from 'next/document';
import createEmotionServer from '@emotion/server/create-instance';
import InitColorSchemeScript from '@mui/joy/InitColorSchemeScript';
import { getInitColorSchemeScript } from '@mui/joy/styles';

import { Brand } from '~/common/app.config';
import { createEmotionCache } from '~/common/app.theme';
@@ -37,37 +37,21 @@ export default function MyDocument({ emotionStyleTags }: MyDocumentProps) {
        <meta property='og:site_name' content={Brand.Meta.SiteName} />
        <meta property='og:type' content='website' />

        {/* Twitter / X */}
        <meta name='twitter:card' content='summary_large_image' />
        {/* Twitter */}
        <meta property='twitter:card' content='summary_large_image' />
        <meta property='twitter:url' content={Brand.URIs.Home} />
        <meta property='twitter:title' content={Brand.Title.Common} />
        <meta property='twitter:description' content={Brand.Meta.Description} />
        {Brand.URIs.CardImage && <meta property='twitter:image' content={Brand.URIs.CardImage} />}
        <meta name='twitter:site' content={Brand.Meta.TwitterSite} />
        <meta name='twitter:creator' content='@enricoros' />

        {/* Author & Structured Data */}
        <meta name='author' content='Enrico Ros' />
        <link rel='author' href='https://www.enricoros.com' />
        <script type='application/ld+json' dangerouslySetInnerHTML={{ __html: JSON.stringify({
          '@context': 'https://schema.org',
          '@type': 'SoftwareApplication',
          'name': 'Big-AGI',
          'url': 'https://big-agi.com',
          'applicationCategory': 'ProductivityApplication',
          'operatingSystem': 'All, Web',
          'description': Brand.Meta.Description,
          'sameAs': ['https://github.com/enricoros/big-agi', 'https://discord.gg/MkH4qj2Jp9',],
          'author': { '@type': 'Person', 'name': 'Enrico Ros', 'url': 'https://www.enricoros.com' },
          'publisher': { '@type': 'Organization', 'name': 'Token Fabrics LLC', 'url': 'https://www.tokenfabrics.com' },
        }) }} />
        <meta name='twitter:card' content='summary_large_image' />

        {/* Style Sheets (injected and server-side) */}
        <meta name='emotion-insertion-point' content='' />
        {emotionStyleTags}
      </Head>
      <body>
        <InitColorSchemeScript />
        {getInitColorSchemeScript()}
        <Main />
        <NextScript />
      </body>
@@ -116,10 +100,6 @@ MyDocument.getInitialProps = async (ctx: DocumentContext) => {
  });

  const initialProps = await Document.getInitialProps(ctx);

  // Inject the comment before the HTML tag
  initialProps.html = `<!-- ❤ Built with Big-AGI -->\n${initialProps.html}`;

  // This is important: it prevents Emotion from rendering invalid HTML.
  // See https://github.com/mui/material-ui/issues/26561#issuecomment-855286153
  const emotionStyles = extractCriticalToChunks(initialProps.html);
@@ -127,6 +107,7 @@ MyDocument.getInitialProps = async (ctx: DocumentContext) => {
    <style
      data-emotion={`${style.key} ${style.ids.join(' ')}`}
      key={style.key}
      // eslint-disable-next-line react/no-danger
      dangerouslySetInnerHTML={{ __html: style.css }}
    />
  ));
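For context on the getInitialProps hunks above: extractCriticalToChunks is produced by createEmotionServer from '@emotion/server/create-instance', and yields { key, ids, css } chunks that map one-to-one onto the <style data-emotion=...> tags rendered in the hunk. A minimal sketch of the wiring, with a stand-in for the createEmotionCache helper this app imports from ~/common/app.theme:

import createCache from '@emotion/cache';
import createEmotionServer from '@emotion/server/create-instance';

// Stand-in for the app's createEmotionCache (~/common/app.theme).
const createEmotionCache = () => createCache({ key: 'css' });

const cache = createEmotionCache();
const { extractCriticalToChunks } = createEmotionServer(cache);

// Given server-rendered HTML, keep only the styles it actually uses;
// each chunk becomes one <style data-emotion="key id1 id2"> tag.
const { styles } = extractCriticalToChunks('<div class="css-abc123">hi</div>');
for (const style of styles)
  console.log(`data-emotion="${style.key} ${style.ids.join(' ')}"`, style.css);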
+4 -2
@@ -2,7 +2,9 @@ import * as React from 'react';

import { AppCall } from '../src/apps/call/AppCall';

import { withNextJSPerPageLayout } from '~/common/layout/withLayout';
import { withLayout } from '~/common/layout/withLayout';


export default withNextJSPerPageLayout({ type: 'optima' }, () => <AppCall />);
export default function CallPage() {
  return withLayout({ type: 'optima' }, <AppCall />);
}
+4 -2
@@ -2,7 +2,9 @@ import * as React from 'react';

import { AppBeam } from '../../src/apps/beam/AppBeam';

import { withNextJSPerPageLayout } from '~/common/layout/withLayout';
import { withLayout } from '~/common/layout/withLayout';


export default withNextJSPerPageLayout({ type: 'optima' }, () => <AppBeam />);
export default function BeamPage() {
  return withLayout({ type: 'optima' }, <AppBeam />);
}
@@ -1,8 +0,0 @@
import * as React from 'react';

import { AppDiff } from '../src/apps/diff/AppDiff';

import { withNextJSPerPageLayout } from '~/common/layout/withLayout';


export default withNextJSPerPageLayout({ type: 'optima' }, () => <AppDiff />);
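The three page hunks above (call.tsx, beam.tsx, diff.tsx) all converge on withNextJSPerPageLayout, which pairs with the Component.getLayout lookup in the new _app.tsx. Its implementation is not part of this diff; a plausible sketch under that assumption, with OptimaLayout as an assumed name for the real Optima frame:

import * as React from 'react';

// Hypothetical: the real withNextJSPerPageLayout lives in ~/common/layout/withLayout
// and is not shown here; OptimaLayout is a stand-in for the actual frame component.
const OptimaLayout: React.FC<{ children: React.ReactNode }> = ({ children }) => <>{children}</>;

type LayoutOptions = { type: 'optima' };
type PageWithLayout = React.FC & { getLayout?: (page: React.ReactElement) => React.ReactNode };

export function withNextJSPerPageLayout(_options: LayoutOptions, PageComponent: React.FC): PageWithLayout {
  const Page: PageWithLayout = () => <PageComponent />;
  // _app.tsx reads Component.getLayout ?? (page => page); attaching the layout
  // here keeps the shared frame mounted while only the inner page content swaps,
  // which is what reduces flicker when switching apps.
  Page.getLayout = (page) => <OptimaLayout>{page}</OptimaLayout>;
  return Page;
}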
Some files were not shown because too many files have changed in this diff.