Compare commits


1475 Commits

Author SHA1 Message Date
Enrico Ros 332c4fdf82 2 Open - Release 2025-10-27 19:26:18 -07:00
Enrico Ros 4d247344d5 2 Open: end changelog 2025-10-27 18:42:48 -07:00
Enrico Ros 4e4738d4f6 2 Open: dissolve 1.17 2025-10-27 18:31:31 -07:00
Enrico Ros dbfa7b0932 2 Open: TechLevels 2025-10-27 18:26:49 -07:00
Enrico Ros e90231d58d Roll AIX 2025-10-27 18:26:08 -07:00
Enrico Ros 9bc7d40425 2 Open: 200 2025-10-27 18:25:36 -07:00
Enrico Ros d2d5c0621b CC update 2025-10-27 18:24:50 -07:00
Enrico Ros e41d57c914 Update README.md 2025-10-27 18:24:50 -07:00
Enrico Ros 7c5336cba3 2 Open: Link to live changes. 2025-10-27 16:48:42 -07:00
Enrico Ros d041e4e2bf AIX: message the tRPC < 11.6 edge disconnections 2025-10-27 15:24:09 -07:00
Enrico Ros 7fba6255ff AIX: operation-level retry shall loop with abortSignal (to let the next iteration respond appropriately) rather than re-throw the RequestRetry which the tRPC router wouldn't know what to do with 2025-10-27 15:19:20 -07:00
Enrico Ros dc226d9ac0 AIX: Anthropic: support to retry on overloaded 2025-10-27 15:08:10 -07:00
Enrico Ros c01a937d7d AIX: operation-level retry (chatGenerate) with RequestRetry errors which can be thrown by parsers, if allowed 2025-10-27 15:07:41 -07:00
Enrico Ros ee6646a66f Server: abortable delay 2025-10-27 14:32:43 -07:00
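The retry commits above describe a pattern worth spelling out: an operation-level retry loop that checks the abortSignal on each iteration (rather than re-throwing a special error the tRPC router can't handle), paired with an abortable delay between attempts. A minimal hypothetical sketch — `RequestRetry`, `abortableDelay`, and `withRetries` are illustrative names, not the actual AIX implementation:

```typescript
// Illustrative stand-in for the error that parsers are allowed to throw.
class RequestRetry extends Error {}

// A delay that rejects early when the AbortSignal fires, instead of blocking.
function abortableDelay(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) return reject(new Error('aborted'));
    const timer = setTimeout(() => { cleanup(); resolve(); }, ms);
    const onAbort = () => { cleanup(); reject(new Error('aborted')); };
    const cleanup = () => { clearTimeout(timer); signal?.removeEventListener('abort', onAbort); };
    signal?.addEventListener('abort', onAbort, { once: true });
  });
}

// Operation-level retry: loop in place, re-checking the signal each iteration,
// so the special retry error never escapes to the router.
async function withRetries<T>(op: () => Promise<T>, maxRetries: number, signal?: AbortSignal): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (error) {
      // only loop on the dedicated retry error, within budget, and while not aborted
      if (!(error instanceof RequestRetry) || attempt >= maxRetries || signal?.aborted)
        throw error;
      await abortableDelay(250 * (attempt + 1), signal); // simple linear backoff
    }
  }
}
```

This matches the "reduce to 2 retries" knob below: the budget is just the `maxRetries` argument, and aborting the signal ends the loop at the next delay or iteration check.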
Enrico Ros b73aa16001 tRPC: lock to 11.5.1 for the time being, because of #857 2025-10-27 13:55:46 -07:00
Enrico Ros 92c875459a Keep sourcemaps for Open debugging 2025-10-27 11:57:25 -07:00
Enrico Ros 011fbbe834 Try bisecting to a 11.4.4 tRPC in the context of #857 2025-10-27 11:41:34 -07:00
Enrico Ros a921ea6fe5 Try upgrading to a 11.7 tRPC in the context of #857 2025-10-27 11:36:22 -07:00
Enrico Ros 82bcc6d5d5 Try reverting to a 11.6 tRPC in the context of #857 2025-10-27 11:27:42 -07:00
Enrico Ros f6d52da034 Try reverting to a pre-11.7 tRPC in the context of #857 2025-10-27 11:15:37 -07:00
Enrico Ros cd3159cacf Sherpa: reduce debug 2025-10-27 10:54:38 -07:00
Enrico Ros 1af4e18cb3 Help debugging #857 2025-10-27 10:31:31 -07:00
Enrico Ros 7b6eb94bf7 Help debugging #857 2025-10-27 10:30:37 -07:00
Enrico Ros 8cc6d65dd4 Help debugging #857 2025-10-27 10:16:23 -07:00
Enrico Ros 54e5f9a1bc Server: listModels: improve print 2025-10-27 10:11:27 -07:00
Enrico Ros fa28305141 AIX: un-warn dispatch-fetch issues, as they're Error Corrected extensively 2025-10-27 09:59:00 -07:00
Enrico Ros 1e56b36eae Server: move retriers, as it outgrew the original AIX.chatGenerate purpose 2025-10-27 09:58:46 -07:00
Enrico Ros e2253cde7f Server: tRPC fetchers & retrier: downgrade logging severity 2025-10-27 09:44:21 -07:00
Enrico Ros 6a4bfc1cf2 server: improve message 2025-10-25 15:27:25 -07:00
Enrico Ros dfc0d5088d AIX: retrier: reduce to 2 retries 2025-10-25 13:24:09 -07:00
Enrico Ros 8f154305e9 AIX: server: activate logging for auto-retry 2025-10-24 19:58:17 -07:00
Enrico Ros 09b96a01bf AIX: server: upstream auto-retry 2025-10-24 19:57:55 -07:00
Enrico Ros 1ce0c631b4 Browse: strings 2025-10-24 15:52:52 -07:00
Enrico Ros 61a5b6d5eb Improve strings 2025-10-24 15:52:36 -07:00
Enrico Ros ca62bad217 LLMs: OpenRouter: improve only-free 2025-10-24 14:40:31 -07:00
Enrico Ros 13f352a901 Setups: upgraded descriptions 2025-10-24 14:40:30 -07:00
Enrico Ros 775af756fd Button Beam Mobile: outlined 2025-10-24 13:51:12 -07:00
Enrico Ros 5c4545877d Composer: 0.5 shorter 2025-10-24 13:51:08 -07:00
Enrico Ros 9c820dcaf1 AppChat: breathing room on mobile/bottom 2025-10-24 13:51:06 -07:00
Enrico Ros 49f0bf4802 AIX: Gemini: finish reason typesafety 2025-10-24 13:03:23 -07:00
Enrico Ros fbb2f106f0 tRPC: edge procedure (semantics) 2025-10-24 12:41:15 -07:00
Enrico Ros cb46d3d536 AIX: extract the CG executor 2025-10-24 12:32:22 -07:00
Enrico Ros 84289c4ade AIX: dispatch: move IParticleTransmitter to the parsers, while impl is up 2025-10-24 11:06:52 -07:00
Enrico Ros b35ffd9983 AIX: router: move echo out of connection loop 2025-10-24 11:05:15 -07:00
Enrico Ros 8197fed036 server: fetchers: explained the tRPC error 2025-10-24 10:57:34 -07:00
Enrico Ros f6c40cdce6 AIX: Gemini: finish reason parser: compress 2025-10-24 10:53:30 -07:00
Enrico Ros b8cca72cf1 server: fetchers: errors: remove cause for security 2025-10-24 10:36:39 -07:00
Enrico Ros d20cafa22b server: fetchers: errors: match Vercel's edge runtime error messages (absence of _cause) 2025-10-24 10:31:51 -07:00
Enrico Ros 421a5ae681 server: report the error cat/codes to the client 2025-10-23 20:10:33 -07:00
Enrico Ros 49157b9efa server: fetchers: redo all with good error detection 2025-10-23 20:10:03 -07:00
Enrico Ros c11684a9cf server: improve error formatting, removing unneeded stacks 2025-10-23 20:08:02 -07:00
Enrico Ros 12aa812b37 server: improve safeErrorString 2025-10-23 20:08:02 -07:00
Enrico Ros 3667425c61 AIX: Refactor - Client - Retry State Machine 2025-10-23 05:25:10 -07:00
Enrico Ros fd0ab93744 AIX: Refactor - Client - Retry & Resume
This refactor allows for low-level looping on the client side.

This can be used for network errors between server<>upstream reported as particles,
as well as for client<>server connections.

One special case of this is the OpenAI system to reattach to detached (background) requests,
or as an alternative to re-fetch them from the server once completed.
2025-10-23 04:26:06 -07:00
Enrico Ros a0b549855f AIX: Refactor - Router - Composable & Resumable
This refactor decomposes the chatGeneration procedure into composable blocks.

This allows, for instance, chatGeneration-like outputs from different inputs,
enabling for instance `resumability` of a background connection.

Moreover this reorganizes the phases of a CG operation, and includes a generic executor
that takes creator functions for Dispatchers.
2025-10-23 04:11:05 -07:00
Enrico Ros c70c89c2e8 AIX: Client: error as message in ContentReassembler promise chain 2025-10-23 04:11:05 -07:00
Enrico Ros 32c5c00d55 AIX: Client: error classification 2025-10-23 04:11:05 -07:00
Enrico Ros 013d0e0217 AIX: pre-refactor nits 2025-10-23 04:11:05 -07:00
Enrico Ros f0bf866654 Anthropic Skills: on extra 2025-10-23 01:47:40 -07:00
Enrico Ros 2c14cb1113 nit 2025-10-23 01:32:13 -07:00
Enrico Ros 15abecfbb6 LLMs: OpenRouter: add the haiku 4.5 thinking variant 2025-10-23 01:02:52 -07:00
Enrico Ros 827d64d49a remove icon 2025-10-23 01:02:52 -07:00
Enrico Ros 01c45b2286 Anthropic Skills: improve config 2025-10-23 01:02:24 -07:00
Enrico Ros d3e5c196f9 LLMs: remove vendorspec from the params editor 2025-10-23 00:36:22 -07:00
Enrico Ros 71978b94f2 Fragments: support placeholders of 'code-exec' type 2025-10-23 00:11:18 -07:00
Enrico Ros 79da87d823 AIX/LLMs: Anthropic: Skills: improve reporting of steps 2025-10-23 00:06:58 -07:00
Enrico Ros 1c19f36783 AIX/LLMs: Anthropic: improve spec 2025-10-22 23:58:13 -07:00
Enrico Ros a4d4e351e5 AIX/LLMs: OpenRouter search 2025-10-22 23:18:55 -07:00
Enrico Ros 45ef2afccb LLM Options: support 'all from vendor X' 2025-10-22 23:10:12 -07:00
Enrico Ros 9ef5b61722 AIX: Anthropic: Skills: parser fix 2025-10-22 22:53:57 -07:00
Enrico Ros ff008d1034 AIX: Anthropic: Parser: event sequence debugger 2025-10-22 22:39:01 -07:00
Enrico Ros 3cd38f471e DMessage: session draft 2025-10-22 22:32:44 -07:00
Enrico Ros 1581d46be7 AIX: Anthropic Skills dispatch / parse 2025-10-22 22:32:43 -07:00
Enrico Ros 32571e15eb LLMs: Anthropic: Custom Skills CRUD 2025-10-22 22:13:36 -07:00
Enrico Ros d69adaa6af LLMs: Anthropic Skills model params editor 2025-10-22 22:13:36 -07:00
Enrico Ros 246968098a LLMs: Anthropic Skills headers 2025-10-22 21:45:51 -07:00
Enrico Ros 861c4ef370 LLMs: Anthropic Skills model params 2025-10-22 21:45:35 -07:00
Enrico Ros bfe94e98f2 Anthropic: fix old-school get/post 2025-10-22 20:55:30 -07:00
Enrico Ros 9152318ef6 Merge pull request #855 from enricoros/claude/issue-829-20251022-2344
fix(call): propagate speech recognition errors to UI
2025-10-22 17:02:53 -07:00
claude[bot] 302694bdad fix(call): propagate speech recognition errors to UI
- Read recognitionState.errorMessage in Telephone component
- Pass error message to CallStatus component
- Display specific error messages instead of generic fallback
- Matches error handling pattern used in Chat/Composer

This ensures users see detailed error messages instead of the generic
"Browser may not support" text.

Fixes #829 by making speech recognition errors visible to users.

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-22 23:47:26 +00:00
claude[bot] 14602a1411 LLMs: add user override for context window and max output tokens. Fixes #853 2025-10-22 14:57:55 -07:00
Enrico Ros 044baa5fc2 Starring: improve starring in models modal 2025-10-22 14:10:38 -07:00
Enrico Ros 3fa09194a7 LLM Options: reset to defaults on Mobile 2025-10-22 14:10:14 -07:00
Enrico Ros d3aa10f9d1 LLM Options: reset to defaults 2025-10-22 14:09:47 -07:00
Enrico Ros e2b2d5974f AIX: Gemini: detect internal 503s 2025-10-22 04:16:19 -07:00
Enrico Ros d99668aa40 AIX: fix Openrouter parsing 2025-10-22 03:56:12 -07:00
Enrico Ros 5f8d5678fa AIX: server: improve listModel errors 2025-10-22 03:07:39 -07:00
Enrico Ros 14f245df2b AIX: server: improve listModel errors 2025-10-22 02:44:24 -07:00
Enrico Ros f104fb64fd LLMs: Anthropic: update cache costs for 1M models 2025-10-21 02:06:43 -07:00
Enrico Ros 3c2d7a636a LLMs: Perplexity: remove globes from models 2025-10-21 01:35:45 -07:00
Enrico Ros 31b215e58b Roll AIX 2025-10-21 01:24:52 -07:00
Enrico Ros 53ae177396 LLMs: net-dependent Context computation 2025-10-21 01:24:52 -07:00
Enrico Ros 3e1bb3bb3d LLMs: Anthropic: show search caps 2025-10-21 01:24:52 -07:00
Enrico Ros eac150f590 LLMs: Anthropic: sort correctly 2025-10-21 01:24:52 -07:00
claude[bot] 5466b8a265 *LLMs: Add support for Anthropic 1M token context window
Fixes #852
2025-10-21 01:24:52 -07:00
Enrico Ros c3d10c355f *Improved keyboard/focus navigation on menus 2025-10-20 18:44:47 -07:00
Enrico Ros d96a8c14b9 CloseablePopup: assume trueish 2025-10-20 18:44:47 -07:00
Enrico Ros be94f31a85 AIX: Perplexity: parse the new undocumented chunk-like object. Fixes #851 2025-10-20 15:09:32 -07:00
Enrico Ros f7ce349125 AIX: Perplexity: parse and show costs. #851 2025-10-20 15:08:52 -07:00
Enrico Ros a4516b5fa6 Wire/server: pretty-print server-side Zod errors in return messages #851 2025-10-20 14:43:35 -07:00
Enrico Ros 7c1f30c3c7 workflows: CC: npm i hint 2025-10-19 14:51:38 -07:00
Enrico Ros df67be4b03 GC: identified issue with open beams 2025-10-19 14:49:22 -07:00
Enrico Ros 578bb93d8b Env: production helper fix 2025-10-19 14:45:50 -07:00
Enrico Ros b4c5a24864 Env: production helper 2025-10-19 12:53:27 -07:00
Enrico Ros c4a38a6cf6 LLMs: Anthropic: caching config below the fold 2025-10-19 12:40:44 -07:00
Enrico Ros e58f6cc48e Hidden state: use visible accelerator 2025-10-19 12:40:30 -07:00
claude[bot] 8a0c4747c7 fix: preserve model visibility across updates (complete fix)
Fixes #850 - Model visibility was being reset after app updates.

Root cause: User visibility changes were stored in `hidden` field instead of
`userHidden`, but the preservation logic only looked for `userHidden`. This
caused user preferences to be lost during model updates.

Changes:
- Added isLLMHidden() helper to compute effective visibility (userHidden ?? hidden)
- Fixed all write paths to set userHidden instead of hidden (3 files)
- Fixed all read paths to use isLLMHidden() (7 files, 14 locations)

This ensures:
- User preferences persist across updates
- Vendor visibility changes still propagate for untouched models
- Bulk operations work correctly

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-19 19:17:41 +00:00
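The visibility fix above hinges on one rule: the user's explicit choice (`userHidden`) overrides the vendor default (`hidden`). A sketch of the `isLLMHidden()` helper the commit describes — the `DLLM` shape here is simplified for illustration:

```typescript
// Simplified model shape; the real DLLM type carries many more fields.
interface DLLM {
  hidden?: boolean;      // vendor/default visibility, rewritten on model updates
  userHidden?: boolean;  // the user's override, set only when they toggle visibility
}

// Effective visibility: user preference first, vendor default second.
function isLLMHidden(llm: DLLM): boolean {
  return llm.userHidden ?? llm.hidden ?? false;
}
```

Because updates rewrite `hidden` but preserve `userHidden`, user choices survive a model refresh, while models the user never touched still follow vendor visibility changes.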
Enrico Ros 8bef4b9aae FormSelectControl: minWidth can be redefined 2025-10-18 16:50:17 -07:00
Enrico Ros 66382ed980 AIX: Anthropic: Search/Fetch - done
NOTE: this works without saving the server-side tool invocation and the subsequent responses
to AIX particles, and consequently to DMessageFragments of the appropriate type.
-> Shall do it with execution graph fragments.
2025-10-18 14:51:21 -07:00
Enrico Ros 8984b65a51 AIX: Anthropic: Search: do not cite websites - too noisy 2025-10-18 14:30:49 -07:00
Enrico Ros efea6dafbd AIX: Anthropic: Fetch/Search: use placeholders until Tool execution graph abstraction 2025-10-18 14:20:55 -07:00
Enrico Ros 6d4d05e8f7 Roll AIX 2025-10-18 14:18:53 -07:00
Enrico Ros 560a07b4fe LLMs: Anthropic: compress 2025-10-18 14:18:24 -07:00
Enrico Ros fbaff3bde3 AIX: Anthropic: LLM param edit 2025-10-18 13:51:24 -07:00
Enrico Ros 2a01f929f1 AIX: Anthropic: wires nits 2025-10-18 13:20:16 -07:00
Enrico Ros d1d0c32a92 AIX: Anthropic: improve merge of #842 2025-10-18 12:37:45 -07:00
Enrico Ros 3a513e2a4d Merge remote-tracking branch 'opensource/claude/issue-842-20251018-0728' into m2 2025-10-18 12:37:41 -07:00
Enrico Ros 9b32c4b8c5 AIX: Anthropic: headers improvement 2025-10-18 12:37:30 -07:00
Enrico Ros 64542af5af Starring: also codify Emoji 2025-10-18 11:42:14 -07:00
Enrico Ros 1db35feeca Starring: also in useLLMSelect 2025-10-18 11:38:34 -07:00
Enrico Ros 7392063e25 Starring: centralized with styles and memo 2025-10-18 11:38:28 -07:00
Enrico Ros e6745b16f6 Panes: fix panes visibility 2025-10-18 10:41:32 -07:00
Enrico Ros be09b452f0 Panes: persist optima group expanded states 2025-10-18 10:41:07 -07:00
Enrico Ros 42588444a5 Optima Panels: controllable grouped list 2025-10-18 10:36:01 -07:00
Enrico Ros dc48bd1222 OptimaPanelGroupedList: nits 2025-10-18 10:34:37 -07:00
claude[bot] b59eb6cbfb feat: Add Anthropic web search and web fetch tools support
Implements comprehensive support for Anthropic's web search (web_search_20250305) and web fetch (web_fetch_20250910) tools.

- Add llmVndAntWebSearch and llmVndAntWebFetch parameters with ['auto', 'off'] options
- Enable tools for Claude 4.5, 4.1, 4, 3.7, 3.5 Sonnet/Haiku/Opus models (including thinking variants)
- Inject web_search_20250305 and web_fetch_20250910 tools based on parameter values
- Configure web search with max_uses=5 for progressive searches
- Configure web fetch with max_uses=5 and citations enabled
- Add dynamic beta header injection for web fetch (web-fetch-2025-09-10)
- Add UI controls in model settings for easy parameter configuration

Parser already supports web_search_tool_result and web_fetch_tool_result blocks (no changes needed).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-18 07:34:51 +00:00
Enrico Ros a75a31ff04 AIX: Anthropic: Parser: improve 2025-10-18 00:05:09 -07:00
Enrico Ros a0f97e9cd8 AIX: Anthropic: major protocol update 2025-10-17 23:40:31 -07:00
Enrico Ros fe6e7245de AIX: Anthropic: remove vnd.ant.tools for upgrade soon 2025-10-17 23:13:29 -07:00
Enrico Ros a46a9bf76c fix: #845 - no permissions on attachments on Edge 2025-10-17 15:45:04 -07:00
claude[bot] 925e500dc2 fix: simplify benign DOM error handling
- Use console.warn for benign removeChild errors
- Skip PostHog reporting for these errors
- More succinct implementation

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-17 15:00:26 -07:00
claude[bot] 22f0a70272 fix: suppress tiktoken WebAssembly loading errors from PostHog 2025-10-17 14:54:20 -07:00
Enrico Ros 220cc60f7d workflows: CC: DM to sonnet 2025-10-17 04:56:53 -07:00
Enrico Ros 3964fca4b2 cc: allow mcps 2025-10-16 12:18:23 -07:00
Enrico Ros 8fdbb21300 Roll AIX 2025-10-16 12:17:33 -07:00
Enrico Ros c42c9545d2 slashcommands: /aix:roll-aix 2025-10-16 12:16:56 -07:00
Enrico Ros 0de37e337b LLMs: OpenAI: added the no-stream option to GPT-5-class models for unverified orgs 2025-10-16 11:37:27 -07:00
Enrico Ros 3ecf7f6016 Merge remote-tracking branch 'opensource/claude/issue-847-20251016-1807' 2025-10-16 11:35:16 -07:00
Enrico Ros da7a62945c workflows: CC: enable reviews and triaging, not DMs 2025-10-16 11:32:37 -07:00
Enrico Ros c876390e27 workflows: CC: enable other users 2025-10-16 11:14:22 -07:00
claude[bot] 9bbc2a2e00 fix: Add llmForceNoStream parameter to OpenAI models with Org ID streaming issues
- Added parameter to GPT-5, GPT-5 Pro, GPT-5 Codex, and GPT-5 Mini
- These models require Organization ID verification for streaming
- Kept existing parameter on o3 and o3-pro models
- Did not modify GPT-5 Nano, GPT-5 Chat Latest, GPT-4.1, or GPT-4o models which work fine

Fixes #847

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-16 18:09:53 +00:00
Enrico Ros 2b18cbc3b9 Errors: cleanup domains 2025-10-16 10:56:32 -07:00
Enrico Ros 388391ddae Errors: remove unused var 2025-10-16 10:46:03 -07:00
Enrico Ros 3e4e6b2f4b BackupRestore: fix a potential removal issue 2025-10-16 10:28:37 -07:00
Enrico Ros e6a65bdf8e BackupRestore: improve messaging 2025-10-16 10:05:05 -07:00
Enrico Ros 0e09cf3d84 Merge pull request #846 from enricoros/claude/issue-844-20251016-0857
fix: properly handle null and undefined in clipboard operations
2025-10-16 02:01:28 -07:00
claude[bot] 5634aa0cac fix: properly handle null and undefined in clipboard operations
- Add nullish coalescing (??) after optional chaining to ensure string return
- Prevents undefined from propagating through the promise chain
- Fixes potential TypeError when calling .startsWith() on undefined

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2025-10-16 08:58:59 +00:00
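The clipboard fix above is a one-operator change worth illustrating: optional chaining alone can yield `undefined`, which then crashes a later `.startsWith()`; appending `?? ''` keeps the value a string end to end. An illustrative sketch (not the actual clipboard code):

```typescript
// Without `?? ''`, clipboardText?.trim() is `string | undefined`, and calling
// .startsWith() on undefined throws a TypeError at runtime.
function looksLikeUrl(clipboardText: string | null | undefined): boolean {
  const text = clipboardText?.trim() ?? '';  // nullish coalescing after optional chaining
  return text.startsWith('http://') || text.startsWith('https://');
}
```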
Enrico Ros 07916be684 workflows: CC: dm to 100 2025-10-16 01:11:20 -07:00
Enrico Ros 8d20b4675b workflows: CC: allow DM to PR 2025-10-16 00:54:19 -07:00
Enrico Ros d906669ea4 Fix: roll posthog 2025-10-16 00:44:34 -07:00
Enrico Ros 5d7b00f0dc workflows: CC: up max turns 2025-10-16 00:07:52 -07:00
Enrico Ros 740d76c15c LLMs: OpenAI: add gpt-5-search-api 2025-10-15 19:36:38 -07:00
Enrico Ros ca4d21d4b8 LLMs: Gemini: improve logs 2025-10-15 19:19:30 -07:00
Enrico Ros e4defc1baf tRPC Fetchers: improve logging 2025-10-15 19:10:46 -07:00
Enrico Ros 9ea859081d Error Message test 2025-10-15 18:40:47 -07:00
Enrico Ros 87d8320b31 tRPC: Fetchers: show warning also on network connection lost 2025-10-15 18:28:56 -07:00
Enrico Ros 84aea90860 KB: remove AIX adapters 2025-10-15 18:26:38 -07:00
Enrico Ros 95f35cb5cf Roll AIX 2025-10-15 18:08:12 -07:00
Enrico Ros c79ba097c0 CC: slashcommands: idiosyncratic updates 2025-10-15 17:54:14 -07:00
Enrico Ros 8ea1f02c86 workflows: label triaged/reviewed issues/prs 2025-10-15 17:51:10 -07:00
Enrico Ros 674c9c8c25 LLMs: Ollama: update and remove all descriptions (not used) 2025-10-15 17:29:02 -07:00
Enrico Ros 98a3e7e185 LLMs: Alibaba: update: remove any model information and just return the sorted list from the API 2025-10-15 17:29:02 -07:00
Enrico Ros ee00c53ada LLMs: Perplexity: update 2025-10-15 17:29:02 -07:00
Enrico Ros 0553f64fe8 LLMs: xAI: update 2025-10-15 17:29:02 -07:00
Enrico Ros ff06f6f04c LLMs: Groq: update 2025-10-15 17:29:02 -07:00
Enrico Ros 3f45617e06 LLMs: Deepseek: update 2025-10-15 17:29:02 -07:00
Enrico Ros 9d93c8c55a LLMs: Mistral: update all 2025-10-15 17:29:02 -07:00
Enrico Ros 73eaf740db LLMs: Anthropic: Haiku 4.5 (Thinking and not) 2025-10-15 17:29:02 -07:00
Enrico Ros 48426d5022 AIX: OpenAI: [Mistral, 2025-10-15]: non-standard delta.content for thinking blocks 2025-10-15 17:29:02 -07:00
Enrico Ros c79237b419 CC: slashcommands: Models ollama automation 2025-10-15 17:29:02 -07:00
Enrico Ros b0abaf4d9e CC: slashcommands: Models sync - pass1 2025-10-15 17:29:02 -07:00
Enrico Ros ec92a8d31a CC: slashcommands: API syncs 2025-10-15 17:29:02 -07:00
Enrico Ros a4600a4d1d tRPC Fetchers: show content type on parse failures 2025-10-15 17:29:02 -07:00
Enrico Ros ad6a465ce7 workflows: allow slashcommands for DMs 2025-10-15 17:29:02 -07:00
Enrico Ros 0820bb5af6 workflows: elevate max turns 2025-10-15 12:59:39 -07:00
Enrico Ros 73f8488d22 AIX: enable Abort on GET 2025-10-15 12:58:27 -07:00
Enrico Ros 2b3c1c38f3 tRPC Fetchers: improve error reporting at all 3 stages: connect, get response, parse contents 2025-10-15 12:58:24 -07:00
Enrico Ros 59f379f46b tRPC Fetchers: improve JSON decoding errors 2025-10-15 12:58:21 -07:00
Enrico Ros 2bc6ecbe4c tRPC: improve Abort support 2025-10-15 12:58:18 -07:00
Enrico Ros 8274a34841 InlineError: support newlines 2025-10-15 12:58:14 -07:00
Enrico Ros 6e7197caa3 QueryClient: explicit option 2025-10-15 12:57:33 -07:00
Enrico Ros 7c78d48b6c workflows: reduce noise 2025-10-14 22:50:30 -07:00
Enrico Ros b149eb7fa2 workflows: Claude Code DMs 2025-10-14 21:15:07 -07:00
Enrico Ros ba79a3c42c workflows: Claude Code Issue Triage 2025-10-14 21:15:07 -07:00
Enrico Ros 4445ac295f workflows: Claude Code PR Reviews 2025-10-14 21:15:07 -07:00
Enrico Ros 09c2a8b072 KB: renamed app-routing 2025-10-14 18:19:02 -07:00
Enrico Ros 92e371837d Modals: fullscreen options on desktop 2025-10-14 17:54:40 -07:00
Enrico Ros 7fad41dc8a FormSliderControl: option to hide the track 2025-10-14 17:31:40 -07:00
Enrico Ros 0be8ac7e09 CLAUDE.md 2025-10-14 17:31:40 -07:00
Enrico Ros de6e8a047c KB: Index 2025-10-14 17:31:40 -07:00
Enrico Ros 92955f92bf KB: AIX supports 2025-10-14 17:31:40 -07:00
Enrico Ros 5327866836 KB: routing architecture 2025-10-14 17:31:40 -07:00
Enrico Ros 54b8836faa KB: Update the params system 2025-10-14 17:31:40 -07:00
Enrico Ros eb39db9974 KB: structure 2025-10-14 17:31:40 -07:00
Enrico Ros 087e6e2eaf AIX: Client: decimator cleanups 2025-10-14 02:52:28 -07:00
Enrico Ros 295d91b310 AIX: Client: improve disconnect message. 2025-10-14 02:50:50 -07:00
Enrico Ros f75bcb78d7 FIX: AIX: Anthropic: hotfix: disable thinking with forced tool use (policy of: any, tool) 2025-10-14 02:48:22 -07:00
Enrico Ros ffb32d8720 Roll AIX 2025-10-14 02:22:48 -07:00
Enrico Ros 879458d692 Roll packages 2025-10-14 02:22:27 -07:00
Enrico Ros 96eece3a3e Optima: MobilePanel: pad bottom 2025-10-14 01:56:16 -07:00
Enrico Ros dc75136131 LLMs: OpenAI: flush 2025-10-14 01:15:28 -07:00
Enrico Ros 57c43b3c4e LLMs: OpenAI: reduce model desc size 2025-10-14 01:12:03 -07:00
Enrico Ros 4c5b7677e6 LLMs: nit 2025-10-14 01:12:03 -07:00
Enrico Ros 43890150e5 AIX: OpenAI: fix NS function parsing 2025-10-14 01:11:19 -07:00
Enrico Ros bc86214c5e LLMs: xAI: grok names 2025-10-13 18:51:30 -07:00
Enrico Ros ef1f412019 LLMs: OpenAI: slimmer 2025-10-13 18:51:14 -07:00
Enrico Ros 1249efb53b LLMs: xAI: add the latest groks. Fixes #837 2025-10-13 18:50:21 -07:00
Enrico Ros 8bc81e45ce Models: UI hide hidden by default 2025-10-13 17:52:23 -07:00
Enrico Ros 810f316185 Nit 2025-10-13 17:23:34 -07:00
Enrico Ros 5b49e801d1 Roll AIX 2025-10-13 17:20:03 -07:00
Enrico Ros 3269e10da9 LLMs: OpenAI: enable Audio output models 2025-10-13 17:10:23 -07:00
Enrico Ros 53a57fd7ff AIX: OpenAI: support Audio output models S/NS 2025-10-13 17:10:09 -07:00
Enrico Ros dbbf25c3af LLMs: new model description builder 2025-10-13 17:09:17 -07:00
Enrico Ros a2ff00f53b LLMs: mandatory fallbacks 2025-10-13 13:49:08 -07:00
Enrico Ros 4904383838 BlockPartModelAnnotations: fix Favicons for Google Search icons (vertex) 2025-10-13 13:25:00 -07:00
Enrico Ros 8221444308 LLMs: Gemini: enable search 2025-10-13 13:24:31 -07:00
Enrico Ros 7cd94b3163 *AIX: Gemini: Google Search support 2025-10-13 13:24:16 -07:00
Enrico Ros 52cdf7da4e AIX: Hosted tools - bits of cleanup 2025-10-13 12:46:02 -07:00
Enrico Ros 6ff010ae0e AIX: Gemini: remove the old google_search_retrieval 2025-10-13 12:05:19 -07:00
Enrico Ros 6d81150975 AIX: Gemini: parse also finish message and url grounding 2025-10-13 11:55:44 -07:00
Enrico Ros 0fdcc4c64d LLMs: Gemini: remove 4:5 and 5:4 support, not in the API 2025-10-13 11:52:30 -07:00
Enrico Ros f272c9cb12 AIX: Gemini: expand to all finish reasons 2025-10-13 11:36:11 -07:00
Enrico Ros 5354f83736 AIX: OpenAI: Responses: uniform S/NS parsing 2025-10-13 11:26:57 -07:00
Enrico Ros f4b2f36ac0 AIX: OpenAI: Responses: handle streaming stop reasons 2025-10-13 10:49:59 -07:00
Enrico Ros 5fca834c20 AIX: OpenAI: Responses: handle max_tokens 2025-10-13 10:49:59 -07:00
Enrico Ros fff48335ae AIX: OpenAI: skip also markdown restoration when custom tools with restrictive policies are applied 2025-10-13 10:49:59 -07:00
Enrico Ros f39a1825cf AIX: OpenAI: skip hosted tools when custom tools with restrictive policies are applied 2025-10-13 10:17:21 -07:00
Enrico Ros c1b10405a5 AIX: Anthropic: document context_window_exceeded 2025-10-13 10:17:21 -07:00
Enrico Ros 37ba583cf2 AIX: Anthropic: parse new stop reasons 2025-10-13 10:17:21 -07:00
Enrico Ros 4beb7de83f AIX: add tokenStopReasons: ok-pause, and filter-refusal 2025-10-13 10:17:21 -07:00
Enrico Ros cb8202e327 dMessageUtils: pedantic 2025-10-13 10:17:20 -07:00
Enrico Ros 90c90f78b6 LLMs: OpenAI: exclude sora-2/pro from Language models 2025-10-11 20:39:22 -07:00
Enrico Ros e700c27256 presentErrorToHumans: fix 2025-10-11 20:39:18 -07:00
Enrico Ros 7372287b5c AIX: Resumability: Block Control 2025-10-10 18:33:47 -07:00
Enrico Ros d059948f62 useFragmentBuckets: check for error 2025-10-10 18:11:42 -07:00
Enrico Ros 1cb6491d17 AIX: Errors are not appended as Text anymore 2025-10-10 18:04:26 -07:00
Enrico Ros 3a6e8a5f27 AIX: Resumability: OpenAI: request and parse 2025-10-10 17:49:50 -07:00
Enrico Ros c0cd820880 AIX: Resumability: enablement downstream 2025-10-10 17:49:50 -07:00
Enrico Ros 7b5655dd6d AIX: Resumability: enablement upstream 2025-10-10 17:49:50 -07:00
Enrico Ros 0f4c108614 AIX: Client: improve reuse 2025-10-10 17:35:54 -07:00
Enrico Ros 86f4cc66d1 DMessage: upstream handle support 2025-10-10 17:33:04 -07:00
Enrico Ros ca38e7f160 AIX: OAI: Responses: typo 2025-10-10 17:33:04 -07:00
Enrico Ros 99bd54ca79 BlockPartPlaceholder: fix mx: 1.5 2025-10-10 17:32:57 -07:00
Enrico Ros 9a3ef83078 README: Link images 2025-10-10 09:17:43 -07:00
Enrico Ros c1d3c5d350 AIX: Inspector: link in DesktopNav > Tools 2025-10-10 09:17:36 -07:00
Enrico Ros a36e202c80 AIX: Inspector: Quick Toggle 2025-10-10 09:06:14 -07:00
Enrico Ros b713b65a35 AIX: Inspector: show body size 2025-10-10 08:53:33 -07:00
Enrico Ros 925445c729 README: bits 2025-10-10 08:26:34 -07:00
Enrico Ros ce8140ce22 README: add and section out some changes 2025-10-10 02:17:15 -07:00
Enrico Ros d2f60e51c7 Add mascot section to README
Added a section for the mascot with an image.
2025-10-10 02:15:55 -07:00
Enrico Ros c66885d25c News: fix style 2025-10-10 01:24:16 -07:00
Enrico Ros 8d4ca7b547 2.0 update package 2025-10-10 01:06:08 -07:00
Enrico Ros 280b32b3a9 2.0 Adjusted news item 2025-10-10 00:49:43 -07:00
Enrico Ros 522bd890c1 2.0 Simple news items 2025-10-10 00:38:31 -07:00
Enrico Ros 88e1f51099 Model Configuration: don't fullscreen on mobile 2025-10-10 00:34:51 -07:00
Enrico Ros 8774b222d9 2.0 release graphics 2025-10-09 23:58:48 -07:00
Enrico Ros b9ef1d608c News: rename Callout 2025-10-09 22:02:28 -07:00
Enrico Ros a0d25a1d48 News: wire Dev up top 2025-10-09 21:41:05 -07:00
Enrico Ros 92cd9e5930 News: add Dev item 2025-10-09 21:39:41 -07:00
Enrico Ros 3099b0d0ec News: extract Card 2025-10-09 21:19:27 -07:00
Enrico Ros 4a5ce94d29 Fix removal of Release.App 2025-10-09 20:57:15 -07:00
Enrico Ros b47a1fd562 Remove Release.App/News 2025-10-09 20:51:11 -07:00
Enrico Ros 10bef4f75c Dissolve app.version 2025-10-09 20:44:51 -07:00
Enrico Ros 41c571caf5 Name as Open 2025-10-09 20:41:45 -07:00
Enrico Ros a21b049437 BackupRestore: remove version 2025-10-09 20:37:12 -07:00
Enrico Ros f06fbec8df webGeolocationUtils: correct package version 2025-10-09 20:35:56 -07:00
Enrico Ros 24b6b4e1a9 Update description 2025-10-09 20:10:16 -07:00
Enrico Ros df8f9b3e3a AIX: limit the echo request size 2025-10-09 19:55:24 -07:00
Enrico Ros 85a55bcc4c BlockEdit_TextFragment: fix escape key 2025-10-09 19:55:11 -07:00
Enrico Ros facb2e3f2b Update README.md 2025-10-09 19:44:12 -07:00
Enrico Ros f6e79510c9 Revise README for clarity and structure
Updated the README to improve clarity and formatting.
2025-10-09 19:43:11 -07:00
Enrico Ros 528055929a Legacy 2025-10-09 19:34:42 -07:00
Enrico Ros 7a1774a2ba Roll year 2025-10-09 19:32:15 -07:00
Enrico Ros 66749ded0a Update the README.md 2025-10-09 19:30:15 -07:00
Enrico Ros 6f74dc6c72 Misc small cleanups 2025-10-09 16:19:53 -07:00
Enrico Ros b8d27346e0 Update docs and refs (v2-dev -> main) 2025-10-09 16:19:53 -07:00
Enrico Ros e1e73cd260 Docker: update for main branch transition
Triggers on 'main'
2025-10-09 16:19:53 -07:00
Enrico Ros a1bf15c316 AixDebuggerDialog: fix scroll 2025-10-09 06:34:35 -07:00
Enrico Ros e69bf34ed6 GoodModal: uncontrolled maximization 2025-10-08 14:53:10 -07:00
Enrico Ros fa1a977870 AixDebuggerDialog: full screen on Mobile 2025-10-08 11:51:46 -07:00
Enrico Ros 7ed4ccb66c LoggerViewerDialog: full screen on Mobile 2025-10-08 11:50:10 -07:00
Enrico Ros 76a90ede24 ShortcutsModal: full screen on Mobile 2025-10-08 11:43:51 -07:00
Enrico Ros 89e8c24f46 Settings and Models: full screen on Mobile 2025-10-08 11:42:37 -07:00
Enrico Ros 430c7602d4 GoodModal: support Fullscreen and fix display:grid
Note that display:grid was fitting to contents, but we prefer display:flex (direction:column),
so we changed the maxWidth property from a fixed 700 to adapt to the screen size.
2025-10-08 11:42:23 -07:00
Enrico Ros 51b9fbac0f Roll packages 2025-10-08 08:08:22 -07:00
Enrico Ros 63eba761c5 Fix the Draw Provider selector 2025-10-07 09:25:01 -07:00
Enrico Ros e80fb7aa73 OpenAI gpt-image-1-mini: enable inpaint 2025-10-07 09:24:32 -07:00
Enrico Ros 8b2b98fc10 OpenAI gpt-image-1-mini support 2025-10-07 09:12:39 -07:00
Enrico Ros c9712c72a0 Fix Sonnet-4.5 display name 2025-10-07 08:52:35 -07:00
Enrico Ros d0ad4095c0 LLMs: Add OpenAI Gpt-5 Pro 2025-10-06 11:14:44 -07:00
Enrico Ros 1c00286a70 Roll AIX 2025-10-05 09:48:23 -07:00
Enrico Ros 8687c6b08b Merge pull request #839 from sam0jones0/add-claude-4.1-4.5-thinking-support
Add thinking support for Claude 4.1 Opus and 4.5 Sonnet via OpenRouter
2025-10-05 09:44:57 -07:00
Enrico Ros 7bdf467833 LLMs: disable staging log on openai 2025-10-03 20:53:14 -07:00
Enrico Ros 39736fbd27 idUtils: uuid underscore prefix stripping 2025-10-03 17:11:36 -07:00
Enrico Ros f5e34e8096 idUtils: uuid validation (fast, inaccurate) 2025-10-03 17:11:36 -07:00
Enrico Ros b2246ed922 Prevent wrong error matching 2025-10-03 13:05:05 -07:00
Enrico Ros a499e8463c LLMs: OpenAI: debug only in dev 2025-10-03 12:52:40 -07:00
Enrico Ros 708ae291cc LLMs: Gemini: debug only in dev 2025-10-03 12:52:36 -07:00
Enrico Ros 0d4db0322b tRPC: decode gemini 403 !ok 2025-10-03 12:40:28 -07:00
Enrico Ros 39ae2e47f9 LLMs: Gemini: enable Aspect Ratio 2025-10-02 20:10:42 -07:00
Enrico Ros 25159669df AIX/LLMs: Gemini: Nano Banana image aspect ratio 2025-10-02 20:10:37 -07:00
Enrico Ros 4e24281e18 LLMs: Gemini: update models 2025-10-02 18:48:13 -07:00
Enrico Ros d9bdeeb6b3 GoodModal: support darker bg 2025-10-02 18:48:13 -07:00
Enrico Ros b2847e7026 AIX: OpenRouter: let FCs through 2025-10-02 18:47:20 -07:00
Sam Jones 3f6bd90f64 Add thinking support for Claude 4.1 Opus and 4.5 Sonnet via OpenRouter 2025-10-02 11:30:39 +01:00
Enrico Ros 6b5984deac AIX: OpenAI: Variant Support 2025-10-01 17:48:39 -07:00
Enrico Ros 2dfaec9216 RenderImageURL: change the open/fullscreen icon 2025-10-01 06:21:08 -07:00
Enrico Ros ddbc5e65e8 AIX: OpenAI: fix Responses API breaking change 2025-09-30 18:55:59 -07:00
Enrico Ros 5dae51d2a1 AIX: extract CG impl for server-side usage, retry, etc. 2025-09-29 19:29:26 -07:00
Enrico Ros 75215955be AIX: export server Context type 2025-09-29 19:29:10 -07:00
Enrico Ros 79ee764a9f AIX: Inspector: limit to specific requests in production 2025-09-29 19:29:07 -07:00
Enrico Ros dce27e89a1 AIX: roll 2025-09-29 15:38:15 -07:00
Enrico Ros 448df4baf8 useLLMSelect: model options 2025-09-29 15:35:35 -07:00
Enrico Ros dafd09084a dMessageUtils: support Anthropic 4.5 and Gemini Robotics 2025-09-29 15:03:06 -07:00
Enrico Ros cae7d06256 LLMs: Gemini: add newer Flash/Flash-Lite and Robotics models 2025-09-29 15:03:06 -07:00
Enrico Ros a27eae46f6 LLMs: Gemini: remove vague symlinks 2025-09-29 15:03:06 -07:00
Enrico Ros 9f067c07f0 LLMs: Gemini: remove gen 1.5 models 2025-09-29 15:02:06 -07:00
Enrico Ros 1f0be73695 LLMs: OpenAI GPT-5 Codex 2025-09-29 13:28:26 -07:00
Enrico Ros ce6d42dcdd AIX: OpenAI Responses: allow re-entering the same summary block 2025-09-29 13:28:26 -07:00
Enrico Ros 439740adba ContentReassembler: newline before breakage messages 2025-09-29 13:28:26 -07:00
Enrico Ros cff36c0c31 AutoBlocksRenderer: re-enable ERC on completion 2025-09-29 13:28:26 -07:00
Enrico Ros 7c9edaf186 BlockPartModelAux: support Markdown (but off for now) 2025-09-29 13:28:26 -07:00
Enrico Ros bbc736d72a LLMs: Anthropic thinking models first 2025-09-29 12:00:14 -07:00
Enrico Ros 47439b9907 LLMs: Anthropic Claude Sonnet 4.5 2025-09-29 12:00:14 -07:00
Enrico Ros c3274d66c9 Move tf pointers 2025-09-29 07:49:16 -07:00
Enrico Ros d4836914dd [TOOLS] AIX: support Vnd.Ant 2024-10-22 2025-09-29 07:42:39 -07:00
Enrico Ros 4a44393878 AIX: Inspector: fixes 2025-09-29 07:42:39 -07:00
Enrico Ros 123ebc0f26 AIX: remove old debugging 2025-09-29 07:42:39 -07:00
Enrico Ros 0a133a764b DMessageMetadata: initial recipients 2025-09-29 07:42:39 -07:00
Enrico Ros c1d807a516 OptimaBarDropdown: support showFaded with 67% opacity 2025-09-29 07:39:15 -07:00
Enrico Ros aad715f7e1 Placeholder: mx 1.5 on op chip 2025-09-29 07:39:15 -07:00
Enrico Ros f1ec94111a FormInputKey: fix min width on mobile 2025-09-18 16:22:54 -07:00
Enrico Ros 07fcecc5b5 DebouncedInput: support startDecorator 2025-09-18 16:22:41 -07:00
Enrico Ros c56328009e Models-modal: Wizard uses the same autocomplete pattern 2025-09-17 12:39:27 -07:00
Enrico Ros f8cbb6faa2 Models-modal: shift + add -> wizard 2025-09-17 12:39:22 -07:00
Enrico Ros c07eb4014f FormInputKey: support for tooltips 2025-09-17 12:39:18 -07:00
Enrico Ros 94c1b35cee FormInputKey: bind to a username to help pass managers remember 2025-09-17 12:39:14 -07:00
Enrico Ros 2277fd0880 FormInputKey: smaller keys 2025-09-17 12:39:14 -07:00
Enrico Ros a2313186e4 FormInputKey: autocomplete on 2025-09-17 12:39:14 -07:00
Enrico Ros 3351d61ca7 LLMs: Gemini: update models 2025-09-17 01:11:31 -07:00
Enrico Ros 905d438075 Void fragments: render follow-up placeholders 2025-09-16 12:49:13 -07:00
Enrico Ros ba3290f4e1 animationSpinHalfPause 2025-09-16 12:48:51 -07:00
Enrico Ros a828ea45aa BlockPartPlaceholder: render model messages 2025-09-16 12:40:32 -07:00
Enrico Ros 7c484ea5d8 BlockPartPlaceholder: render model messages 2025-09-16 11:37:16 -07:00
Enrico Ros 5b68608d5b AIX: fragment-compatible support of model ops 2025-09-16 11:36:21 -07:00
Enrico Ros 08ef5396f3 Placeholder: don't draw the datastreamviz if we have a model placeholder 2025-09-16 10:46:52 -07:00
Enrico Ros 662ef5ae4f Fragments: support model placeholders 2025-09-16 10:45:59 -07:00
Enrico Ros 23a1e9b335 DLLM/AIX: Image generation options 2025-09-16 10:16:13 -07:00
Enrico Ros b79f8a1508 Roll packages 2025-09-16 00:23:19 -07:00
Enrico Ros a793fa041e LLMs: update scores from Sep 8, 2025 2025-09-16 00:20:53 -07:00
Enrico Ros c5ef92f1f7 Bump AIX 2025-09-15 23:36:45 -07:00
Enrico Ros 7ccf22c2f4 LLMS/AIX: OpenAI Image Generation Tool 2025-09-15 23:24:00 -07:00
Enrico Ros 67df71ab45 Doc: LLMS/AIX: parameters system 2025-09-15 22:50:43 -07:00
Enrico Ros 0636ca76ea AIX: OpenAI Responses: built-in image_generation_call config and output 2025-09-15 22:24:13 -07:00
Enrico Ros 2f2e4e36be AIX: OpenAI Responses: handle web_search_calls even for obscure types 2025-09-15 19:25:59 -07:00
Enrico Ros 913c821eae AIX: OpenAI Responses: fix different 'refusal' naming :/ 2025-09-15 19:25:59 -07:00
Enrico Ros 43f2bacf58 AIX: OpenAI Responses: response object stub definitions 2025-09-15 19:25:59 -07:00
Enrico Ros ae0cf1a89e AIX: OpenAI Responses: high quality citation output 2025-09-15 19:25:59 -07:00
Enrico Ros 0b2d037385 AIX: OpenAI Responses: web search results transmission 2025-09-15 19:25:59 -07:00
Enrico Ros cd5cef1c25 AIX: Placeholder supports (pre beginning of the real content, then done) 2025-09-15 19:25:59 -07:00
Enrico Ros 8c5f70a339 AIX: OpenAI Responses: Web Search: handle web search results too 2025-09-15 19:25:59 -07:00
Enrico Ros f5ecfd1d74 Annotations: copy citations as text or table 2025-09-15 19:25:59 -07:00
Enrico Ros e0de908741 LLMs: document search off 2025-09-15 16:31:25 -07:00
Enrico Ros cd2ccff0d7 Annotations: improve style for regular citations 2025-09-15 16:24:07 -07:00
Enrico Ros 3df6c62dce CloseablePopup: support custom boxShadow 2025-09-15 13:19:15 -07:00
Enrico Ros 463af67d17 Typo 2025-09-15 13:18:58 -07:00
Enrico Ros 80f345b179 LLM Attachment Drafts: has images 2025-09-13 12:17:50 -07:00
Enrico Ros bf212ca83f AIX: Client: Auto-Index of images excludes caption 2025-09-13 12:17:50 -07:00
Enrico Ros 0185712cbf AIX: Client: Auto-index for Images on System Messages 2025-09-13 01:28:28 -07:00
Enrico Ros a5199a23d9 AIX: Support for Images on System Messages 2025-09-13 00:52:54 -07:00
Enrico Ros 011c382360 AIX: Gemini: note on Image ordering 2025-09-13 00:52:54 -07:00
Enrico Ros 5c9ce84249 AIX: Content.SystemInstruction: support InlineImage parts 2025-09-13 00:52:54 -07:00
Enrico Ros 9e89ba9b10 ImageAttachmentFragments: support renderVariant, for rendering from the PersonasEditData 2025-09-13 00:52:54 -07:00
Enrico Ros cb8cefb0ea Attachment Menu: zIndex over modal for when used on the Persona Edit modal 2025-09-13 00:52:54 -07:00
Enrico Ros 7607b8fec5 ChatMessage: render system images below user images 2025-09-13 00:52:54 -07:00
Enrico Ros 05a96c5aca Fragment buckets: use correct classifier
# Conflicts:
#	src/common/stores/chat/hooks/useFragmentBuckets.ts
2025-09-13 00:49:27 -07:00
Enrico Ros 762b0c11ff AppChat: small comment 2025-09-12 17:49:42 -07:00
Enrico Ros c903f9bc5c LLMs: OpenAI: [DEV] models delta code 2025-09-12 17:49:28 -07:00
Enrico Ros c190ae89ce LLMs: OpenAI: remove older models 2025-09-12 17:49:28 -07:00
Enrico Ros 1b6b491eee LLMs: OpenAI: add new Audio and Realtime models descs 2025-09-12 16:45:17 -07:00
Enrico Ros 4e9c0ba489 LLMs: OpenAI: remove older 4o-realtime models 2025-09-12 16:45:16 -07:00
Enrico Ros 13fcb932d1 LLMs: OpenAI: prevent older realtime models from showing 2025-09-12 16:45:16 -07:00
Enrico Ros f9f2c3d2b2 AIX: LLM_IF_Outputs_Audio replaces needs_audio 2025-09-12 16:36:38 -07:00
Enrico Ros bdab75c336 LLMs: OpenAI: deprecated/shut down some models 2025-09-12 16:36:38 -07:00
Enrico Ros 5996934f60 AIX: OpenAI chatGenerate: enable audio modalities 2025-09-12 16:36:37 -07:00
Enrico Ros 2f8659fc38 AIX: OpenAI chatGenerate: ignore pure-obfuscation messages 2025-09-12 16:29:00 -07:00
Enrico Ros 1e1206ab7e Merge branch 'fork/powyncify/v2-dev-paulshort' into v2-dev
# Conflicts:
#	.claude/settings.local.json
#	src/modules/aix/server/dispatch/chatGenerate/adapters/openai.responsesCreate.ts
2025-09-12 14:27:31 -07:00
Enrico Ros 4682afc985 More permissions 2025-09-12 14:26:31 -07:00
Enrico Ros 8722e1be6c Azure: update docs 2025-09-12 14:26:31 -07:00
Enrico Ros fbd6fd3e7c LLMs: Azure: Disable the web search option from the model description (configurable by code) 2025-09-12 14:19:20 -07:00
Enrico Ros cfba3ce834 Azure: update env var docs 2025-09-12 14:19:20 -07:00
Enrico Ros a4ad1e8295 Azure: remove description of the fix for #828, now it's merged 2025-09-12 14:19:20 -07:00
Enrico Ros aa441b0656 LLMs: Azure OpenAI: cleaned up (and moved) azureOpenAIAccess - simpler and modularized code 2025-09-12 14:19:20 -07:00
Enrico Ros 39a7e30880 AIX: Azure OpenAI: renamed Env Vars 2025-09-12 13:58:23 -07:00
Enrico Ros 74b69f9ea4 AIX: Azure OpenAI: verified web_search_preview not present yet 2025-09-12 13:57:49 -07:00
Enrico Ros 3094540b93 LLMs: misc bits 2025-09-12 13:24:30 -07:00
Enrico Ros 513500b16e LLMs: extract access response type 2025-09-12 13:23:52 -07:00
Enrico Ros 51c41473a5 Azure: remove path from the example 2025-09-12 12:47:09 -07:00
Enrico Ros e79df4a347 AIX: Azure/Request API: improve web search tool use and exceptions 2025-09-12 11:21:51 -07:00
Enrico Ros 53a4a66e9e AIX: OpenAI Responses: support temperature in ChatGPT-5 2025-09-12 10:57:53 -07:00
Enrico Ros aaf2de278f OpenAI: move and rename ChatGPT-NR 2025-09-12 10:57:53 -07:00
Enrico Ros d2e8bad75f OpenAI: decorate LLMs that search 2025-09-12 10:57:53 -07:00
Enrico Ros 98bcbba7ca useLLMSelect: option to configure models at the bottom, and optimize styles 2025-09-12 10:57:02 -07:00
Enrico Ros 61258163e2 Optimize FormSliderControl by 1000x 2025-09-11 16:14:17 -07:00
Enrico Ros 80b393ca14 AIX: improve llm user config replacement/override 2025-09-11 14:02:58 -07:00
Enrico Ros b57c292581 LLMs: bits 2025-09-11 14:02:03 -07:00
Enrico Ros 044e2f9b57 ModelConfiguration: explicit 'undefined' modelParameters 2025-09-11 14:01:47 -07:00
Enrico Ros b14e9c91c6 Speech Recognition: add dispose (unmounts) as a one-way street 2025-09-11 14:01:40 -07:00
Enrico Ros 58fe41edc3 OpenAI Verbosity: llm params editor 2025-09-11 13:59:01 -07:00
Enrico Ros 73a089e177 OpenAI Verbosity: models 2025-09-11 13:58:57 -07:00
Enrico Ros ada9e07c2f OpenAI Verbosity: AIX & llms 2025-09-11 13:58:55 -07:00
Enrico Ros 3b9e42948e More permissions 2025-09-11 13:58:04 -07:00
Enrico Ros 2e822b1eeb Common: KeyStroke: color support 2025-09-04 15:11:22 -07:00
Enrico Ros 8f67c3e398 Common: KeyStroke: pass onClick 2025-09-04 15:11:21 -07:00
Enrico Ros 82289c0564 More permissions 2025-09-03 17:55:32 -07:00
Enrico Ros 16e5e08d21 Roll packages, but hold back Zustand 2025-09-01 17:45:49 -07:00
Enrico Ros 62671ae04f AIX: improve error messaging of captive portals and requests too large 2025-08-21 14:28:22 -07:00
Enrico Ros 266a5c6408 CloseablePopup: prevent keystrokes from being intercepted by MenuItems's accelerators 2025-08-17 14:12:44 -07:00
Enrico Ros e9264c782f ErrorBoundary: support link 2025-08-17 14:11:59 -07:00
Enrico Ros 37eb046c10 Optima: Page heading: ellipsize 2025-08-17 14:11:41 -07:00
Enrico Ros 6e75f7dbee OpenAI: abortable create/edit image requests 2025-08-17 14:10:49 -07:00
Enrico Ros e420fa9661 Server-side fetchers: note the abort path 2025-08-17 14:10:34 -07:00
Enrico Ros 505649e360 OptimaPanelGroupedList: any title 2025-08-17 14:09:51 -07:00
paulhshort 3d93c856ba Fix Azure OpenAI web_search_preview tool incompatibility
Azure OpenAI doesn't support the web_search_preview tool, which was causing
"Hosted tool 'web_search_preview' is not supported" errors with GPT-5 models.

## Changes:
- Pass dialect information to aixToOpenAIResponses function
- Skip web_search_preview tool addition when dialect is 'azure'
- Add logging when web search is skipped for Azure
- Document known Azure limitations in implementation guide

## Impact:
- Fixes web browsing errors with Azure GPT-5 models
- Maintains web search functionality for regular OpenAI models
- Provides clear logging for debugging

This is a critical fix for Azure OpenAI compatibility as web search is not
currently supported on Azure's Responses API implementation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 20:07:19 -04:00
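The dialect gating described in the commit above can be sketched roughly as follows. This is a minimal illustration only: `OpenAIDialect`, `ResponsesTool`, and `gateWebSearchTool` are hypothetical names, not the repository's actual types or functions.

```typescript
// Hypothetical sketch of skipping web_search_preview for the 'azure' dialect.
// Names and shapes are illustrative, not the repo's real implementation.
type OpenAIDialect = 'openai' | 'azure';

type ResponsesTool = { type: 'web_search_preview' } | { type: string };

function gateWebSearchTool(tools: ResponsesTool[], dialect: OpenAIDialect): ResponsesTool[] {
  if (dialect !== 'azure')
    return tools;
  // Azure's Responses API rejects the hosted web_search_preview tool, so drop it and log once.
  const filtered = tools.filter((t) => t.type !== 'web_search_preview');
  if (filtered.length !== tools.length)
    console.log('[azure] skipping web_search_preview: not supported by Azure Responses API');
  return filtered;
}
```

Regular OpenAI requests pass through unchanged, so web search keeps working outside Azure.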
paulhshort 9fe5697fd4 Fix Azure OpenAI Resource Not Found errors for GPT-5 and o3 models (#828)
This commit addresses GitHub issue #828 by fixing URL construction for Azure OpenAI's Responses API
and preventing malformed URLs from client configuration issues.

## Problems Fixed:
1. Host normalization: Prevents malformed URLs when client config includes paths/queries
2. API paradigm support: Properly handles Azure's next-gen v1 Responses API
3. API version consistency: Centralizes version management with env overrides

## Key Changes:
- Normalize Azure host URLs to origin only (strip path/query)
- Prefer server environment variables over client-provided hosts
- Add special handling for Responses API (/openai/v1/responses)
- Support both traditional (deployment-based) and v1 API paradigms
- Add configurable API versions via environment variables
- Include debug logging for API paradigm selection

## New Environment Variables:
- AZURE_API_V1: Enable next-gen v1 API explicitly
- AZURE_RESPONSES_API_VERSION: Control Responses API version
- AZURE_CHAT_API_VERSION: Control Chat Completions API version
- AZURE_DEPLOYMENTS_API_VERSION: Control deployments listing API version

## Testing:
Validated with Azure OpenAI endpoint showing:
- List Deployments: Works
- Chat Completions: Works (with correct params for GPT-5)
- Responses API (v1): Works with /openai/v1/responses?api-version=preview
- Responses API (traditional): 404 (Azure doesn't support this pattern)

The fix defaults to using Azure's recommended next-gen v1 API for Responses
while maintaining backward compatibility for existing deployments.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-08-11 19:37:41 -04:00
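The host normalization and v1 URL construction described in the commit above can be illustrated like this. Function names are assumptions for the sketch, not the repository's actual code; it only shows the origin-stripping idea via the standard URL API.

```typescript
// Illustrative sketch: keep only the origin of a client-supplied Azure host,
// discarding any path, query, or fragment that would otherwise malform the URL.
function normalizeAzureHost(rawHost: string): string {
  const withScheme = rawHost.includes('://') ? rawHost : `https://${rawHost}`;
  return new URL(withScheme).origin;
}

// The server then appends the next-gen v1 Responses path itself, e.g.:
function azureResponsesUrl(host: string, apiVersion: string): string {
  return `${normalizeAzureHost(host)}/openai/v1/responses?api-version=${encodeURIComponent(apiVersion)}`;
}
```

With this, a misconfigured host such as `https://myres.openai.azure.com/openai/deployments?x=1` still resolves to the clean `/openai/v1/responses?api-version=preview` endpoint.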
Enrico Ros 7fde3473ea BlobUtils: export direct 2025-08-09 14:12:52 -07:00
Enrico Ros 56a2d68c71 GPT-5: add Markdown Restore option, like the 'o' models 2025-08-08 16:48:11 -07:00
Enrico Ros 3d140604f8 Roll AIX 2025-08-08 16:48:11 -07:00
Enrico Ros 0a2167fa6a AIX: OpenAI: GPT-5 markdown restoration with the Responses API 2025-08-08 16:47:36 -07:00
Enrico Ros 79e95379ec AIX: OpenAI Chat Completions: GPT-5 as an 'o' model 2025-08-08 16:36:40 -07:00
Enrico Ros 3f740f3800 roll packages 2025-08-08 16:01:04 -07:00
Enrico Ros a4a0ecc0e5 LLMs: reset user params 2025-08-07 19:47:32 -07:00
Enrico Ros 686ad2ed7b FormControls size setting 2025-08-07 19:47:32 -07:00
Enrico Ros d8db79b4e5 AIX: OpenAI does not support Web Search with reasoning:minimal 2025-08-07 14:44:17 -07:00
Enrico Ros d33449f4af GPT-5: support 'minimal' reasoning 2025-08-07 14:37:32 -07:00
Enrico Ros 7e6a12bddf Roll AIX 2025-08-07 14:37:11 -07:00
Enrico Ros ec80413be0 AIX: vnd.oai.reasoning:minimal support 2025-08-07 14:34:54 -07:00
Enrico Ros c2af65facd GPT 5 naming 2025-08-07 13:38:10 -07:00
Enrico Ros 46266ac825 Rename search size to web search 2025-08-07 13:38:10 -07:00
Enrico Ros 91dc25e1c6 OpenAI: GPT5 models (configurable reasoning & search) 2025-08-07 13:38:10 -07:00
Enrico Ros 7f9dafd749 Roll AIX 2025-08-06 15:26:19 -07:00
Enrico Ros 989d0e5741 Anthropic: 4.1 > 4 2025-08-06 15:24:54 -07:00
Enrico Ros 3277c009fa Composer: change tips 2025-08-06 07:57:40 -07:00
Enrico Ros 85f1fe088d Fix models description 2025-08-05 21:37:03 -07:00
Enrico Ros 3c554c92d6 Anthropic: update models 2025-08-05 20:20:42 -07:00
Enrico Ros f95d071197 Anthropic: fix dMessageUtils 2025-08-05 20:20:42 -07:00
Enrico Ros da887d58db Incognito: improve chat drawer icon 2025-08-05 20:20:42 -07:00
Enrico Ros 5273a3c84f ClosablePopup: support size 2025-08-05 20:13:38 -07:00
Enrico Ros f51712867f Folders: feed items count (commented) 2025-08-05 20:13:38 -07:00
Enrico Ros ecac1dffec Folders: support for items count, if any 2025-08-05 20:13:37 -07:00
Enrico Ros 28817bee72 Folders: only drag on the icon 2025-08-05 20:04:04 -07:00
Enrico Ros 3fd41329ea ChatDrawer: improve stepping of the progressive limiter 2025-08-05 20:04:04 -07:00
Enrico Ros f734f0b5f7 ChatDrawer: limit items (ListItems are on the heavy side) 2025-08-04 23:55:37 -07:00
Enrico Ros 3b34a8b96d Fix blob conversion types with newer TS 2025-08-04 14:19:52 -07:00
Enrico Ros 74e6ee4b2d Posthog: improve config (if/when enabled) 2025-08-04 14:15:45 -07:00
Enrico Ros 61929527a3 Roll packages: tRPC, zod, zustand, typescript 2025-08-04 14:13:50 -07:00
Enrico Ros a3e216c956 Logger: max 200 entries 2025-08-02 19:20:37 -07:00
Enrico Ros d4203f728e Nicer replacement of ph when interrupted 2025-08-01 15:34:11 -07:00
Enrico Ros 616376f4ac BackupRestore: Flash: vendor prefix 2025-08-01 14:45:49 -07:00
Enrico Ros 1a309c9bdf BackupRestore: Flash: change schema/version 2025-08-01 14:36:07 -07:00
Enrico Ros 253fc3b213 Approximate Tokenization - optimized 2025-08-01 14:27:05 -07:00
Enrico Ros a79fd0a10c Approximate Tokenization - default on new Mobile installs 2025-08-01 14:26:26 -07:00
Enrico Ros 04df3dcba8 Butter-smooth decimator with deadlines 2025-07-31 11:33:13 -07:00
Enrico Ros 00fbf77dbd Raise update frequency due to optimizations 2025-07-31 09:07:47 -07:00
Enrico Ros 9a34c1e376 References (to Image Assets): restore places that assumed 'image_ref' 2025-07-30 18:06:40 -07:00
Enrico Ros e248104d4b Reference: embed a ref summary 2025-07-30 17:11:22 -07:00
Enrico Ros c10558f230 Reference: cleanup migrated fragments 2025-07-30 16:26:41 -07:00
Enrico Ros 5be41b8199 References: add support (migration, creation (attaching, aix.reassembler, t2i , ego-msg), gc dblobs collection, render image, render button icon, dereference, bucketing, token count) with exhaustive checks 2025-07-30 16:09:05 -07:00
Enrico Ros d6b6e30cf5 Fragments: References 2025-07-30 14:52:26 -07:00
Enrico Ros 825ca7ba87 Fragments: soft-track the originId for multi-origin fragments in messages 2025-07-30 10:26:40 -07:00
Enrico Ros 5c2a8a4996 Tools: fragment type discrimination 2025-07-30 06:49:18 -07:00
Enrico Ros 58aef2a97d Don't transmit ui-side unhandled exceptions (the handler shall be there already) 2025-07-29 16:10:13 -07:00
Enrico Ros e983f9d8a9 Domain for errors 2025-07-29 16:09:46 -07:00
Enrico Ros 7e95dcc1cb PDF: roll pdfjs 2025-07-29 16:03:03 -07:00
Enrico Ros 69a21a82ac PDF: prevent loading issues 2025-07-29 15:56:17 -07:00
Enrico Ros 95d2fee63d Paste.gg: fully removed, the website seems up no more 2025-07-29 13:08:01 -07:00
Enrico Ros 6f22a71555 PostHog: debug valid client-side errors (if Error is provided) 2025-07-29 12:51:56 -07:00
Enrico Ros a30409fcfc Stacked bar: improve 2025-07-29 08:11:24 -07:00
Enrico Ros 217346f572 BackupRestore: partial export (no/settings) 2025-07-29 07:20:38 -07:00
Enrico Ros 4472671470 Stacked bar: support node titles 2025-07-28 16:07:21 -07:00
Enrico Ros b1a026bdd1 Prodia: remove for now as the API keeps changing and is not a good default for our users - may put this back anytime (#786) 2025-07-26 11:06:34 -07:00
Enrico Ros 312fae5f6d OpenAI: T2I: Auto-select the latest model 2025-07-26 10:53:18 -07:00
Enrico Ros 46235aa28a T2I: Auto-select the provider 2025-07-26 10:53:07 -07:00
Enrico Ros 6fe0e297eb Remove Prodia image generation - does not seem to be up to the quality par 2025-07-26 08:46:49 -07:00
Enrico Ros 3b3214ef5e Roll packages 2025-07-26 06:21:20 -07:00
Enrico Ros ebc28ed8a4 Entangled: add DMessage metadata 2025-07-22 07:42:37 -07:00
Enrico Ros 54c23a9907 Update packages 2025-07-21 14:14:55 -07:00
Enrico Ros 8fce40be80 PostHog: revert changes to the Edge route (details inside)
Build fails with:
./node_modules/posthog-node/lib/edge/index.mjs
Module not found: Can't resolve 'crypto' in '/vercel/path0/node_modules/posthog-node/lib/edge'
2025-07-21 14:08:13 -07:00
Enrico Ros 5c5a213c4c PostHog errors: cleanups 2025-07-21 13:28:20 -07:00
Enrico Ros 3b730680cb PostHog errors: cloud/edge routes 2025-07-21 13:28:20 -07:00
Enrico Ros d7765ae578 PostHog errors: add node/edge utility functions 2025-07-21 13:28:19 -07:00
Enrico Ros ab21d5c308 PostHog errors: add node/edge package 2025-07-21 13:28:18 -07:00
Enrico Ros 392319a300 PostHog errors: remove error capture 2025-07-21 13:28:17 -07:00
Enrico Ros bee5f950b9 PostHog: client-side on ErrorBoundary (escape the auto capture) 2025-07-21 12:42:42 -07:00
Enrico Ros 3fc1e3f643 Show the load progress past 500ms 2025-07-21 09:47:57 -07:00
Enrico Ros ee10b39866 PostHog: reduce config 2025-07-21 08:47:37 -07:00
Enrico Ros 867265fd31 PostHog: verbosity off 2025-07-21 08:19:20 -07:00
Enrico Ros 68109a4a37 PostHog: integrate in build 2025-07-21 08:08:57 -07:00
Enrico Ros 874401ef8c PostHog: add cli/nextjs conf 2025-07-21 07:59:11 -07:00
Enrico Ros 303a3f2c7d New react-player 3 2025-07-19 13:15:01 -07:00
Enrico Ros 915f338378 Deep roll 2025-07-19 12:24:47 -07:00
Enrico Ros fd2e1fe34b Roll packages 2025-07-19 12:23:18 -07:00
Enrico Ros e2e7453431 Roll zod@4 2025-07-19 12:22:17 -07:00
Enrico Ros b07573ec4f chat.message: nit 2025-07-17 11:13:53 -07:00
Enrico Ros 66c279e895 Telephone: fix for cancel 2025-07-17 11:06:36 -07:00
Enrico Ros 06e879b884 storageUtils: don't keep requesting permission once granted 2025-07-16 12:19:22 -07:00
Enrico Ros f205dafe4d AppChat: lazy-load modals 2025-07-16 10:48:19 -07:00
Enrico Ros fec18d7039 ModelsModals: extract auto-open 2025-07-16 10:06:30 -07:00
Enrico Ros 5ef09455da ModelsModals: lazy load 2025-07-16 10:06:12 -07:00
Enrico Ros c799869e3b ModelsModals: extract the configurator modal 2025-07-16 10:01:02 -07:00
Enrico Ros 448f5a85d0 LLMs: centralize icons 2025-07-16 09:35:46 -07:00
Enrico Ros 9909a537c2 Modals: lazy load Settings 2025-07-16 09:16:45 -07:00
Enrico Ros 9772a18bf4 LLMs: centralize config UI into 1 function 2025-07-16 08:59:30 -07:00
Enrico Ros 0ac80b26bd Gemini: remove unused options 2025-07-16 08:59:30 -07:00
Enrico Ros 1f5e25a57b Deep roll packages 2025-07-16 08:36:14 -07:00
Enrico Ros 8e5f7ef977 Roll safe packages 2025-07-16 08:24:19 -07:00
Enrico Ros ed21c8affd Logger: improve console output snr 2025-07-15 16:37:19 -07:00
Enrico Ros 023228c2c5 AnimUtils: add opacity pulse 2025-07-14 15:16:56 -07:00
Enrico Ros 68f4118bde AIX: needs a roll for the xAI changes 2025-07-11 17:20:22 -07:00
Enrico Ros 0edc839857 xAI: final touches, it's good now 2025-07-11 17:04:04 -07:00
Enrico Ros ee6f560388 xAI: perfect search 2025-07-11 16:48:42 -07:00
Enrico Ros c100355b7b xAI: models: define search support 2025-07-11 16:47:46 -07:00
Enrico Ros 4f7402c343 xAI: support Live Search on X, Web, etc. 2025-07-11 16:13:21 -07:00
Enrico Ros 5ac73e9599 xAI: update Grok models 2025-07-11 15:30:45 -07:00
Enrico Ros c1e46e00d9 Improve looks of pure-markdown pre code blocks 2025-07-10 23:18:01 -07:00
Enrico Ros 7a05f0f9ab Exclude unused 2025-07-10 20:47:47 -07:00
Enrico Ros afcd511893 More permissions 2025-07-09 17:12:31 -07:00
Enrico Ros 8f42900e8e PerfUtils: interval annotation function 2025-07-09 07:21:00 -07:00
Enrico Ros bcc12876d7 Optima Pane Peek: 25% faster enter 2025-07-08 21:32:49 -07:00
Enrico Ros e1c2f85bda Beam: change count looks 2025-07-07 09:43:45 -07:00
Enrico Ros 6989a807d6 InlineTextArea: stop event propagation on 'esc' 2025-07-06 18:57:56 -07:00
Enrico Ros d92739c793 Remove warning 2025-07-03 19:07:54 -07:00
Enrico Ros 2fcb80b932 Logger: prevent localStorage overflow 2025-07-02 15:14:57 -07:00
Enrico Ros 03b0e88ef7 ChatMessageList: fragment replace as done 2025-07-01 19:20:24 -07:00
Enrico Ros a5a73ddbef Release: const dev build 2025-07-01 14:29:52 -07:00
Enrico Ros eb57147ed3 BackupRestore: improve DB schema restore 2025-07-01 09:08:50 -07:00
Enrico Ros 0cf12d2a8f Rename Chat Bars 2025-07-01 08:49:52 -07:00
Enrico Ros 06d332e785 Restore: selective restore 2025-07-01 08:37:49 -07:00
Enrico Ros a75eaaec69 Bubble: cut tool 2025-06-29 06:58:59 -07:00
Enrico Ros 513ee36027 LLMs: Gemini: update visibilities 2025-06-27 11:09:52 -07:00
Enrico Ros 975f425ae4 LLMs: Gemini: update models 2025-06-27 11:05:45 -07:00
Enrico Ros c310ca9c5c LLMs: Gemini: remove obsolete 2025-06-27 10:41:39 -07:00
Enrico Ros 21a6f0aa50 LLMs: Gemini: update pricing 2025-06-27 10:41:31 -07:00
Enrico Ros c2c3fdf7d4 AIX: Gemini: update parser (thinking) 2025-06-27 10:41:00 -07:00
Enrico Ros ce0880bf5b AIX: OpenAI Responses: support for item done (web_search_call)
Still missing: annotation support, web search step reporting, sequence machine on the 3 new added events
2025-06-27 00:23:54 -07:00
Enrico Ros eed099bfed AIX: OpenAI Responses: text annotations (not implemented yet) 2025-06-27 00:19:07 -07:00
Enrico Ros 08b37efb55 AIX: OpenAI Responses: support web_search_call events 2025-06-27 00:07:25 -07:00
Enrico Ros 8443445ed0 AIX: OpenAI Responses: support the web_search_preview Tool 2025-06-27 00:07:24 -07:00
Enrico Ros d011599060 OpenAI: add Deep-Research o3 and o4-mini 2025-06-26 23:13:55 -07:00
Enrico Ros 0dd043cb6a Zod: tree-shakeable 2025-06-26 15:51:48 -07:00
Enrico Ros 1ebd1d9e15 Zod-4: reduce deprecated 2025-06-26 12:47:33 -07:00
Enrico Ros 202aef8916 tRPC use stable httpBatchStreamLink 2025-06-26 12:46:38 -07:00
Enrico Ros 30acf51410 Lints 2025-06-26 12:46:25 -07:00
Enrico Ros d4b01398c7 Remove zod-to-json-schema 2025-06-26 12:01:40 -07:00
Enrico Ros 4dde3d0fe7 Zod-4: Migrate 2025-06-26 12:01:40 -07:00
Enrico Ros 8aa6fd7c8e Zod-4: for JSON schema 2025-06-26 12:01:26 -07:00
Enrico Ros e2e6e6d641 Zod: qualify records 2025-06-26 12:00:59 -07:00
Enrico Ros 20aa91b9a6 Link: change the dataObject to any, from passthrough 2025-06-26 12:00:59 -07:00
Enrico Ros 7bfd82ae4f AIX: OpenAI: remove unnecessary default 2025-06-26 12:00:59 -07:00
Enrico Ros c5101ee4cf LLMs: remove unnecessary validation 2025-06-26 11:56:31 -07:00
Enrico Ros 378f390941 AIX: FC: convert to z.json() where not string (DEF, gemini Call/Res).
Annotate FC-DEF/FC/FC-R
2025-06-26 11:56:31 -07:00
Enrico Ros 3bc8360959 AIX: Anthropic fix 2025-06-26 11:56:31 -07:00
Enrico Ros af124e7cd9 Roll packages deep 2025-06-26 09:27:27 -07:00
Enrico Ros 71633ff441 Roll packages 2025-06-26 09:13:20 -07:00
Enrico Ros daf2e58c99 Mistral: turn off gaps debug 2025-06-25 15:38:41 -07:00
Enrico Ros 3818af2156 Mistral: full auto-spec of models 2025-06-25 15:32:43 -07:00
Enrico Ros dd0fd2edcf AIX: fw-compatible check deprio 2025-06-25 13:14:57 -07:00
Enrico Ros 07304c6d0e AIX: OpenAI: Responses: Tools check 2025-06-25 13:02:50 -07:00
Enrico Ros 4db1708fae LLMs: OpenAI: enable o3-pro, o1-pro, codex-mini 2025-06-25 12:54:42 -07:00
Enrico Ros 0952926265 AIX: OpenAI: Responses: parser NS/S complete
NOTE: check the console for ANY log. We don't throw to complete requests, but we make
large assumptions on the ordering/sequencing of events.
2025-06-25 12:53:19 -07:00
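The "log, don't throw" stance in the note above can be sketched with a tiny sequence guard. The event names and the `checkSequence` helper are hypothetical, chosen only to show the idea of surfacing ordering assumptions in the console while letting the request complete.

```typescript
// Hypothetical lenient sequence guard: unexpected event orderings are collected
// as warnings (for console logging) instead of being thrown as errors.
type ResponseEvent = 'response.created' | 'response.output_text.delta' | 'response.completed';

const expectedAfter: Record<ResponseEvent, ResponseEvent[]> = {
  'response.created': ['response.output_text.delta', 'response.completed'],
  'response.output_text.delta': ['response.output_text.delta', 'response.completed'],
  'response.completed': [],
};

function checkSequence(events: ResponseEvent[]): string[] {
  const warnings: string[] = [];
  for (let i = 1; i < events.length; i++)
    if (!expectedAfter[events[i - 1]].includes(events[i]))
      warnings.push(`unexpected ${events[i]} after ${events[i - 1]}`);
  return warnings; // surfaced via console, never thrown
}
```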
Enrico Ros a695484921 AIX: OpenAI: Responses: wires 2025-06-25 12:52:12 -07:00
Enrico Ros 55c3eb4cf0 AIX: OpenAI: Responses: dispatch fixes 2025-06-25 09:19:02 -07:00
Enrico Ros 8e42356956 Metrics: include dtAll 2025-06-25 09:19:02 -07:00
Enrico Ros 255ef64b37 AIX: roll AIX 2025-06-24 22:26:25 -07:00
Enrico Ros e3f1307b30 LLMs: OpenAI: change pSpecs for restore markdown 2025-06-24 22:25:57 -07:00
Enrico Ros 93beda7fff AIX: OpenAI: Responses NS parser 2025-06-24 22:21:21 -07:00
Enrico Ros 91251985db AIX: OpenAI: responses interfaces 2025-06-24 19:29:21 -07:00
Enrico Ros b41cb74f45 AIX: response API dispatch switch 2025-06-24 19:23:31 -07:00
Enrico Ros 303b90d1ee AIX: response API dispatcher 2025-06-24 19:20:57 -07:00
Enrico Ros 86f80a320d AIX: response API model annotations 2025-06-24 19:20:44 -07:00
Enrico Ros d4e158a8b6 OpenAI Responses: Wires 2025-06-24 19:18:01 -07:00
Enrico Ros f58eae623a ERC: fix overflow 2025-06-24 13:38:02 -07:00
Enrico Ros bc5493ed50 Zero State models 2025-06-24 10:42:02 -07:00
Enrico Ros 4e51f26ef2 RenderCode: fixed line numbers 2025-06-24 07:44:55 -07:00
Enrico Ros 04226eb686 Attract to Model Config 2025-06-23 12:43:00 -07:00
Enrico Ros f9743fd04b GoodModal: options to not react on Backdrop or Escape closes 2025-06-23 11:38:23 -07:00
Enrico Ros b9746ef100 GoodModal: fix the drag-closes issue
When clicking inside the dialog and dragging on the backdrop, the dialog would
close.

Now we only close if the press was also initiated on the backdrop, not within the dialog.
2025-06-23 11:33:43 -07:00
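The drag-close fix above boils down to remembering where the pointer went down. A minimal sketch, with illustrative names rather than GoodModal's real implementation:

```typescript
// Sketch of the drag-close guard: a backdrop mouseup only dismisses the dialog
// when the press also began on the backdrop, so a drag that starts inside the
// dialog and ends on the backdrop can never close it.
class BackdropCloseGuard {
  private downOnBackdrop = false;

  onMouseDown(targetIsBackdrop: boolean): void {
    this.downOnBackdrop = targetIsBackdrop;
  }

  // Returns true only when both press and release happened on the backdrop.
  onMouseUp(targetIsBackdrop: boolean): boolean {
    const close = this.downOnBackdrop && targetIsBackdrop;
    this.downOnBackdrop = false;
    return close;
  }
}
```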
Enrico Ros 92e56c3c84 Lint 2025-06-23 09:38:42 -07:00
Enrico Ros aa134d7f21 Fix build 2025-06-23 09:38:21 -07:00
Enrico Ros f2bea1867c Perplexity: early variants support (disabled) 2025-06-23 09:18:37 -07:00
Enrico Ros a55acf5146 Perplexity: fix first response 2025-06-23 09:10:22 -07:00
Enrico Ros 869b9b994d Perplexity: enable search context size 2025-06-23 08:27:15 -07:00
Enrico Ros 93fca32e9a Move Date Range config 2025-06-23 08:26:58 -07:00
Enrico Ros 1d7dfd53f4 Improve Search Context config 2025-06-23 08:26:48 -07:00
Enrico Ros a68f35d909 Citations: render date 2025-06-23 07:59:13 -07:00
Enrico Ros f800639e1a Citations: add date to Fragments 2025-06-23 07:54:59 -07:00
Enrico Ros ed45a01267 Perplexity: fix system-only message, or assistant-before-user 2025-06-23 07:39:50 -07:00
Enrico Ros b0634e272d Perplexity: improved search results support (with title) 2025-06-23 07:25:12 -07:00
Enrico Ros d90f012140 Perplexity: reorder citations to not interrupt the first reasoning block 2025-06-23 07:14:26 -07:00
Enrico Ros 41363a534f Perplexity: update models 2025-06-23 07:00:53 -07:00
Enrico Ros 44d53e581b Perplexity: add Academic research and range 2025-06-23 07:00:44 -07:00
Enrico Ros 5aeb034945 OAI: fix spell 2025-06-23 06:33:06 -07:00
Enrico Ros 13a95db7a4 ElevenLabs: return the buffer, optionally 2025-06-22 13:00:05 -07:00
Enrico Ros 1705461e80 Logger: deduplicate 2025-06-21 18:59:34 -07:00
Enrico Ros 3fa7d61c7e ElevenLabs: return play status 2025-06-21 09:50:07 -07:00
Enrico Ros 0b8268fea3 ChatMessage: imperative handle 2025-06-20 16:41:57 -07:00
Enrico Ros 22ffc74371 Drawer: option to pin it 2025-06-20 15:49:37 -07:00
Enrico Ros 31edb6a881 RenderCode: nowrap 2025-06-18 07:52:15 -07:00
Enrico Ros b8245095c9 CleanerMessage: improve layout 2025-06-17 16:03:46 -07:00
Enrico Ros ed26e57352 CleanerMessage: display the presence of attachments, images, etc. 2025-06-17 15:40:28 -07:00
Enrico Ros ea8a757b19 Fix drawers size 2025-06-17 07:52:05 -07:00
Enrico Ros b5d1e5f6c9 Revert "CSS: round() ... [WARNING]" - Not risking it in v2-dev.
This reverts commit 142a4495a6.
2025-06-16 13:08:40 -07:00
Enrico Ros 142a4495a6 CSS: round() to fix potential blurs [WARNING]
Warning: older browsers will ignore the entire CSS lines containing round() calls.

However we already introduced top-level layout rounds in 85e4946f (Fix fractional sizes of drawer and pane).

To restore support of old browsers, calls to 'round()' need to be stripped of the round part.
2025-06-16 13:04:34 -07:00
Enrico Ros 7a9a21c02e Drawer/Panel: fix fractional sizes and shade 2025-06-16 12:24:55 -07:00
Enrico Ros a60c84987d Mobile Drawer/Panel: suppress fractional sizes 2025-06-16 09:27:53 -07:00
Enrico Ros 3150900e13 Drawer: remove the thin border line, in case 2025-06-16 09:26:32 -07:00
Enrico Ros 85e4946ff5 Fix fractional sizes of drawer and pane 2025-06-16 09:26:05 -07:00
Enrico Ros dbf6ad70f5 Mark optima input wrappers 2025-06-16 08:54:09 -07:00
Enrico Ros bf7a16559b Panel: restore gaps 2025-06-16 08:47:04 -07:00
Enrico Ros fa4c78c9c2 Remove duplicate models configuration 2025-06-16 08:06:35 -07:00
Enrico Ros 9d99f46f3c Patch to show a 22px first icon rather than 20 2025-06-16 08:00:08 -07:00
Enrico Ros 5dc86c5649 Remove extra Textsms/Outlined icons 2025-06-16 07:59:54 -07:00
Enrico Ros fa82083670 AppChat: nav: improve icon 2025-06-16 07:59:38 -07:00
Enrico Ros fa3bff3e6d Restore active icons 2025-06-16 07:58:22 -07:00
Enrico Ros 9d68b26868 Chats icon 2025-06-16 07:58:11 -07:00
Enrico Ros 47a0214105 Panel: peek after 1 second
Note that we need to remove the leave handler from the hovered button as it's
covered by the panel itself, and that would trigger a loop
2025-06-16 07:30:17 -07:00
Enrico Ros 82ea6fef3d Fix z-index of Selection header 2025-06-15 17:22:28 -07:00
Enrico Ros eec61adad1 Drawer/Pane: fix data-closed 2025-06-15 17:14:32 -07:00
Enrico Ros ada9fb10e8 Drawer/Pane: switch to aria-expanded from aria-hidden (inverted) 2025-06-15 15:29:00 -07:00
Enrico Ros c2bd9c3310 Unnecessary mouse tracking 2025-06-15 12:50:21 -07:00
Enrico Ros ba93062638 Peeking: discard dedicated action getter 2025-06-15 11:44:34 -07:00
Enrico Ros 61366b7096 Panel: add peeking support to the store 2025-06-15 11:44:34 -07:00
Enrico Ros e1dd9c0117 Drawer: remove unused soft unmount 2025-06-15 11:12:02 -07:00
Enrico Ros 407d3d8db4 Panel: optimize with CSS 2025-06-15 11:12:02 -07:00
Enrico Ros 5a2fa26dad Drawer: optimize opening with css 2025-06-15 10:57:21 -07:00
Enrico Ros fd22faeef8 Drawer: adjust 'peek' timings 2025-06-15 10:57:21 -07:00
Enrico Ros 76c5ef46d0 Drawer: 'peek' functionality, for faster chat switch/etc when the drawer is closed (testing) 2025-06-14 20:46:47 -07:00
Enrico Ros 1e725984cd ScratchClip: support for adding the current clipboard content 2025-06-14 17:26:22 -07:00
Enrico Ros 12c6b6f59b Compact: rename from compress/minify 2025-06-13 11:51:01 -07:00
Enrico Ros 4e1d7f0b82 InlineTextArea: auto-select all on edit
Except for the Fusion instructions, and the Compact
2025-06-13 11:11:36 -07:00
Enrico Ros 0635edbfff FormSelectControl: don't overflow (but no wrap for now) 2025-06-13 07:33:21 -07:00
Enrico Ros 07e2ab07ab Improve Reasoning Traces ordering 2025-06-13 07:33:00 -07:00
Enrico Ros 134d82c673 Improve Reasoning Traces messaging 2025-06-13 07:28:00 -07:00
Enrico Ros 947f9c8355 InlineError: fix break on mobile 2025-06-13 07:05:15 -07:00
Enrico Ros 5e6575a63d Tryfix Sharp differently. 2025-06-12 14:16:05 -07:00
Enrico Ros bef61a8547 Revert "Sharp: fix windows build with a win32 dev dependency"
This reverts commit 7eb8c08e6e.
2025-06-12 14:10:05 -07:00
Enrico Ros 7eb8c08e6e Sharp: fix windows build with a win32 dev dependency
Background: Sharp suddenly stopped building, with the following error message:

```
./public/images/covers/release-cover-v1.12.0.png
Error: Could not load the "sharp" module using the win32-x64 runtime
Possible solutions:
- Ensure optional dependencies can be installed:
    npm install --include=optional sharp
- Ensure your package manager supports multi-platform installation:
    See https://sharp.pixelplumbing.com/install#cross-platform
- Add platform-specific dependencies:
    npm install --os=win32 --cpu=x64 sharp
- Consult the installation documentation:
    See https://sharp.pixelplumbing.com/install
    at Object.<anonymous> (PATH\node_modules\sharp\lib\sharp.js:113:9)
    at Module._compile (node:internal/modules/cjs/loader:1730:14)
    at Object..js (node:internal/modules/cjs/loader:1895:10)
    at Module.load (node:internal/modules/cjs/loader:1465:32)
    at Function._load (node:internal/modules/cjs/loader:1282:12)
    at TracingChannel.traceSync (node:diagnostics_channel:322:14)
    at wrapModuleLoad (node:internal/modules/cjs/loader:235:24)
    at Module.<anonymous> (node:internal/modules/cjs/loader:1487:12)
    at mod.require (PATH\node_modules\next\dist\server\require-hook.js:65:28)
    at require (node:internal/modules/helpers:135:16)
```

This happened without changing anything in the system or in the build. It may be a faulty env detection, and it happens across all branches.

Deploying this and trying it out.
2025-06-12 14:01:16 -07:00
Enrico Ros aed5272b6c Roll packages 2025-06-12 13:32:45 -07:00
Enrico Ros 13e0779ced OpenAI: update models 2025-06-11 14:11:14 -07:00
Enrico Ros 702006f6ea Remove the @mui/material dependency (brought in as peer by @mui/icons-material, stuck to ^5 for Joy) 2025-06-11 13:23:13 -07:00
Enrico Ros b4fad03c46 Roll packages deeper 2025-06-11 13:13:07 -07:00
Enrico Ros 77e43a4a7e Roll packages 2025-06-11 13:08:37 -07:00
Enrico Ros cfd21e7abb Roll tRPC 2025-06-11 13:05:31 -07:00
Enrico Ros db490bf4fb Sticky Code: fix on Beam 2025-06-10 22:26:18 -07:00
Enrico Ros bc6f3401f8 Pane: proportional header font 2025-06-10 13:44:19 -07:00
Enrico Ros e5c0079f0e Pane: smaller font 2025-06-10 13:16:54 -07:00
Enrico Ros a68d80f7aa Mobile/Pane: use a +1 scaling bump to still allow for xs 2025-06-10 11:43:43 -07:00
Enrico Ros 872c9e9e3b System -> Show Instruction 2025-06-10 11:25:42 -07:00
Enrico Ros 0e51924e5e OptimaPanelGroupedList: remove margin 2025-06-10 11:21:53 -07:00
Enrico Ros c9460a07ef Preferences: rename 2025-06-10 11:01:42 -07:00
Enrico Ros f8d80730fe Mobile: Panes: larger items 2025-06-10 11:00:14 -07:00
Enrico Ros c2e0cd844b Auto Hide on Mobile too 2025-06-10 10:37:41 -07:00
Enrico Ros 5493896392 BTL: style for mobile nav 2025-06-10 09:03:49 -07:00
Enrico Ros 1ad3cb460e Mobile Nav Menu: style 2025-06-10 09:03:49 -07:00
Enrico Ros 721e23de68 Mobile Drawer: unfilter backdrop 2025-06-10 08:47:58 -07:00
Enrico Ros 97b9f5a232 Mobile Drawer: correctly compress inserted content 2025-06-10 08:47:58 -07:00
Enrico Ros 1a9f5a4fda Mobile: transfer App menu to the Drawer 2025-06-10 08:47:58 -07:00
Enrico Ros b2153a14d8 ChatPane: smaller switch 2025-06-10 06:33:45 -07:00
Enrico Ros 8d6499a91c ChatPane: 2-col actions 2025-06-10 06:33:37 -07:00
Enrico Ros 6d6fbac01f Vector Clock Device IDs: SSR fix 2025-06-09 20:13:08 -07:00
Enrico Ros d576e2387e Improve Vector Clock Device IDs 2025-06-09 20:08:51 -07:00
Enrico Ros 4e255a355f Auto Hide: fix mobile compression with an outer div 2025-06-09 12:17:50 -07:00
Enrico Ros 94401f95d7 Auto Hide on Mobile too 2025-06-09 11:37:32 -07:00
Enrico Ros 739f613881 Auto Hide The Message Box
Added option in "Settings > Labs" to auto-hide the Composer.
Fixes #812
2025-06-08 17:57:56 -07:00
Enrico Ros 5dc24557e6 Chat Pane Focus: Fix 'close other panes' 2025-06-08 12:09:10 -07:00
Enrico Ros 65842a976e Chat Pane Focus: AppChat bits 2025-06-08 12:00:44 -07:00
Enrico Ros c6dfc66a14 RenderCode: sticky overlay 2025-06-07 11:34:08 -07:00
Enrico Ros bc54967720 Metrics: fix tier tokens calculations 2025-06-06 13:01:39 -07:00
Enrico Ros 1112aa292f Gemini: support tiered cache pricing 2025-06-06 11:17:40 -07:00
Enrico Ros 31bb06293d Gemini: auto-caching and reasoning tokens parsing 2025-06-06 11:16:29 -07:00
Enrico Ros 0139f0421b Merge branch 'sam0jones0-v2-dev' into v2-dev 2025-06-06 11:12:14 -07:00
Enrico Ros 4f63e98e7f OpenRouter: support for Anthropic thinking variants via the OpenAI protocol. #811 2025-06-06 11:11:13 -07:00
Enrico Ros c04e147ca7 Gemini: full thinking budget support, including showing the reasoning traces! 2025-06-06 09:36:00 -07:00
Enrico Ros b88feeac2c Gemini: caching pricing 2025-06-06 09:20:36 -07:00
Enrico Ros 0902c35e13 Gemini: support today's 2.5 Pro 2025-06-06 09:20:02 -07:00
Enrico Ros e02ee99d26 Typo 2025-06-05 15:57:51 -07:00
Enrico Ros 313313db1f Add TODO 2025-06-05 12:10:52 -07:00
Sam Jones b7bdae00f8 tidy up 2025-06-05 17:35:39 +01:00
Sam Jones b699a665a1 Use reasoning instead of thinking for openrouter 2025-06-05 17:29:43 +01:00
Sam Jones b28a282aba Remove extraprops 2025-06-05 16:50:41 +01:00
Sam Jones a30d2ca025 fix indent again 2025-06-05 16:48:21 +01:00
Sam Jones f7f3929342 fix indent 2025-06-05 16:47:59 +01:00
Sam Jones 35abb6e69d remove comment 2025-06-05 16:47:23 +01:00
Sam Jones b759be62ea Add Claude 4 via openrouter thinking support 2025-06-05 16:32:15 +01:00
Enrico Ros 9a2db4a6e9 Typo fix 2025-06-04 19:30:51 -07:00
Enrico Ros 5bff478d06 Rationalize Single Desktop Overflow Menu 2025-06-04 16:55:40 -07:00
Enrico Ros 3a7402b03d Nav: hide Tokenizer (dev only and hidden) 2025-06-04 16:55:40 -07:00
Enrico Ros d076e73de6 Add PhSquaresFour icon 2025-06-04 16:55:40 -07:00
Enrico Ros 1d98a994d0 AppNews: dev build info only on mobile 2025-06-04 16:55:40 -07:00
Enrico Ros 3957fae782 ScratchClip: unfilter background 2025-06-04 16:55:40 -07:00
Enrico Ros 72c07faedf App: rc3 2025-06-04 16:55:40 -07:00
Enrico Ros be3b6ee394 App: add BaseProduct 2025-06-04 16:55:40 -07:00
Enrico Ros 61910827e6 News: fix keys 2025-06-04 16:55:40 -07:00
Enrico Ros 6582beaf2a Mobile: nav: hide a few apps 2025-06-04 16:55:40 -07:00
Enrico Ros 840223af6f More permissions for Claude Code 2025-06-04 16:55:40 -07:00
Enrico Ros a084b71682 DBlobs: selective types export 2025-06-04 12:56:09 -07:00
Enrico Ros 1dbe30af3d DBlobs: collapse the portability layer 2025-06-04 12:34:13 -07:00
Enrico Ros e57fbb88bf DBlobs: add a portability layer 2025-06-04 12:21:42 -07:00
Enrico Ros a5002b4c12 Remove asyncCanvasToBlob 2025-06-03 17:28:58 -07:00
Enrico Ros c139884671 Image Viewer: flush DBlob actions 2025-06-03 17:23:40 -07:00
Enrico Ros 2b97b0e0cf Image Viewer: bytes size 2025-06-03 17:23:09 -07:00
Enrico Ros 2e4176d41c Image Viewer: download 2025-06-03 16:54:11 -07:00
Enrico Ros 40d62b6f2d Image viewer extended: Attachment Fragments (user-top), Content 'image_ref' Fragments (assistant t2i / draw t2i). Remove openObjectRLInNewTab 2025-06-03 16:41:05 -07:00
Enrico Ros 43d7e19dfb imageUtils: operate only on Blob, free of base64 converters 2025-06-03 16:04:09 -07:00
Enrico Ros ef06071ab1 addDBImage: more Blob usage with removal of resizeBase64ImageIfNeeded 2025-06-03 15:54:39 -07:00
Enrico Ros 18578a63ec Rationalize openObjectRLInNewTab 2025-06-03 15:54:01 -07:00
Enrico Ros aab0beba93 Remove showing image URL in new tab 2025-06-03 15:40:51 -07:00
Enrico Ros 7d32de50a6 DBlobs: partially remove ContextId 2025-06-03 15:12:56 -07:00
Enrico Ros 57d91e330e Images: push Blob usage deeper 2025-06-03 15:08:55 -07:00
Enrico Ros a81da26452 Images: convert Attachments/Reassembler to use Blobs more than base64/base64dataUrls 2025-06-03 13:13:44 -07:00
Enrico Ros 803f6bbdea Canvas/Video: improve Blobs support 2025-06-02 16:40:53 -07:00
Enrico Ros 10a3669551 ImageUtils: support SVG to PNG conversion 2025-06-02 16:16:45 -07:00
Enrico Ros d910fbcae1 Roll pdfjs (4 -> 5) 2025-06-02 15:26:48 -07:00
Enrico Ros e2a6ee94b0 Update Claude Code settings 2025-06-02 15:21:56 -07:00
Enrico Ros 055a2134e0 Remove punycode overrides as we don't get the npm warning anymore
@ref https://github.com/nodejs/node/pull/56632
2025-06-02 15:05:29 -07:00
Enrico Ros 30310a51ff Roll packages 2025-06-02 14:56:54 -07:00
Enrico Ros be648017f5 Roll tRPC 2025-06-02 14:51:53 -07:00
Enrico Ros e737272a39 Enable Node 24 2025-06-02 14:50:55 -07:00
Enrico Ros d7a5c50ce3 Beam: change models on Merges 2025-06-02 14:16:19 -07:00
Enrico Ros a51d5c315f DeepSeek: update models 2025-06-02 10:17:11 -07:00
Enrico Ros 8c1af95b0e Add claude code permissions 2025-06-02 10:00:37 -07:00
Enrico Ros c4d61fdd21 ChutesAI: add support
This includes prettifying the model name, assuming the interfaces
(images are a force) and auto-sizing the context window based on the
response.
2025-06-02 09:53:05 -07:00
Enrico Ros 6301f1f6b5 BlockPartModelAux: parametrize reasoning colorpalette 2025-06-02 09:40:15 -07:00
Enrico Ros edbe2e55bc Accommodate Chutes.ai / sglang parsing 2025-06-02 09:32:59 -07:00
Enrico Ros 604cf43627 No persona selected: finite duration 2025-06-02 09:28:46 -07:00
Enrico Ros e124669545 Attachments: use Blobs, not ArrayBuffer, unless required 2025-06-01 13:52:20 -07:00
Enrico Ros 9ee7c6dddd Attachments: do not take image attachments to not require domain transfer 2025-06-01 09:46:39 -07:00
Enrico Ros 5136261c8e Attachments: open up to incoming web Blobs support, but still perform the whole chain in base64. 2025-05-30 12:08:53 -07:00
Enrico Ros c9ebb44442 Wire all up to BlobUtils 2025-05-30 11:25:30 -07:00
Enrico Ros 95d9976a2c BlobUtils: minor fix 2025-05-30 11:03:56 -07:00
Enrico Ros 1d177c960f Beam: shift to re-run active Beams 2025-05-30 11:02:27 -07:00
Enrico Ros 81a34ca96c BlobUtils: improve Exceptions 2025-05-30 10:06:19 -07:00
Enrico Ros 9749b44dbb BlobUtils: Add WithMetadata 2025-05-30 09:57:45 -07:00
Enrico Ros 6dfe2a92a1 BlobUtils: reduce zero-length checks 2025-05-30 09:25:06 -07:00
Enrico Ros 44646001c1 BlobUtils: add Blob <> X functions 2025-05-29 17:51:56 -07:00
Enrico Ros 088e67c235 Move server-side functions around 2025-05-29 16:22:20 -07:00
Enrico Ros 0d41c92c01 Revert "DMessageFragment: future: Audio Ref fragments"
This reverts commit ce7699c06b.
2025-05-28 17:29:57 -07:00
Enrico Ros e966674d39 Revert "AudioRef: placeholder render"
This reverts commit c6d3bbd7b9.
2025-05-28 17:29:57 -07:00
Enrico Ros ff74a8ed9c Revert "AudioRef: placeholder tokens"
This reverts commit ab217596d8.
2025-05-28 17:29:57 -07:00
Enrico Ros 64fd32de9a Revert "AudioRef: placeholder CGR"
This reverts commit 61b2bedf5e.
2025-05-28 17:29:56 -07:00
Enrico Ros 6584bb4cd1 Doc edit pane: fix controlled looks 2025-05-28 12:45:17 -07:00
Enrico Ros a9065d1a1e Doc edit pane: fix controlled editing 'ok' 2025-05-28 12:43:47 -07:00
Enrico Ros a22832f741 DocAttachments: show delete on edit empty 2025-05-28 12:42:06 -07:00
Enrico Ros 663a33a895 DocAttachmentFragmentPane: improve looks 2025-05-28 12:37:43 -07:00
Enrico Ros 5f7508633b Doc edit pane: fix formatting 2025-05-28 12:34:12 -07:00
Enrico Ros 6a99f65979 Beam: Fix Edit/Delete in (propagate undefined handlers) 2025-05-28 12:28:43 -07:00
Enrico Ros a983f25fb9 Doc Attachment: fix replacement optionality 2025-05-28 12:17:02 -07:00
Enrico Ros 7119d92321 Doc Attachment: edit titles, move the switch button to inside the tooltip 2025-05-28 12:15:22 -07:00
Enrico Ros 5f1a52d620 GoodModal: support autoOverflow 2025-05-28 12:05:52 -07:00
Enrico Ros 42d58ed202 Allow for empty edited fragments, unless the caller branches otherwise 2025-05-28 11:25:26 -07:00
Enrico Ros 20f0dd5b80 DocAttachmentFragments: support controlled editing (ignore the overlay state) 2025-05-28 11:07:45 -07:00
Enrico Ros d95e8b70b9 BlockEdit_TextFragment: support controlled passive editing (don't run onSubmit, don't overlay the edited text) 2025-05-28 10:45:57 -07:00
Enrico Ros 69d7f3f195 updateFragmentWithEditedText: DX annotations 2025-05-28 10:16:52 -07:00
Enrico Ros 61b2bedf5e AudioRef: placeholder CGR 2025-05-27 17:49:12 -07:00
Enrico Ros ab217596d8 AudioRef: placeholder tokens 2025-05-27 17:49:08 -07:00
Enrico Ros c6d3bbd7b9 AudioRef: placeholder render 2025-05-27 17:49:01 -07:00
Enrico Ros ce7699c06b DMessageFragment: future: Audio Ref fragments 2025-05-27 17:34:12 -07:00
Enrico Ros ca3df18d99 AIX: Gemini: also strip the system prompt - TTS only takes 1 message 2025-05-27 17:28:38 -07:00
Enrico Ros 0f96c9f825 AIX: Gemini: enable TTS models 2025-05-27 17:20:11 -07:00
Enrico Ros d6e41c1026 AIX: Gemini: parse TTS outputs 2025-05-27 17:20:03 -07:00
Enrico Ros bc1d0ef6e9 AIX: Audio: particle reassembly to speech (no Blob save for now) 2025-05-27 17:19:09 -07:00
Enrico Ros 352d1425ca AIX: Audio: model.part for chatGenerateContent, and 2 impls 2025-05-27 17:09:14 -07:00
Enrico Ros f92941f4a2 AIX: Audio: particle & transmit 2025-05-27 17:07:35 -07:00
Enrico Ros 4b6f6728fa AIX: Gemini: improve TTS support 2025-05-27 15:56:17 -07:00
Enrico Ros d12771d408 Beam: quick button beams existing 2025-05-27 15:34:53 -07:00
Enrico Ros 7a679dd7d8 LLMs: Gemini: update models 2025-05-27 15:14:53 -07:00
Enrico Ros 72ae27e419 AIX: Gemini: configure voice 2025-05-27 15:14:38 -07:00
Enrico Ros b5722ac9f5 LLMs: Gemini: disable tts-only, although likely supported 2025-05-27 15:14:22 -07:00
Enrico Ros 60b7a20b71 LLMs: Visual Audio interface annotations 2025-05-27 14:53:34 -07:00
Enrico Ros 33ea55ec9d LLMs: Anthropic: prettier name 2025-05-27 14:53:18 -07:00
Enrico Ros 294b1c1ea3 AIX: Gemini: output modality 2025-05-27 14:50:53 -07:00
Enrico Ros 75e19914cc AIX: basic output modality pattern 2025-05-27 14:50:10 -07:00
Enrico Ros e24bd418b5 AIX: Gemini: parser checker 2025-05-27 14:24:21 -07:00
Enrico Ros 66c1af8333 AIX: Gemini: add maxTemperature 2025-05-27 14:23:30 -07:00
Enrico Ros a0917b4533 llmSelect: fix insufficient width 2025-05-27 14:11:11 -07:00
Enrico Ros 74731d512f Browsing: improve debug (server-side) 2025-05-27 13:58:49 -07:00
Enrico Ros e0e8a94031 Browsing: improve debug 2025-05-27 13:52:23 -07:00
Enrico Ros 67306ec0f7 Remove usersnap 2025-05-27 11:27:38 -07:00
Enrico Ros a42cfe26e7 ProviderSingleTab: add bypass 2025-05-27 09:22:06 -07:00
Enrico Ros 9c63614367 Roll packages 2025-05-26 16:41:35 -07:00
Enrico Ros ccfc129e44 Partially revert 2894c070 - fixed by the tRPC upgrade 2025-05-26 16:39:37 -07:00
Enrico Ros ad3b500781 tRPC: upgrade to .canary.32 to fix the Vercel cloud infra shift. #805 2025-05-26 16:38:53 -07:00
Enrico Ros 2894c07049 Next Edge/tRPC: server-side delay (improved workaround) for the 'Stream closed' issue only. #805 2025-05-26 14:11:02 -07:00
Enrico Ros e189d3e174 Next: 15.1.8 2025-05-26 11:38:49 -07:00
Enrico Ros b9ead56ec4 Anthropic: naming of Claude models 2025-05-26 10:52:30 -07:00
Enrico Ros 48c4ac18ab AIX: emergency fix for a sudden Vercel/tRPC streaming issue
Suddenly all Vercel builds experienced exceptions and connection terminations.

On 2025-05-22 around 8PM CET, Vercel servers started to log errors on tRPC calls.

This fix waits one extra event loop tick, which should work around the issue until a proper fix is found.
2025-05-22 13:22:38 -07:00
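The "wait one extra event loop tick" workaround in the commit above can be sketched as follows. This is a minimal illustration, not the actual fix: the function names and the streaming shape are assumptions.

```typescript
// Hypothetical sketch of the workaround described above: delay the end of a
// streamed response by one event loop tick so the runtime can flush pending
// writes before the connection terminates. Names here are invented.
function nextTick(): Promise<void> {
  return new Promise((resolve) => setImmediate(resolve));
}

async function* streamWithTickWorkaround(
  chunks: AsyncIterable<string>,
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    yield chunk;
  }
  // Wait one extra event loop tick before the stream closes.
  await nextTick();
}
```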
Enrico Ros 48d1bc7635 Anthropic: add Opus 4 and Sonnet 4, with and without extended thinking 2025-05-22 11:25:31 -07:00
Enrico Ros 9112cef5f3 Gemini: added Flash Preview 05-20, also in the non-thinking variant 2025-05-20 17:32:03 -07:00
Enrico Ros ff0183b7e6 animate-enter: modify to a fade 2025-05-20 17:04:47 -07:00
Enrico Ros 14ef63b4d2 uiCounters: make space for the byok notice 2025-05-20 17:04:32 -07:00
Enrico Ros eac6228dde Stacked bar component 2025-05-20 17:04:16 -07:00
Enrico Ros 0d28934f37 FormInputKey: disable autofocus on mobile 2025-05-20 17:02:37 -07:00
Enrico Ros 57b694a93d Roll packages 2025-05-19 15:30:26 -07:00
Enrico Ros c2a1fcc942 Allow node 23 2025-05-19 15:12:31 -07:00
Enrico Ros 3fdd2fb04d Optima: Clipboard history 2025-05-19 12:34:36 -07:00
Enrico Ros 8d9c8f681e GoodModal: fix wrong promise state 2025-05-19 11:22:39 -07:00
Enrico Ros bdfd8fb526 GoodModal: propagate reason, even for the button 2025-05-19 10:46:35 -07:00
Enrico Ros 0d88217a78 Keystroke: size 2025-05-19 10:32:00 -07:00
Enrico Ros b15e27e1d3 Prod: models arrow 2025-05-19 10:31:54 -07:00
Enrico Ros 7db5d84e4d Prod: users icon 2025-05-19 10:28:37 -07:00
Enrico Ros c0a37d618a Prod: make tips less intrusive 2025-05-19 10:27:34 -07:00
Enrico Ros b20db5ff50 ScrollToBottom: Fix ResizeObserver warning 2025-05-19 08:30:23 -07:00
Enrico Ros 43e8d5639c Mobile: Default to Beam quick button 2025-05-17 11:32:31 -07:00
Enrico Ros 92ce0af012 RenderCode: potentially fix the initial syntax highlight (skipped 2 dom levels) 2025-05-17 11:31:45 -07:00
Enrico Ros fe76cfdd8b InlineTextarea: optional external syncing with the initial text, on change 2025-05-14 16:18:54 -07:00
Enrico Ros 738ff07e6a Optima: Heading: clickable text 2025-05-14 12:34:58 -07:00
Enrico Ros e11d3d7407 Optima: Heading: no bottom margin 2025-05-14 12:34:57 -07:00
Enrico Ros 70dd92f54d Flash: adapt for the Dexie x10 2025-05-14 09:59:32 -07:00
Enrico Ros 75381a2798 Flash: recreate v2-dev DBs with the last known stores/indices 2025-05-14 09:10:26 -07:00
Enrico Ros 29bddbc6ed ChatMessage: tint avatar button 2025-05-13 12:50:44 -07:00
Enrico Ros 2ca9baf6ba ChatMessageList: tint messages 2025-05-13 12:50:32 -07:00
Enrico Ros a796a98cd4 Composer: tint composer 2025-05-13 12:29:57 -07:00
Enrico Ros 02749c290c Composer: tint buttons 2025-05-13 12:29:06 -07:00
Enrico Ros ec13a9664c Tintable buttons 2025-05-13 12:29:06 -07:00
Enrico Ros a6d6f69d4e Composer: Mobile: change Quick button 2025-05-13 12:00:16 -07:00
Enrico Ros 6d4fb2b444 UI options: extract PageSize 2025-05-13 11:36:23 -07:00
Enrico Ros a9e3da8b21 llmSelect: Add Models 2025-05-13 11:26:28 -07:00
Enrico Ros 56adb0aa88 llmSelect: show features 2025-05-13 04:41:35 -07:00
Enrico Ros aa9dc1a06f Optima: Nav: mobile App name support 2025-05-13 03:28:09 -07:00
Enrico Ros e503335026 Optima: Dropdowns: 'compact' icons too 2025-05-13 03:21:55 -07:00
Enrico Ros 350aac79b1 Optima: Dropdowns: 'compact' symbols 2025-05-13 03:14:20 -07:00
Enrico Ros 56a36987c6 Prod: restore mobile composer Bg 2025-05-13 02:51:32 -07:00
Enrico Ros 5ef00eb42a Prod: pad the mobile composer 2025-05-13 02:43:43 -07:00
Enrico Ros 6aa52cf5e6 Prod: don't focus the mobile keyboard on new chat 2025-05-13 02:39:53 -07:00
Enrico Ros 6118d0f940 Prod: drop the border of the last message instead of using a mt:-1 on the composer 2025-05-13 02:03:14 -07:00
Enrico Ros bab7afdfba Chat cleanup: readonly array 2025-05-13 01:35:55 -07:00
Enrico Ros 865cf0652b jsonUtils: relax validation to undefined (off by default) 2025-05-13 01:26:09 -07:00
Enrico Ros 7126a952b9 Gemini: Update models 2025-05-12 14:48:19 -07:00
Enrico Ros 66eb325779 OpenAI: update scores 2025-05-12 14:24:04 -07:00
Enrico Ros a55411c150 Auto-launch debugger on error/crit/DEV if the env var is set. 2025-05-12 14:09:25 -07:00
Enrico Ros 77eb6fa97c Save one icon. 2025-05-12 01:45:07 -07:00
Enrico Ros 440b13fa48 Save one icon. 2025-05-12 01:45:07 -07:00
Enrico Ros 02f30524a3 AutoBlocks: mark blocks as partial if collapsed 2025-05-12 01:43:04 -07:00
Enrico Ros 26ad736aa1 Composer: aria: new message 2025-05-12 01:36:13 -07:00
Enrico Ros 040588d708 Rename 'Write' to 'Add' 2025-05-08 13:13:33 -07:00
Enrico Ros 5a635bb532 PageCore: optimize and darken on 'full' width 2025-05-04 16:19:12 -07:00
Enrico Ros e8014fccb3 PageCore: default to 'full' width 2025-05-04 16:16:24 -07:00
Enrico Ros 85586fdf58 PageCore: support brighter 2025-05-04 16:15:43 -07:00
Enrico Ros d819e03c79 RenderCode: optimize 2025-05-03 18:55:52 -07:00
Enrico Ros 0f2def82c1 Default to full width 2025-05-03 18:55:50 -07:00
Enrico Ros 634ae0e213 AltBarBeam: test hiding the maximize button 2025-05-03 14:03:43 -07:00
Enrico Ros d0d2051edf Chat: title for beam 2025-05-03 13:56:41 -07:00
Enrico Ros a2e0ddcf81 AltBarBeam: show chat title 2025-05-03 13:56:28 -07:00
Enrico Ros 82be58b54a Optima: cut bar overflow 2025-05-03 13:55:37 -07:00
Enrico Ros ba18891696 Optima: show out name 2025-05-03 13:55:23 -07:00
Enrico Ros f2df042c0a Feature Badge: fix 2025-05-03 13:54:21 -07:00
Enrico Ros 3547119577 jsonUtils: validate objects to be pure JSON with zod 2025-05-02 13:00:11 -07:00
Enrico Ros 66519ac33e Extract dontblurtextarea 2025-05-01 22:56:48 -07:00
Enrico Ros a8ae3aa124 Cherry-pixel 2025-05-01 18:36:06 -07:00
Enrico Ros 28a00bfb29 Composer: undim 2025-05-01 18:19:04 -07:00
Enrico Ros 8c46abbac3 ChipExpander: fix build 2025-05-01 18:02:32 -07:00
Enrico Ros ae7376a708 Shortcuts: unite rendering 2025-05-01 17:56:27 -07:00
Enrico Ros b2d0844959 ChipToggleButton: optional collapse caret 2025-05-01 17:22:40 -07:00
Enrico Ros a88ca25708 ChipToggleButton: support disabled 2025-05-01 17:22:29 -07:00
Enrico Ros 61acf9e56b ChipToggleButton: support variant 2025-05-01 17:22:01 -07:00
Enrico Ros 8b6ffca2cb ChipExpander: publish styles 2025-05-01 17:21:40 -07:00
Enrico Ros 56e3aa3835 Fix beam wrap post 5b562c66 2025-05-01 14:02:18 -07:00
Enrico Ros 11bbe22d80 OptimaBarDropdown: enlarge to 200px 2025-05-01 13:57:30 -07:00
Enrico Ros 777a6bb29d Pmix: elide on missing llms 2025-05-01 13:55:56 -07:00
Enrico Ros 16b91ba63a useLLMSelect: model dialog accelerator 2025-05-01 03:05:59 -07:00
Enrico Ros 0e0ed3d657 Logger: serialize errors 2025-05-01 01:47:49 -07:00
Enrico Ros 910cbb542e Logger: source forwarding 2025-04-30 22:10:17 -07:00
Enrico Ros 17cd63d445 Logger: prefix function for module adaptation 2025-04-30 22:10:17 -07:00
Enrico Ros 89a4283868 Built with Big-AGI 2025-04-30 22:10:17 -07:00
Enrico Ros 195e167414 @Events: disable the HMR dev warning 2025-04-29 00:27:56 -07:00
Enrico Ros d5a4fadebd LLMs: keep user edits 2025-04-28 20:53:08 -07:00
Enrico Ros 21178f4974 @Events: HMR message 2025-04-28 16:00:15 -07:00
Enrico Ros 80b22e6c2d Continue CGRs ignoring missing image dereferences. #733
This is an emergency workaround where formal correctness yields to an
unbroken chat experience.
2025-04-28 11:43:27 -07:00
Enrico Ros 9e02e0aabd Optimize selector performance for selection 2025-04-27 02:32:34 -07:00
Enrico Ros 3572b94e8f Emotion: improve style performance by removing the Checkbox 2025-04-27 02:11:53 -07:00
Enrico Ros c53fbe8c73 Smallfix. 2025-04-25 19:24:49 -07:00
Enrico Ros 16450a347e Prodia: update to v2 2025-04-25 19:03:12 -07:00
Enrico Ros 9a12164082 Image Generation: with ❤ (heartbeats) 2025-04-25 17:05:34 -07:00
Enrico Ros 3ba3ab41d2 Denoise 2025-04-25 16:13:04 -07:00
Enrico Ros 1ed31199ae OpenAI gpt-image-1: support image transforms/edits 2025-04-25 15:38:44 -07:00
Enrico Ros fc9caa79f8 OpenAI gpt-image-1: bring the images to the server as AIX-alike parts (for future proofing) 2025-04-25 15:38:38 -07:00
Enrico Ros 0c19d011cb OpenAI gpt-image-1: edit wiretypes 2025-04-25 15:38:38 -07:00
Enrico Ros b4eddbbc30 Image Edit: wire image inputs to the generator 2025-04-25 15:38:38 -07:00
Enrico Ros f522f5bbc6 AIX: export image dereferencing & inlining 2025-04-25 15:38:38 -07:00
Enrico Ros fde08e6793 Attachments: support image-only 2025-04-25 15:38:38 -07:00
Enrico Ros 4e8e7fa6cf tRPC fetchers: debug wire curl 2025-04-25 15:38:38 -07:00
Enrico Ros a79806e86c tRPC fetchers: support FormData automatically 2025-04-25 15:38:38 -07:00
Enrico Ros 8c0868418c urlUtils: base64 to Blob 2025-04-25 15:38:38 -07:00
Enrico Ros b90919a4df Composer: attach if can edit image 2025-04-25 11:54:21 -07:00
Enrico Ros 7f2842f9ba T2I: capabilities: can edit 2025-04-25 11:52:38 -07:00
Enrico Ros a7f0771ca9 Attachments: support image-only 2025-04-25 11:51:15 -07:00
Enrico Ros 47315ed4a5 Drawing: improve provider names & icon 2025-04-25 01:06:22 -07:00
Enrico Ros d4df2f989b OpenAI gpt-image-1: improve settings 2025-04-25 01:05:13 -07:00
Enrico Ros 3c369e11ae OpenAI gpt-image-1: settings 2025-04-24 23:56:31 -07:00
Enrico Ros 16ba957f3a OpenAI gpt-image-1: settings 2025-04-24 23:13:41 -07:00
Enrico Ros 88a8b10b95 OpenAI gpt-image-1: createImage server/client 2025-04-24 23:13:41 -07:00
Enrico Ros 3063c9950c OpenAI gpt-image-1: exclude from text models 2025-04-24 23:13:41 -07:00
Enrico Ros fdc5845d90 OpenAI gpt-image-1: wiretypes/parser 2025-04-24 23:13:41 -07:00
Enrico Ros 6f66e2a2bb OpenAI gpt-image-1: bits 2025-04-24 23:13:41 -07:00
Enrico Ros a57a41e676 Form Options: immutable 2025-04-24 23:13:41 -07:00
Enrico Ros 5209d2c416 Wire custom error boundary to the inside 2025-04-24 16:46:26 -07:00
Enrico Ros 64138cdcd2 Wire custom error boundary to the outside 2025-04-24 16:46:20 -07:00
Enrico Ros 80a5db3e91 Error resiliency with custom Error boundary 2025-04-24 16:46:06 -07:00
Enrico Ros fb7dfdf341 Error resiliency on duplicating parts that don't exist anymore 2025-04-24 15:32:51 -07:00
Enrico Ros b0f0e35170 Error resiliency on malformed Fragments data 2025-04-24 15:19:03 -07:00
Enrico Ros cb0cc8b370 AIX: FastAPI: relax parser for missing .object and .created 2025-04-24 12:55:31 -07:00
Enrico Ros 9e8a8cb7db LLMs: FastAPI: list heuristics 2025-04-24 12:48:41 -07:00
Enrico Ros c4959776dc textUtils: add humanReadableBytes 2025-04-23 20:34:35 -07:00
Enrico Ros d50d489de7 misc 2025-04-21 17:28:38 -07:00
Enrico Ros 9472de0246 useLLMSelect: style 2025-04-21 11:30:39 -07:00
Enrico Ros 5d2fff8e53 Beam: disable enter animation 2025-04-21 01:21:45 -07:00
Enrico Ros d68ca9df1e PaneTitle: improve ux 2025-04-21 01:21:24 -07:00
Enrico Ros 04a437e9a6 Beam: optimize reactive open hooks (save 1 cycle/desyncs) 2025-04-21 01:03:22 -07:00
Enrico Ros f7fb8c780b RenderMarkdown: disable preprocessing in-progress messages 2025-04-20 18:02:43 -07:00
Enrico Ros a7ebf8a014 Markdown parser: improved safety 2025-04-20 14:40:07 -07:00
Enrico Ros e950ced1a1 Markdown parser: recursion fix for inline latex math 2025-04-20 14:32:57 -07:00
Enrico Ros 15d5a9cb58 CleanerMessage: optimize entrance 2025-04-20 12:32:55 -07:00
Enrico Ros 5c9747d8eb Remove @t3-oss/env-nextjs 2025-04-18 19:14:20 -07:00
Enrico Ros d308739643 TS-based next.config & ~/server/env build 2025-04-18 19:12:14 -07:00
Enrico Ros 6c5db40bd0 Import t3-oss/t3-env: glue
Adapt the code to work for us.
2025-04-18 19:12:14 -07:00
Enrico Ros f3212291dd Import t3-oss/t3-env
We need to change this locally to migrate to Zod 4 without third-party dependencies.
2025-04-18 18:21:12 -07:00
Enrico Ros 140a829291 LLMs: Gemini: update sorting 2025-04-17 19:02:25 -07:00
Enrico Ros e30d938425 dMessageUtils: Gemini namings 2025-04-17 18:56:48 -07:00
Enrico Ros 521b6a414f Gemini: (geminate :) add the non-thinking variant 2025-04-17 18:39:58 -07:00
Enrico Ros a20d0f970e LLMs: update benchmark scores 2025-04-17 18:36:49 -07:00
Enrico Ros 66c1307112 LLMs: Parameters: support hidden params 2025-04-17 18:31:37 -07:00
Enrico Ros 241a25599f LLMs: Parameters: support initial spec value 2025-04-17 18:31:24 -07:00
Enrico Ros 3f610bf122 AIX: Gemini: full thinking support
Enables the v1alpha API (seems to be more reliable)
2025-04-17 17:25:26 -07:00
Enrico Ros 858cc41a89 Gemini: update models 2025-04-17 17:24:02 -07:00
Enrico Ros 244917faf9 LLMs: support Gemini Thinking Budget 2025-04-17 17:24:02 -07:00
Enrico Ros 08964188ea LLMs: cleanup parameter editor 2025-04-17 17:24:02 -07:00
Enrico Ros 07c96661e7 AIX: Gemini: support thinking budget 2025-04-17 17:24:02 -07:00
Enrico Ros 048940d383 AIX: Gemini: measure reasoning tokens 2025-04-17 17:24:02 -07:00
Enrico Ros 85ad0e1e86 FormSliderControl: start adornment and variant 2025-04-17 17:24:02 -07:00
Enrico Ros f2f06f5d44 Panes: improve title edit 2025-04-17 13:55:37 -07:00
Enrico Ros b8e0ef5340 Feature Badge 2025-04-17 13:37:00 -07:00
Enrico Ros 07608b3fe3 Mic: default timeout 2 -> 5 seconds - too quick (@dc) 2025-04-17 13:36:59 -07:00
Enrico Ros e808509331 Cleanup: allow to un-skip messages 2025-04-16 20:32:05 -07:00
Enrico Ros 2d9a4fccfa ChatDrawer: improve show archived 2025-04-16 18:47:15 -07:00
Enrico Ros 49cf263408 Logger: maximized 2025-04-16 17:45:58 -07:00
Enrico Ros 69e7dc7481 AIX: o3: enable the non-streaming option 2025-04-16 15:36:42 -07:00
Enrico Ros a76fe34a64 LLMs: Groq updates 2025-04-16 15:06:37 -07:00
Enrico Ros 7a3882fe28 LLMs: update CBA ratings 2025-04-16 14:53:48 -07:00
Enrico Ros 6954b79178 AIX: fix o1/3/4 formatting re-enabled 2025-04-16 13:40:12 -07:00
Enrico Ros 2032d045ca More precise text 2025-04-16 13:24:22 -07:00
Enrico Ros e4ee0c3ab6 Flush future utility fn 2025-04-16 13:23:48 -07:00
Enrico Ros 3218d00850 LLMs: OpenAI declutter 2025-04-16 13:13:30 -07:00
Enrico Ros 7f0b4f79ff AIX: o4 support 2025-04-16 13:13:22 -07:00
Enrico Ros 945ba0a34c Beam: debug print 2025-04-16 13:13:10 -07:00
Enrico Ros 676797f0ac LLMs: OpenAI declutter 2025-04-16 12:54:06 -07:00
Enrico Ros 8e89d5dbfc LLMs: OpenAI o3/o4 & full cleanup 2025-04-16 12:52:20 -07:00
Enrico Ros 150eb4e9e2 LLMs: moved some old Azure model defs 2025-04-16 12:45:10 -07:00
Enrico Ros 14766629a2 LLMs: naming: fw-compatible OpenAI ox/gpt-x 2025-04-16 12:44:05 -07:00
Enrico Ros f475c8ae6c ChatDrawer: clear archived if inadvertently set 2025-04-15 23:36:53 -07:00
Enrico Ros 11badbf22c @Events: improve domain helper 2025-04-15 23:30:15 -07:00
Enrico Ros aedc051523 WebSpeech: preserve partial (interim) dictation when truncated ahead of stabilization (e.g. mic transfer/timeout) 2025-04-15 17:24:17 -07:00
Enrico Ros b5336eb63c Simple chat archival support 2025-04-14 18:31:30 -07:00
Enrico Ros 0c85ddd82d LLMs: display image generation and search capabilities 2025-04-14 13:50:17 -07:00
Enrico Ros f0386a21c6 LLMs: Perplexity: update desc 2025-04-14 13:35:45 -07:00
Enrico Ros a7518937f0 LLMs: Perplexity: add Deep Research 2025-04-14 13:34:09 -07:00
Enrico Ros 87b012f0be LLMs: OpenAI: 4.1 models. Fixes #792 2025-04-14 11:30:03 -07:00
Enrico Ros 6a7a34c0b0 LLMs: Anthropic: mark as reasoning 2025-04-11 07:58:31 -07:00
Enrico Ros 9ce29138d2 DNT-DT: safety clears (15s timeout + window blur (light)) 2025-04-11 07:41:38 -07:00
Enrico Ros 95a6e09158 Monotonics: ++Aix 2025-04-10 19:57:40 -07:00
Enrico Ros 0962b79149 LLMs: Gemini: update models
Includes AIX: Gemini: predictLongRunning
2025-04-10 19:56:44 -07:00
Enrico Ros 51ba5304a6 Models: double-check OpenRouter tokenizers (disabled) 2025-04-10 19:32:58 -07:00
Enrico Ros d7137d1311 Models: show reasoning models in list 2025-04-10 19:29:02 -07:00
Enrico Ros d8babc91d5 LLMs: Gemini: sorting 2025-04-10 19:24:15 -07:00
Enrico Ros 3649a79f07 LLMs: XAI chat message namings 2025-04-10 19:24:00 -07:00
Enrico Ros 3992acd9d4 AIX: XAI: update models list + enable reasoning tuning 2025-04-10 19:21:16 -07:00
Enrico Ros b6f130e00b AIX: XAI: models list parser update 2025-04-10 18:31:33 -07:00
Enrico Ros 63c475e24f useGlobalShortcuts: support Backspace 2025-04-10 15:24:07 -07:00
Enrico Ros d8d4f4e8f3 useLLMSelect: 10-100x faster on many models 2025-04-10 14:37:35 -07:00
Enrico Ros e4a2bf8b71 Beam: improve un-max button 2025-04-10 13:09:29 -07:00
Enrico Ros 19a1110bcf Beam: move max icon 2025-04-10 13:09:25 -07:00
Enrico Ros 1997599b33 Logger: reduce icon usage 2025-04-10 13:09:03 -07:00
Enrico Ros 467f24022b Logger: support for DEV 2025-04-10 11:51:38 -07:00
Enrico Ros 3147f9b087 Bubble: tryfix on mobile 2025-04-10 09:17:51 -07:00
Enrico Ros 79e5931a45 Fix extension of text selection. Fixes #788 2025-04-10 09:03:01 -07:00
Enrico Ros 1c9cefb61b AIX: OpenAI-compatible: fix missing reasoning tokens 2025-04-10 08:33:09 -07:00
Enrico Ros 318bf80ad6 Flash: enable saving images too 2025-04-06 17:29:02 -07:00
Enrico Ros bc3a757764 Flash: disable auto-backup on restore (confusing/danger of overwrite) 2025-04-06 16:57:10 -07:00
Enrico Ros 31459c0121 Flash: download works on mobile now 2025-04-06 16:46:43 -07:00
Enrico Ros 87bd9ff08b Flash: improve mobile reliability with streaming of the Flash object 2025-04-06 16:00:14 -07:00
Enrico Ros 972284ec20 Settings: show AIX & Logger on Tools 2025-04-06 14:58:07 -07:00
Enrico Ros 8aaec8e13b Optima: make the last dialogs mut-ex 2025-04-06 14:57:52 -07:00
Enrico Ros f668fb85b2 Flash: tryfix for Mobile JSON truncation 2025-04-06 14:57:12 -07:00
Enrico Ros a5e4ab8f9e AIX Debugger: explain 2025-04-06 14:57:12 -07:00
Enrico Ros a58db6c2bf Optima: parent the AIX Debugger Modal 2025-04-06 14:57:12 -07:00
Enrico Ros cc8c5a4b7c LogViewer: more mobile friendly 2025-04-06 14:56:54 -07:00
Enrico Ros 7027474942 Logger: per-module factory 2025-04-06 13:44:30 -07:00
Enrico Ros 41dd1e4b81 Flash/Backup Data
A new setting to export all the settings in localStorage and IndexedDB into
a single 'flash' file for Big-AGI to reload.

This allows you to quickly and easily migrate a full installation, including images,
from one v2-dev open installation to another.

This won't likely work across other branches, but it's meant to be forward-compatible.
2025-04-06 00:58:25 -07:00
Enrico Ros dd24b33cce Models list: display free models 2025-04-05 12:01:30 -07:00
Enrico Ros a703edab58 Gemini: update models 2025-04-05 11:10:06 -07:00
Enrico Ros 57346617a5 tools/ai/repo-structure: fix on mac/zsh 2025-04-05 10:49:33 -07:00
Enrico Ros 3a8bfb0bb1 @Events - export the app-wide bus type 2025-04-05 10:49:21 -07:00
Enrico Ros cd4e6f0f5e Roll packages 2025-04-05 10:39:52 -07:00
Enrico Ros 1a3037b756 Roll mono: +Aix. 2025-04-05 10:31:11 -07:00
Enrico Ros ef32834e10 OpenRouter: models list: prevent schema changes from breaking working models. Fixes #787 2025-04-05 10:25:12 -07:00
Enrico Ros a684a0fd3b OpenRouter: models list: ignore missing fields on 'openrouter/auto'. Fixes #787 2025-04-05 10:24:40 -07:00
Enrico Ros 44505d0e44 @Events
Introducing the Events module with per-Domain extensibility.

Depends on @Logger.
Requires eventemitter3.

A pleasure to extend, and start using both for Subsystems and AGI events.
2025-04-04 14:19:10 -07:00
Enrico Ros d1589cf665 We need one space here for md presentation 2025-04-03 11:52:06 -07:00
Enrico Ros 4a7b4fbabf Document how to launch tools/ai/repo-structure.sh from anywhere 2025-04-03 11:49:43 -07:00
Enrico Ros ac1b3d7938 Simple repo ls for AI context 2025-04-03 11:41:25 -07:00
Enrico Ros 1686e662b4 DBlob: remove unused 2025-04-02 13:14:27 -07:00
Enrico Ros 67c97e7bd2 Vector Clock removal. 2025-04-02 11:09:34 -07:00
Enrico Ros 805c925e0d LLMs: Disable emojis on isLatest 2025-04-01 00:14:55 -07:00
Enrico Ros 8ffba9cdb5 Star your favorite models 2025-03-30 11:16:47 -07:00
Enrico Ros 5d5290f69d Replace the plantuml-encoder package with our custom implementation 2025-03-29 18:27:31 -07:00
Enrico Ros 563403a7f8 Logger: up debug console 2025-03-29 16:02:47 -07:00
Enrico Ros 5cbf013a8e Conversations: count hook 2025-03-28 15:19:49 -07:00
Enrico Ros 8bee761bb4 Quick edit: update text. #782 2025-03-28 10:47:17 -07:00
Enrico Ros 8bc482abe9 Attachments: rename to Note 2025-03-28 00:19:40 -07:00
Enrico Ros 51fd83cd7f Logger: in Optima modals (shift+ctrl+g) 2025-03-27 13:29:28 -07:00
Enrico Ros a0811c6d25 Logger: auto-capture unhandled 2025-03-27 13:29:28 -07:00
Enrico Ros 77e8497100 Logger: core framework 2025-03-27 13:29:27 -07:00
Enrico Ros b46aaa388b Logger: uuid id 2025-03-27 13:29:27 -07:00
Enrico Ros eeeba2febe ReAct mode: fix /draw 2025-03-26 09:27:21 -07:00
Enrico Ros 75921d08d1 OpenRouter: show reasoning 2025-03-26 09:25:25 -07:00
Enrico Ros 7764d18a8b Gemini: update models 2025-03-26 08:51:41 -07:00
Enrico Ros 797293ad8d Drawer: show open beams 2025-03-24 14:18:26 -07:00
Enrico Ros 7c7f1bcd5f Fix SQL auto-mime detection 2025-03-21 10:06:06 -07:00
Enrico Ros 50a430b353 Diffs: increase contrast 2025-03-20 10:40:50 -07:00
Enrico Ros 5b562c6671 Panels: remove minSize={20} for a reflow issue
When this property is set, a re-layout (force reflow) is performed by the browser even with a simple hovering of the separator.

Since we may have very large walls of text/images, we need to minimize relayouts, so for now, we set a min size on the contained scrollable container instead of preventing the resize.

See also this upstream issue: https://github.com/bvaughn/react-resizable-panels/issues/337
2025-03-20 07:59:22 -07:00
Enrico Ros cb0bf2d2e7 Incognito: better pane titles 2025-03-20 07:57:31 -07:00
Enrico Ros 0b042bb2b5 Incognito: better chat background 2025-03-20 07:57:28 -07:00
Enrico Ros b91fbeb978 Dark: better beam 2025-03-20 07:57:20 -07:00
Enrico Ros d0b84e7ca3 Roll packages deep 2025-03-19 21:35:34 -07:00
Enrico Ros 0edeeb54b4 Roll packages 2025-03-19 21:32:35 -07:00
Enrico Ros e1b2a28f7d Remove unneeded package 2025-03-19 21:31:03 -07:00
Enrico Ros 347c7be899 LLMs: o1-pro 2025-03-19 17:23:08 -07:00
Enrico Ros c71d88d3bf Attachments: change order/text 2025-03-19 16:47:58 -07:00
Enrico Ros 0d4cbe462f Change height for stacked desktop openings. 2025-03-19 16:36:25 -07:00
Enrico Ros a05110cd93 Update browse.router.ts 2025-03-19 09:33:33 -07:00
Enrico Ros 8f6ebe8301 Clean: improve icon 2025-03-18 07:29:59 -07:00
Enrico Ros 818775a12b Dark mode: increase code contrast 2025-03-18 06:45:29 -07:00
Enrico Ros 80b60cdaa8 roll packages 2025-03-17 06:11:54 -07:00
Enrico Ros 69118df912 Remove eventsource-parser.
The code is still laying around in AIX for a second.
2025-03-17 06:09:24 -07:00
Enrico Ros ff65382e06 AIX: 146x faster SSE Demuxer - hand-rolled optimized
The 14,600% speedup compared to eventsource-parser comes from Gemini Image Generation use cases.
2025-03-17 06:07:25 -07:00
Enrico Ros 420b8c49c6 AIX: profiler: force fallback which is class scoped
The 'performance' API is global and conflicting between calls.
2025-03-17 06:07:24 -07:00
Enrico Ros 0f9c02e249 AIX: demuxers: extract eventsource-parser 2025-03-17 02:35:43 -07:00
Enrico Ros 4890a90641 AIX: cleanup/centralize security on Production builds 2025-03-17 00:16:55 -07:00
Enrico Ros 653f0991e0 AIX: really disable Profiling on production builds 2025-03-16 23:51:30 -07:00
Enrico Ros a40efb4780 AIX: Debugger: add Profiler 2025-03-16 23:49:28 -07:00
Enrico Ros feea74268d AIX: Profiler: edge-runtime fallback for missing performance classes
Note: could become the default, for
compatibility, or when in non-dev mode.
2025-03-16 23:16:33 -07:00
Enrico Ros 631582ccbb UX: swap two chat actions 2025-03-16 22:23:14 -07:00
Enrico Ros 4f048a9907 AIX: profiler is inactive by default on both Client and Server
To turn it on, either|or:
- server side: aix.router.ts: DEBUG_LOG_PROFILER_ON_SERVER=true
- client side: DEV BUILD + "debug mode" + DEBUG_LOG_PROFILER_ON_CLIENT=true to show on the console
2025-03-16 22:15:52 -07:00
Enrico Ros a8752ccde0 AIX: dynamic Profiler
This requires EITHER:
- on the server-side, in aix.router.ts, set DEBUG_LOG_PROFILER=true;
- on the client side, and only for Development builds, this is automatic in "Debug Mode"
2025-03-16 22:11:37 -07:00
Enrico Ros feafad0d77 AIX: yield ❤|while awaiting 2025-03-16 18:54:34 -07:00
Enrico Ros 6faa468ed3 AIX: ❤|awaited ops 2025-03-16 17:43:07 -07:00
Enrico Ros ab55804039 AIX: images in the last assistant fragment (e.g. generated by AI) get sent at storage quality (e.g. 98% WebP) 2025-03-16 07:26:22 -07:00
Enrico Ros 05d9bb3bab Gemini: store compressed images. Save 80% at 98% quality (png -> webp) 2025-03-16 07:07:04 -07:00
Enrico Ros 39ae8cd250 Gemini: Image Generation additional details 2025-03-16 06:43:45 -07:00
Enrico Ros 5d34e3eb88 Gemini: Image Generation does not support the system prompt 2025-03-16 06:26:33 -07:00
Enrico Ros ee20441307 Gemini: render generated images 2025-03-16 06:04:19 -07:00
Enrico Ros b12920ae67 AIX: max dev messages size: 4096 2025-03-16 05:54:45 -07:00
Enrico Ros f9ab682559 AIX: reassembler: improve (unawaited) error catching 2025-03-16 05:47:10 -07:00
Enrico Ros d042f7b396 AIX: Asynchronous Reassembler 2025-03-16 04:56:38 -07:00
Enrico Ros d8e4c8a78c Disable animation during debug 2025-03-16 04:40:58 -07:00
Enrico Ros 1e2dcce664 Merge pull request #777 from darthalex2014/Multi-key---lite
AIX: AI: multi-key support. function getRandomKeyFromMultiKey
2025-03-16 02:41:00 -07:00
Enrico Ros ab4af50daf UX: rename the Performance toggle, and strongly advise it off 2025-03-16 02:29:54 -07:00
Enrico Ros 26c83764d9 BEAM: quitting reason 2025-03-16 02:28:40 -07:00
Enrico Ros 85ac64dea1 BEAM: selfish when solo. 2025-03-16 02:28:40 -07:00
Enrico Ros 7305c9d354 AIX: withDecimator 2025-03-16 02:28:40 -07:00
Enrico Ros b99f8e6b14 AIX: chatGenerate "maybe asynchronous" callbacks support 2025-03-16 02:17:21 -07:00
Enrico Ros eb7e2ab92a 1px is 1px 2025-03-16 02:17:20 -07:00
Enrico Ros f7edbfb5af Remove one unnecessary/invisible animation, one optional animation 2025-03-16 02:17:20 -07:00
Alex(GoD) 7c918e4735 AIX: AI: multi-key support. function getRandomKeyFromMultiKey
getRandomKeyFromMultiKey
2025-03-16 10:41:14 +05:00
Enrico Ros 7d4d1e13a0 Gemini: fix file URI 2025-03-15 06:43:34 -07:00
Enrico Ros dbe58e30c4 Gemini: fix build - this hasn't landed yet 2025-03-15 06:41:34 -07:00
Enrico Ros d2aa97b889 Gemini: wires for image generation 2025-03-15 06:38:09 -07:00
Enrico Ros 0eac3e3aca Gemini: document the next steps for Grounding 2025-03-15 04:49:02 -07:00
Enrico Ros 75d61d0604 Gemini: support setting the civic integrity block threshold 2025-03-15 04:45:43 -07:00
Enrico Ros 2f7b053f96 Gemini: grounding basic support (not on in the UI) #773
What's left:
- figure out how to turn this on/off
- figure out which models can or cannot use it, without having too much to maintain
- figure out the runtime implementation
- parse the annotation ranges
- render the original icons
- figure out how to escape the Vertex rewriting of URLs
2025-03-15 04:38:12 -07:00
Enrico Ros 5ab5a85b73 Gemini: hotfix image output 2025-03-15 03:56:16 -07:00
Enrico Ros 1d7da8fa8c Gemini: allow using 2.0 Experimental - Flash Image Generation 2025-03-15 03:26:00 -07:00
Enrico Ros 727b2edf74 Gemini: improve the parser 2025-03-15 03:25:40 -07:00
Enrico Ros 6caff0ca59 Gemini: citations (recitation detection) support: disable by default: poor websites 2025-03-15 02:24:20 -07:00
Enrico Ros b41f930d08 Gemini: improve response parsing (excl. candidates) 2025-03-15 01:19:41 -07:00
Enrico Ros 5a70d926cb Gemini: report the actual model used for the generation, not what was requested 2025-03-15 00:41:12 -07:00
Enrico Ros dbfe7b734c Gemini: gemma-3 does not support input images 2025-03-15 00:30:25 -07:00
Enrico Ros 8acf5df3aa Gemini: improve wiretypes 2025-03-15 00:23:39 -07:00
Enrico Ros f3b882ca2f AIX: update readme 2025-03-14 23:55:23 -07:00
Enrico Ros 94adf3cda6 Azure: instructions on how to add models customizations via code. #774 2025-03-14 23:23:17 -07:00
Enrico Ros bfacaa6cf8 Azure: full cleanup, supports for any OpenAI model (via auto weak mapping) #774
Also, separate from the OpenAI models, but still resolve params in the OpenAI set.
2025-03-14 23:15:08 -07:00
Enrico Ros 0033debb90 Azure: upgrade the chat generation API version (keep the deployments list), #774 2025-03-14 23:13:34 -07:00
Enrico Ros 20f2bda6ed Azure: auto-fix temperature and max_tokens, #774 2025-03-14 23:12:42 -07:00
Enrico Ros bcc278c9cf OpenAI: contrary to the docs, the Search Preview models don't support image inputs 2025-03-14 22:10:23 -07:00
Enrico Ros 75ccac221d Composer: change debounce deadlines 2025-03-13 17:31:53 -07:00
Enrico Ros d90dd90a4a Optimization: render the message labels less 2025-03-13 17:26:33 -07:00
Enrico Ros d9156ce66c Composer: improve from useDebouncer 2025-03-13 17:10:57 -07:00
Enrico Ros 61457681e1 LLMs: Gemini fix & update (Gemma 3, see notes).
Somehow the developer instruction is not enabled for Gemma3-IT, and we got this message:
"Gemini: Bad Request - Developer instruction is not enabled for models/gemma-3-27b-it"

So we convert any System message to a User message instead (see the hotfix)
2025-03-12 04:02:06 -07:00
Enrico Ros bf5019108e LLMs: Ollama: update 2025-03-12 04:02:05 -07:00
Enrico Ros 622edec2fb Improve Void fragments render 2025-03-12 03:31:07 -07:00
Enrico Ros dac02f81c0 Improve Annotations render 2025-03-12 03:25:00 -07:00
Enrico Ros d8037ebd8d Improve Chat short model names 2025-03-12 03:24:44 -07:00
Enrico Ros fba1bac8d2 OpenAI: move models 2025-03-12 03:24:32 -07:00
Enrico Ros 510fbd293b Blocks: separate Void Parts from Content parts 2025-03-12 02:26:33 -07:00
Enrico Ros ab8c974e6f Options: improve 2025-03-12 02:26:32 -07:00
Enrico Ros 870f5afcfb Options: framework and parser 2025-03-12 02:15:36 -07:00
Enrico Ros 6192bda94f DVoidWebCitation: fix immutability
Actually I don't know why I need to declare those
as readonly arrays in the base objects.
2025-03-12 01:59:44 -07:00
Enrico Ros 3f701fcee3 Void Annotations: render 2025-03-12 01:36:30 -07:00
Enrico Ros 524d049d74 AIX: Perplexity: parse citations 2025-03-12 01:35:46 -07:00
Enrico Ros 983e964e36 AIX: OpenAI: parse annotations/citations 2025-03-12 01:35:46 -07:00
Enrico Ros 84f989d6da AIX: Annotation particles reassembly 2025-03-12 01:35:46 -07:00
Enrico Ros 49356fa769 AIX: Annotation particles transmission 2025-03-12 01:35:46 -07:00
Enrico Ros 2a6a03da64 DMessage: Void Annotation fragments 2025-03-12 01:35:46 -07:00
Enrico Ros fd17860dd8 Add auto-render of domain icons 2025-03-12 01:35:46 -07:00
Enrico Ros 46fea48b6e UrlUtils: add URL domain/prettyHref extraction 2025-03-12 01:35:28 -07:00
Enrico Ros 54ef248df5 AIX: OpenAI: support for web_search_options 2025-03-11 19:18:00 -07:00
Enrico Ros 2dfb8990d2 AIX: support for Search Context & Location 2025-03-11 19:17:45 -07:00
Enrico Ros a50ac8167b OpenAI: Context & Location search parameters 2025-03-11 19:15:15 -07:00
Enrico Ros 86baab6858 AIX: Debugger: don't show particles by default (heavy) 2025-03-11 19:13:44 -07:00
Enrico Ros 67c18bb0af Utils: webGeolocation 2025-03-11 18:52:12 -07:00
Enrico Ros c4584c27ef OpenAI: update sorting 2025-03-11 17:48:26 -07:00
Enrico Ros 0022439bba OpenAI: support all Search Preview (online) models. 2025-03-11 17:38:37 -07:00
Enrico Ros 5a81ef573c Roll packages deep 2025-03-11 11:20:45 -07:00
Enrico Ros 6f7ea5c7df Roll packages 2025-03-11 11:19:19 -07:00
Enrico Ros 926452bd55 Follow Enter/newline preferences for edits. Fixes #760. Fixes #770. Closes #771. 2025-03-10 20:25:10 -07:00
Enrico Ros b5eeb6945c Vector clocks note added to Data Ownership. 2025-03-10 20:20:58 -07:00
Enrico Ros 241ba623cc Vector clock device IDs 2025-03-10 20:00:05 -07:00
Enrico Ros cbd3099fa5 YT Transcript: warn of broken downloads 2025-03-10 16:43:51 -07:00
Enrico Ros 49e12e2a0b stores/{ui, ux-labs}: move 2025-03-10 12:15:21 -07:00
Enrico Ros 4b405af0e4 Release Notes
Release notes
2025-03-07 18:30:12 -08:00
Enrico Ros 578ef40106 Tech: show shipped 2025-03-07 16:33:03 -08:00
Enrico Ros f6e76b0fb9 Version update 1.92 2025-03-07 16:30:53 -08:00
Enrico Ros 17549bfe29 Cleanups: remove magic emojis 2025-03-07 15:32:22 -08:00
Enrico Ros 7915aed388 SearchParams: for future URL state 2025-03-06 03:06:08 -08:00
Enrico Ros e26c23e238 DNT-DT: wire up 2025-03-05 19:11:22 -08:00
Enrico Ros fb5da15245 DND-DT: GlobalDragOverlay
Note: to enable, add the overlay to OptimaLayout
2025-03-05 19:11:22 -08:00
Enrico Ros 0021e4f354 DND-DT: composer without zIndex 2025-03-05 19:11:22 -08:00
Enrico Ros afa850231c PostHog: dynamic loading to reduce bundle size 2025-03-05 19:11:22 -08:00
Enrico Ros 935dc7ddaf anim: add fade In 2025-03-05 18:27:29 -08:00
Enrico Ros ac08eec0e4 restore React Strict mode 2025-03-05 18:27:29 -08:00
Enrico Ros 5deb062e5f DND-DT: move 2025-03-05 15:02:24 -08:00
Enrico Ros 8e33fdbae5 Panes: allow delete (disabled) 2025-03-05 14:44:36 -08:00
Enrico Ros 403e6fbe37 Resize handler: noop basically 2025-03-05 14:26:55 -08:00
Enrico Ros 071c43997e MOTD: use TimeAgo 2025-03-05 14:03:38 -08:00
Enrico Ros 04f9512c2a Vercel: serverless functions timeout to 60
Do it the proper way, as the vercel_Production file only caused troubles.
2025-03-05 13:49:48 -08:00
Enrico Ros b9bc4421a3 Panes: adapt API 2025-03-05 13:05:25 -08:00
Enrico Ros b2efd5af0a InlineTextarea: centered text 2025-03-05 13:05:18 -08:00
Enrico Ros 264a2f9449 MOTD: fix 2025-03-05 13:02:34 -08:00
Enrico Ros 561959e960 Panes: fading 2025-03-05 12:59:51 -08:00
Enrico Ros 41a5f9a775 Panes: Edit Title, close, close others 2025-03-05 12:59:47 -08:00
Enrico Ros 9a61e04293 Panes: adapt to API change 2025-03-05 12:59:40 -08:00
Enrico Ros 3f1e01c6f9 Panes: rename removeNonFocused 2025-03-05 12:58:12 -08:00
Enrico Ros 12eabf86cf Allow MOTD dismissal, but for every hash key. 2025-03-04 17:15:17 -08:00
Enrico Ros 82d39d3256 Env-vars: document Message of the day and new variables. 2025-03-04 16:33:16 -08:00
Enrico Ros a1921e6fa4 Add the capability to display a MOTD - message of the day. For 2025-03-04 16:32:53 -08:00
Enrico Ros a5463fabe5 Further rationalize Build env var access 2025-03-04 15:59:33 -08:00
Enrico Ros 26f71ddedd PostHog: add functionality and documentation if you want to use it 2025-03-04 15:44:36 -08:00
Enrico Ros bdc2f7e8e1 PostHog: add packages 2025-03-04 13:56:38 -08:00
Enrico Ros 2083be39da roll packages 2025-03-03 15:26:03 -08:00
Enrico Ros 521419a5aa Title: Move overlay 2025-02-28 04:34:27 -08:00
Enrico Ros 5bf9270d5d Multi-pane Titles 2025-02-28 04:18:03 -08:00
Enrico Ros 2b55921830 AIX: options override: be safe without the ref 2025-02-28 02:15:49 -08:00
Enrico Ros 707ffa162e AIX: Debugger: enable any context 2025-02-27 22:32:29 -08:00
Enrico Ros 19848da7c3 AIX: Debugger: wire more 2025-02-27 22:32:02 -08:00
Enrico Ros 334df849b3 AIX: Debugger: first version 2025-02-27 22:30:38 -08:00
Enrico Ros 801d34692b AIX: Debugger: wire Aix Client 2025-02-27 22:29:56 -08:00
Enrico Ros 0aa70f2b80 AIX: Debugger: reactive store 2025-02-27 22:26:51 -08:00
Enrico Ros 5ad11a8b75 AIX: Debugger: rename Dispatch Request loopback 2025-02-27 22:25:37 -08:00
Enrico Ros 3f1bed3b6e Anthropic: auto-limit the thinking budget 2025-02-27 18:35:03 -08:00
Enrico Ros ca3668dd60 Gemini: cleanup models 2025-02-27 18:33:17 -08:00
Enrico Ros b3ae2b1cbc Gemini: remove non existing models from our definitions, and add a check 2025-02-27 18:33:15 -08:00
Enrico Ros f6abca0663 Gemini: update models 2025-02-27 18:33:13 -08:00
Enrico Ros 084ff69239 Anthropic: update header docs 2025-02-27 18:33:10 -08:00
Enrico Ros 8d31be462a OpenAI: fix model order 2025-02-27 16:55:46 -08:00
Enrico Ros 6d010c0ef1 Metrics: show the speed section also if the wait exceeded 10 seconds 2025-02-27 16:55:39 -08:00
Enrico Ros dfc37fb2d4 Metrics: require at least 40 tokens to compute speed (and it's a very low bar) 2025-02-27 16:48:39 -08:00
Enrico Ros 56cd7b0b4f Metrics: compensate reasoning tokens 2025-02-27 16:44:23 -08:00
Enrico Ros 0060739bd2 Metrics: hmm 2025-02-27 16:44:10 -08:00
Enrico Ros e98f86d878 Metrics: improve render 2025-02-27 16:35:04 -08:00
Enrico Ros 1683790315 Metrics: render tok/s and wait 2025-02-27 16:31:39 -08:00
Enrico Ros 3c32c906de Metrics: store dtStart and vTOutInner where available 2025-02-27 16:11:38 -08:00
Enrico Ros d8c9c50743 OpenAI: official 4.5 support 2025-02-27 15:44:43 -08:00
Enrico Ros 2fc6febfaf LLM types: small sort 2025-02-27 15:10:44 -08:00
Enrico Ros f49c679005 Optima Dropdown: faster, better style 2025-02-27 01:20:53 -08:00
Enrico Ros 67206a3c4d AppChat: improve borders 2025-02-27 01:20:46 -08:00
Enrico Ros ed23f1d243 roll: misc deep 2025-02-26 19:55:37 -08:00
Enrico Ros 3b8c6c8c06 roll: Lock NextJS to 15.1 2025-02-26 19:53:04 -08:00
Enrico Ros e0c956e3e7 roll: Types for React 19 2025-02-26 19:48:43 -08:00
Enrico Ros 6efff8b285 React: fix useRef for React 19 2025-02-26 19:46:18 -08:00
Enrico Ros 4422c6c803 Incognito: improve appearance 2025-02-26 18:33:24 -08:00
Enrico Ros 511b9241f5 FormLabelStart: optimize 2025-02-25 13:09:12 -08:00
Enrico Ros 89549ebeef LLM Params Editor: support simplify 2025-02-25 13:00:53 -08:00
Enrico Ros bdb24f6da1 Fragments: fix types 2025-02-25 05:00:36 -08:00
Enrico Ros d7bc03f0a9 AIX: Dispatch/CGR: adapters for Thinking Blocks (only Anthropic is implemented)
Note: the ModelAux/reasoning block is only sent if there's a signature or there is redacted data.

We could even further reduce its sending to only Anthropic llms in CGR.
2025-02-25 04:37:38 -08:00
Enrico Ros 64c18e3f68 Fragments: have to deal with this string[] 2025-02-25 04:34:46 -08:00
Enrico Ros 7bba7e0c32 AIX: TRR particle reassembler fix 2025-02-25 04:34:17 -08:00
Enrico Ros e48b3f0f8e Render Block parts 2025-02-25 03:39:37 -08:00
Enrico Ros 31da502123 AIX: Anthropic: parser: S/NS TRR particles 2025-02-25 03:39:09 -08:00
Enrico Ros 9c64bbdd60 AIX: Anthropic: parser: exhaustive checks 2025-02-25 03:21:20 -08:00
Enrico Ros f4c1b0c1da AIX: TRR particle transmitter/reassembler 2025-02-25 03:21:20 -08:00
Enrico Ros c761e9fe38 AIX: mirror the Aux fragment 2025-02-25 03:19:58 -08:00
Enrico Ros e66aaaf98a Fragments: finalize the Aux fragment 2025-02-25 03:19:58 -08:00
Enrico Ros 58b5811d9e Fragments: small fix 2025-02-25 02:38:26 -08:00
Enrico Ros 3b3429d77a LLMs: document interfaces 2025-02-25 01:32:37 -08:00
Enrico Ros 98eb1a6694 Chat AI: keep last Thinking block only (default) 2025-02-25 00:26:22 -08:00
Enrico Ros 91929a3217 Chat AI settings: renames 2025-02-24 23:30:45 -08:00
Enrico Ros 5eecbc43be Chat AI settings: categories 2025-02-24 23:27:59 -08:00
Enrico Ros 609502c545 LLMs: don't control temperature when controlling Anthropic's Thinking Budget (temp=1) 2025-02-24 19:52:41 -08:00
Enrico Ros d0b420f9a1 AIX: Anthropic: wire Response: Thinking/RedactedThinking blocks - NOT matched by AixWire_Particles AND NOR AixWire_Parts 2025-02-24 19:40:48 -08:00
Enrico Ros 1222c53a1a AIX: Anthropic: wire Request: Thinking blocks 2025-02-24 18:57:19 -08:00
Enrico Ros 7b2d51e6c9 AIX: Anthropic: adapter support for the Thinking Budget 2025-02-24 18:57:19 -08:00
Enrico Ros 46cb286839 AIX: Anthropic: framework support for Thinking Budget (nullable number) 2025-02-24 18:57:19 -08:00
Enrico Ros 2e6f0c06fb AIX: Anthropic: adapter misc (Documents, unused for now)
This pairs with the Citations mechanism, that's not yet added to the wires.
2025-02-24 18:57:19 -08:00
Enrico Ros 31c138dacb AIX: improve user-visible message 2025-02-24 18:57:19 -08:00
Enrico Ros e428683ec7 LLMs: Anthropic: add the Thinking variant 2025-02-24 18:57:07 -08:00
Enrico Ros b6462225a7 LLMs: define, edit, and optionally spec the vendor model parameter 'Anthropic thinking budget' 2025-02-24 15:54:12 -08:00
Enrico Ros dfc110ca05 LLMs: enable model variants 2025-02-24 15:21:18 -08:00
Enrico Ros f55bd26f2e Anthropic: improve flags composition 2025-02-24 14:11:14 -08:00
Enrico Ros 603b6b90df Anthropic: 3.7 dMessageUtils 2025-02-24 12:55:52 -08:00
Enrico Ros 2c132ae2cf Anthropic: auto-created-date 2025-02-24 12:55:52 -08:00
Enrico Ros c7f4ad5a31 Anthropic: update 3.7 output size 2025-02-24 12:55:52 -08:00
Enrico Ros b9d5593895 Anthropic: update models 2025-02-24 12:55:52 -08:00
Enrico Ros 6a833fc141 LLM Options: just slight better display 2025-02-24 12:55:52 -08:00
Enrico Ros 4e1ad84831 CloseablePopup: memo 2025-02-24 00:48:31 -08:00
Enrico Ros e90bcdf1a3 ERC: fix overlapping menus and non-closing menus on rmb click 2025-02-24 00:48:23 -08:00
Enrico Ros dfbb346180 BeamView: comment for LLMs 2025-02-23 22:41:12 -08:00
Enrico Ros 2d5b97f68f Draw: fix 2025-02-23 15:55:06 -08:00
Enrico Ros 32826f1e4d Draw: improve # 2025-02-23 15:41:04 -08:00
Enrico Ros b1ed1d624a Draw: image settings 2025-02-23 15:40:55 -08:00
Enrico Ros 06c4040334 No tips on draw 2025-02-23 15:24:25 -08:00
Enrico Ros b71c389f5c Uniform model icons 2025-02-23 14:37:11 -08:00
Enrico Ros 5557de6dc3 Fragments: support placeholders with purpose 2025-02-23 03:15:27 -08:00
Enrico Ros ccdcd24d22 LLMs: fix 'buttons can wrap' 2025-02-23 03:13:06 -08:00
Enrico Ros c410a655ea Fix latex/markdown rendering: preserve leading space when re-encoding for 'remark-math'. Fixes #763 2025-02-23 01:49:31 -08:00
Enrico Ros 2fd84ae57c Nav: disable incomplete 2025-02-23 01:31:37 -08:00
Enrico Ros b760b717ef Imagine: fix prompt and algo 2025-02-23 01:00:08 -08:00
Enrico Ros acf9bd8663 AppChat: Draw: "draw options" on desktop 2025-02-23 01:00:08 -08:00
Enrico Ros 7327f1440e AppChat: Draw: support N images 2025-02-23 01:00:08 -08:00
Enrico Ros 87d8c10905 AppChat: Draw: suspend other elements 2025-02-23 00:35:12 -08:00
Enrico Ros ee45f3cae9 AppChat: Draw: inline enhancements 2025-02-23 00:15:11 -08:00
Enrico Ros 195255ce9a roll packages 2025-02-23 00:15:11 -08:00
Enrico Ros 0e4fda0c5a Draw/Provider: share style 2025-02-22 23:16:53 -08:00
Enrico Ros f1babdee60 Draw/Provider: rename 2025-02-22 23:16:53 -08:00
Enrico Ros a703d85688 T2I settings: remove popup, overflows on mobile 2025-02-22 23:16:53 -08:00
Enrico Ros 0cd677cb39 T2I settings: use chips for the active service 2025-02-22 23:16:53 -08:00
Enrico Ros 9fe11fb6e2 Add FormChipControl: swappable for the Radio Control 2025-02-22 23:16:53 -08:00
Enrico Ros 58451b17dc Optima: export dropdown slotProps 2025-02-22 23:16:53 -08:00
Enrico Ros cba924a31a Phosphor: add settings 2025-02-22 23:16:53 -08:00
Enrico Ros 74e50d1cb2 Beam: don't re-run when ctrl+enter when editing 2025-02-22 23:16:53 -08:00
Enrico Ros bd1c01b4e1 Remove unused 2025-02-22 20:34:27 -08:00
Enrico Ros 541fa4aa28 Code Icon 2025-02-22 20:34:15 -08:00
Enrico Ros 4dd03c7bd6 Fix port 2025-02-22 20:34:07 -08:00
Enrico Ros 3a2de83920 Auto-scale side menu 2025-02-22 19:55:21 -08:00
Enrico Ros 2ef5d339c6 Misc simplify 2025-02-22 19:42:13 -08:00
Enrico Ros 6355098703 Backport smallie 2025-02-22 19:33:34 -08:00
Enrico Ros a10a953097 Big-AGI logos 2025-02-20 17:33:32 -08:00
Enrico Ros 99293d9841 Optima: large UI cleanups 2025-02-20 16:33:18 -08:00
Enrico Ros 6d409e4df5 Optima: Side Paneling 2025-02-20 16:33:11 -08:00
Enrico Ros 2fceef4f0c Fix max/fullscreen icons 2025-02-20 15:49:44 -08:00
Enrico Ros 7577e64085 Show last used chat mode in dev settings. 2025-02-20 14:41:40 -08:00
Enrico Ros 4a9750865f LLM domain capabilities checking: warn about proceeding with an LLM without requirements, but don't bail 2025-02-20 14:31:41 -08:00
Enrico Ros fba0685266 LLM domain autoconfiguration includes the function calling detection 2025-02-20 14:27:55 -08:00
Enrico Ros e3fa1c740d Reconfigure Code/Fast if not present after a full reconfig. 2025-02-20 14:26:59 -08:00
Enrico Ros de190f6d41 LLM Attachments: stay in tooltip 2025-02-20 14:26:19 -08:00
Enrico Ros 7a5bc39376 Gemini: fix model capabilities 2025-02-20 14:26:02 -08:00
Enrico Ros c0b67653de RenderCode: fix fullscreen 2025-02-20 14:25:48 -08:00
Enrico Ros c6b1bd2f3a Advanced AI settings: improve all settings 2025-02-20 13:57:53 -08:00
Enrico Ros ae5c30af6b FormLabelStart: support warnings 2025-02-20 13:51:39 -08:00
Enrico Ros a513378d73 autoChatFollowUps: code model only 2025-02-20 13:51:30 -08:00
Enrico Ros 5b63c12958 Gemini: thinking models do not do FC 2025-02-20 13:51:30 -08:00
Enrico Ros f3fec33085 Code model editing. 2025-02-20 13:11:08 -08:00
Enrico Ros 3a071af42d LLMs: get from domain 2025-02-20 12:49:35 -08:00
Enrico Ros a06a863745 Revert "Mic: Enter/Ctrl+Enter interceptors to Send/Beam"
This reverts commit 93f2cf4bce.
2025-02-20 12:47:14 -08:00
Enrico Ros 93f2cf4bce Mic: Enter/Ctrl+Enter interceptors to Send/Beam 2025-02-20 12:34:10 -08:00
Enrico Ros 0b70728f04 Mic: disable focus on the Composer Textarea while active 2025-02-20 12:32:03 -08:00
Enrico Ros b12f422db6 Shortcuts: Esc comes first 2025-02-20 12:31:55 -08:00
Enrico Ros 13681deaa1 Nav: strings 2025-02-20 12:05:05 -08:00
Enrico Ros d2d43af0df Nav: breadcrumbs 2025-02-20 12:05:05 -08:00
Enrico Ros 500f053afd Settings: update 2025-02-20 12:05:04 -08:00
Enrico Ros 8cf9b06d7b Remove App.pl 2025-02-20 10:32:58 -08:00
Enrico Ros 88002fd78b Rename TenantSlug 2025-02-20 09:16:30 -08:00
Enrico Ros c4684d2dab Fw compat key name 2025-02-20 08:43:44 -08:00
Enrico Ros e46a244fea Move GA 2025-02-20 08:19:08 -08:00
Enrico Ros c940de6cd7 Perplexity: update models 2025-02-19 19:52:23 -08:00
Enrico Ros c391ecc7a9 Ollama: update models 2025-02-19 19:52:23 -08:00
Enrico Ros d65ad7324d OpenAI: small text updates 2025-02-19 18:35:11 -08:00
Enrico Ros a68ffd5339 Groq: update models pricing 2025-02-19 18:26:50 -08:00
Enrico Ros 59736d19af Deepseek: update prices 2025-02-19 18:20:44 -08:00
Enrico Ros 9967f09566 Alibaba: fix pricing 2025-02-19 16:01:11 -08:00
Enrico Ros 3d7e4ebb71 Alibaba Cloud support, incl Qwen Max, Plus, Turbo. Fixes #759 2025-02-19 15:54:34 -08:00
Enrico Ros c9457f7610 Block Editor: set FORCE_ENTER_IS_NEWLINE=undefined in the code to disable Shift+Enter to save, and follow the App preferences instead. Fixes #760. 2025-02-19 14:21:48 -08:00
Enrico Ros 13aef1fd89 xAI: update models 2025-02-18 14:12:57 -08:00
Enrico Ros a9548747cd Shortcuts: fix jumpiness 2025-02-18 13:51:26 -08:00
Enrico Ros 0da4cd6eb1 Empty Inline Links renderer 2025-02-18 10:44:12 -08:00
Enrico Ros 083246bea1 FireworksAI: small doc change 2025-02-18 00:15:47 -08:00
Enrico Ros 9f372ebd72 FireworksAI: support via custom OpenAI on https://api.fireworks.ai/inference 2025-02-18 00:11:18 -08:00
Enrico Ros cdf4c96ed6 Notice on approximate tokenizer 2025-02-17 22:50:46 -08:00
Enrico Ros c757b57e07 GA: application build stats 2025-02-17 22:50:46 -08:00
Enrico Ros 6629585b32 GA: remove @next/third-parties/google 2025-02-17 22:12:21 -08:00
Enrico Ros ad96d6ce66 Dockerfile: deployment type 2025-02-17 21:26:09 -08:00
Enrico Ros 5877dc1e24 Dockerfile: build information 2025-02-17 21:09:04 -08:00
Enrico Ros 908a6b808b Dockerfile: new env=value format 2025-02-17 20:30:18 -08:00
Enrico Ros fbd41fae7f roll residuals 2025-02-17 19:25:42 -08:00
Enrico Ros f9ff37c820 roll packages 2025-02-17 19:18:19 -08:00
Enrico Ros eed91491aa Types: immutable (deeply) 2025-02-17 19:13:38 -08:00
Enrico Ros 6faf9db2ba Azure: add note about AI Foundry. #757 2025-02-17 08:30:21 -08:00
Enrico Ros 713fd7fc22 Azure: rename to Azure OpenAI. #757 2025-02-17 08:29:46 -08:00
Enrico Ros d86ce3ac2f AIX: capitalize dialect in exceptions 2025-02-17 08:17:21 -08:00
Enrico Ros 076163ccfd Diagram - improve title 2025-02-16 01:35:48 -08:00
Enrico Ros 8f74c26f77 Space between radios 2025-02-16 01:35:25 -08:00
Enrico Ros 1b37ed61e3 Update text 2025-02-16 01:14:51 -08:00
Enrico Ros c6a421e61b Panel: Zero improvement 2025-02-15 14:26:54 -08:00
Enrico Ros 550a60f4af Panes: Zero notices 2025-02-15 14:04:24 -08:00
Enrico Ros 01a6901bfe Panes: add an empty split when not branching 2025-02-15 13:48:18 -08:00
Enrico Ros e655aa5bbd Pane Manager: cleanup 2025-02-15 13:17:10 -08:00
Enrico Ros f02409c5a9 Stores: cleanup 2025-02-15 13:12:31 -08:00
Enrico Ros 8524473488 o1: re-enable streaming now that OAI supports it 2025-02-15 12:48:28 -08:00
Enrico Ros 0b039c6453 Add Toggle 2025-02-14 15:00:29 -08:00
Enrico Ros 62250abe8b Improve multichat on mobile 2025-02-14 14:21:13 -08:00
Enrico Ros 5b0fc66cb1 Groq: update models 2025-02-14 13:38:32 -08:00
Enrico Ros ffa15c274b chat-store: merge (not replace) conversations from storage 2025-02-14 13:13:17 -08:00
Enrico Ros 09596000d7 Improve multichat icon 2025-02-14 13:05:05 -08:00
Enrico Ros 8e7a5e7d60 LLMs: improve autoconfig 2025-02-14 01:36:21 -08:00
Enrico Ros fc6d485fa3 LLMs: adapt PersonaSelector 2025-02-13 22:55:16 -08:00
Enrico Ros 0ed2e7e175 LLMs: remove useChatLLM for good 2025-02-13 22:55:16 -08:00
Enrico Ros cb0a54fe2b LLMs: bits 2025-02-13 21:06:00 -08:00
Enrico Ros d9cf91d2f0 LLMs: port useFormRadioLlmType 2025-02-13 21:03:10 -08:00
Enrico Ros 3ec820f212 LLMs: ModelsList for domains 2025-02-13 20:49:19 -08:00
Enrico Ros 474f743d28 LLMs: roll models 2025-02-13 20:39:22 -08:00
Enrico Ros 3f1b508752 LLMs: update the select 2025-02-13 20:38:49 -08:00
Enrico Ros 2c49a1d8b9 LLMs: port the llm dropdown 2025-02-13 19:46:09 -08:00
Enrico Ros ab441659b2 LLMs: port select and options 2025-02-13 18:56:56 -08:00
Enrico Ros 84d843b356 LLMs: per-domain configuration 2025-02-13 18:13:16 -08:00
Enrico Ros 9b3af38326 Models: update benchmark scores 2025-02-13 15:23:51 -08:00
Enrico Ros 8226a638d9 ModelAux: disable button (prob no effect) 2025-02-13 15:06:10 -08:00
Enrico Ros 4cd2c5878c LLMs: rename .service.types 2025-02-12 13:42:49 -08:00
Enrico Ros 8242198068 LLMs: extract assignments slice 2025-02-12 10:36:04 -08:00
Enrico Ros 59be5dc807 Update MCT 2025-02-12 09:46:39 -08:00
Enrico Ros de6b6012ba AiFn: disabled summarize 2025-02-12 09:33:58 -08:00
Enrico Ros 5928c84cf4 Chat AI: change utility model 2025-02-12 01:37:35 -08:00
Enrico Ros b393469584 LLM Select: ensure a min width of 96px, and break words if push comes to shove 2025-02-12 01:24:08 -08:00
Enrico Ros 6f5cef3a6c Wizard: support 'defaults' 2025-02-12 01:03:38 -08:00
Enrico Ros 5234d78719 LocalAI: fix a p > div 2025-02-11 22:52:11 -08:00
Enrico Ros aebe64ef3d Wizard: support Local vendors 2025-02-11 22:51:53 -08:00
Enrico Ros 224a40dcb7 Wizard: improve first time experience 2025-02-11 20:39:19 -08:00
Enrico Ros 5ddb6bf718 LocalAI: large UI improvement 2025-02-11 20:39:04 -08:00
Enrico Ros 11cb61874d Ollama: improve type 2025-02-11 20:38:56 -08:00
Enrico Ros 00ed22ad28 Mistral: improve 2025-02-11 20:38:44 -08:00
Enrico Ros e263922b43 Anthropic: minor status message update 2025-02-11 19:47:00 -08:00
Enrico Ros a4172a74d1 StorageUtils: improve display 2025-02-11 19:44:10 -08:00
Enrico Ros b1fb2aeeb3 Wizard: improve selectors 2025-02-11 19:26:20 -08:00
Enrico Ros 4f3c2b7b8c Reconfigure All Models on hash changes 2025-02-11 17:06:16 -08:00
Enrico Ros ec493ee91b Wizard: Models 2025-02-11 17:04:11 -08:00
Enrico Ros 2200bb9ee8 Anthropic: less intrusive fallback message 2025-02-11 14:59:06 -08:00
Enrico Ros 588129436d Push down: cml background 2025-02-11 13:22:49 -08:00
Enrico Ros fed51d9959 Add icon 2025-02-11 12:41:40 -08:00
Enrico Ros e6af5e77f8 Models modal: simplify (disable the 'all services' button) 2025-02-11 12:40:29 -08:00
Enrico Ros 2eb230d366 Models list: verbiage 2025-02-11 12:39:47 -08:00
Enrico Ros a66ecd7660 Modal: add darken bottom 2025-02-11 12:39:38 -08:00
Enrico Ros 46a9459b7d LocalAI: mark one more 2025-02-11 12:23:11 -08:00
Enrico Ros 0a34dae6c0 Models config: improve add service ux 2025-02-11 12:23:03 -08:00
Enrico Ros 2209a76f25 Models config: improve costs display again 2025-02-11 12:22:50 -08:00
Enrico Ros ba2e27dc7e Models config: improve costs display 2025-02-11 11:57:06 -08:00
Enrico Ros 5f5cedb428 Models config: small ux fix 2025-02-11 11:34:54 -08:00
Enrico Ros a4da127078 Merge pull request #754 from jayrepo/patch-2
link ssl3 for builder
2025-02-11 08:49:29 -08:00
Jay Chen 109d0ffab6 link ssl3 for builder 2025-02-11 23:19:52 +08:00
Enrico Ros 3af2eb1b59 Gemini: update models 2025-02-10 00:45:41 -08:00
Enrico Ros 51d3f37058 Small ux hint 2025-02-09 21:41:28 -08:00
Enrico Ros 3b76018db9 LLMs: OpenAI: decouple reasoning effort and restore markdown 2025-02-09 21:33:45 -08:00
Enrico Ros 271d42c09f OpenAI: restore markdown even if missing developer messages 2025-02-09 21:26:56 -08:00
Enrico Ros ddfb7f0e88 Attach: auto-detect simplify (one button instead of N) 2025-02-07 04:24:36 -08:00
Enrico Ros 3cb8ce1b3b Attach: auto-detect URLs 2025-02-07 04:02:54 -08:00
Enrico Ros 42b00f4942 Ollama: JSON mode is dangerous, say it. Fixes #749 2025-02-03 17:30:34 -08:00
Enrico Ros 749c7ce796 OpenAI: chatgpt-4o-latest doesn't support tools 2025-02-03 11:40:58 -08:00
Enrico Ros 27ff214d04 ChatDrawer: sync once a minute so we don't get unexpected regroup flashes 2025-02-02 09:59:23 -08:00
Enrico Ros 46ff3c293a Fix link 2025-02-02 09:52:52 -08:00
Enrico Ros c034e9f2ee Link FAQs 2025-02-02 09:48:23 -08:00
Enrico Ros b2c5cebc08 Update help-faq.md 2025-02-02 09:26:08 -08:00
Enrico Ros 0017a6b0f9 Create help-faq.md 2025-02-02 09:17:28 -08:00
Enrico Ros a2c9df06de Quick update 2025-02-01 09:18:22 -08:00
Enrico Ros 4152510452 Update README.md 2025-01-31 20:27:26 -08:00
Enrico Ros d253f7279a LocalAI: improve naming, interfaces 2025-01-31 20:01:04 -08:00
Enrico Ros b186caa1d0 Mo ar re al 2025-01-31 18:14:43 -08:00
Enrico Ros f99ac2f471 Update README.md 2025-01-31 18:11:58 -08:00
Enrico Ros 409af6e23e Update README.md 2025-01-31 18:09:04 -08:00
Enrico Ros 36d81e027b Composer: fix dependency 2025-01-31 16:57:02 -08:00
Enrico Ros 2a0cb6125a Thinking: auto-detect blocks 2025-01-31 16:54:16 -08:00
Enrico Ros b65ef1289a Gemini: undocumented safety 2025-01-31 16:35:55 -08:00
Enrico Ros e67f1fb974 Perplexity: add Sonar Reasoning 2025-01-31 15:17:42 -08:00
Enrico Ros 292d7c9e05 Beam: brain-ready 2025-01-31 14:46:42 -08:00
Enrico Ros 617cb79299 DeepSeek: reasoning hint 2025-01-31 14:43:27 -08:00
Enrico Ros dbad11ad9a OpenAI o3: namings. Support complete. 2025-01-31 12:58:27 -08:00
Enrico Ros 04cb6d2538 OpenAI o3: max_completion_tokens and developer message 2025-01-31 12:56:33 -08:00
Enrico Ros b6ff3852a0 OpenAI o3: strip images 2025-01-31 12:55:53 -08:00
Enrico Ros 70a68bb676 OpenAI: models change visibility 2025-01-31 12:50:39 -08:00
Enrico Ros e04fc80b62 OpenAI: models sorting 2025-01-31 12:50:27 -08:00
Enrico Ros 35d63e7894 OpenAI: sorted models 2025-01-31 12:33:58 -08:00
Enrico Ros 9e71358ae2 OpenAI o3: models update 2025-01-31 12:27:35 -08:00
Enrico Ros 0891b103e0 Ctrl+L: attach web link 2025-01-31 12:00:16 -08:00
Enrico Ros 2480904929 Docs: add a Data Ownership guide 2025-01-31 11:13:33 -08:00
Enrico Ros da903d1879 Fix Mobile Open Pane unnecessary padding 2025-01-31 09:23:03 -08:00
Enrico Ros eafc009ff0 Optima: optimize, add 'gone' functionality 2025-01-31 08:51:08 -08:00
Enrico Ros 3023bcaf95 /tools folder 2025-01-30 22:03:15 -08:00
Enrico Ros 2d29953318 1.92.0-RC1 2025-01-30 22:02:58 -08:00
Enrico Ros 6b9ec4bc05 Fix Autocomplete issue 2025-01-30 16:43:59 -08:00
Enrico Ros 540176059a Add Mistral-3 (24B) 2025-01-30 14:57:18 -08:00
Enrico Ros 9051354c58 Mistral: hide symlinks 2025-01-30 13:00:47 -08:00
Enrico Ros 26985aeacb Mistral: update models 2025-01-30 12:58:42 -08:00
Enrico Ros c2a84c7f93 Autocomplete the tags 2025-01-29 14:06:21 -08:00
Enrico Ros 51975f6748 Ollama: match vision support 2025-01-29 13:21:51 -08:00
Enrico Ros 6fdc16c33f Ollama: update models 2025-01-29 13:10:22 -08:00
Enrico Ros ed4f347563 Ollama: add description 2025-01-29 12:41:46 -08:00
Enrico Ros a1cdb3b273 OpenPipe: extract models file 2025-01-29 11:52:32 -08:00
Enrico Ros 8b8088b74a Azure: move models function 2025-01-29 11:52:19 -08:00
Enrico Ros 94e9f2678d Together: note 2025-01-29 11:49:45 -08:00
Enrico Ros 05965e749a OpenRouter: extract models functions 2025-01-29 11:49:39 -08:00
Enrico Ros 1a9cea263f Relax status check for Azure OpenAI. Fixes #744 2025-01-29 10:39:59 -08:00
Enrico Ros 966c402ecc Deepseek: fix assistant message alternation 2025-01-28 23:50:24 -08:00
Enrico Ros d5e0a3e4f6 Deepseek: better namings 2025-01-28 23:41:53 -08:00
Enrico Ros 2fafca7dfd OpenRouter: support reasoning sideband 2025-01-28 23:41:42 -08:00
Enrico Ros bfbd1bcfed OpenRouter: update visibility 2025-01-28 23:20:19 -08:00
Enrico Ros c1d476a991 Together: update models 2025-01-28 23:20:12 -08:00
Enrico Ros f7b78ca855 Together: update models 2025-01-28 23:08:12 -08:00
Enrico Ros 0e1429b604 Together: update parsers 2025-01-28 23:05:22 -08:00
Enrico Ros 57f2ca6460 Groq: update models 2025-01-28 22:19:49 -08:00
Enrico Ros e1d8dabd3d Groq: extract models 2025-01-28 21:57:38 -08:00
Enrico Ros d498287f76 Fix env var parsing 2025-01-28 21:43:36 -08:00
Enrico Ros 8a3026e43e Roll packages 2025-01-28 21:40:03 -08:00
Enrico Ros 133f26c691 Optimize: StatusBar 2025-01-28 21:37:23 -08:00
Enrico Ros 9b169d1f43 Pmix: reduce verbosity 2025-01-28 21:27:37 -08:00
Enrico Ros 2c331f9a65 useDebugHook: invert params 2025-01-28 21:27:26 -08:00
Enrico Ros b9e8559002 Optimize BlockPartModelAux 2025-01-28 21:14:15 -08:00
Enrico Ros a8f843fea5 Fix reasoning chip outline 2025-01-28 21:11:22 -08:00
Enrico Ros a0da3b564f Beam: fix z-index 2025-01-28 21:10:48 -08:00
Enrico Ros bdc5e09ecc ChatMessageList: cleanup 2025-01-25 10:12:43 -08:00
Enrico Ros d88e16dccf CSV: switch to a newer lib 2025-01-24 00:35:47 -08:00
Enrico Ros 77680fcdc9 CSV: Improve Buttons 2025-01-24 00:35:43 -08:00
Enrico Ros 6afcc42c38 Beam: fix exposition of multiple fragments. 2025-01-23 14:57:25 -08:00
Enrico Ros 0bf7b86217 Reasoning style. 2025-01-23 14:48:05 -08:00
Enrico Ros fa306338aa Largely disable the NoVoid duplications (fragments, messages, conversations) 2025-01-23 13:42:33 -08:00
Enrico Ros 5921a099d9 Roll packages 2025-01-23 13:31:01 -08:00
Enrico Ros e6dd1f0c48 Inline Thinking Fragments 2025-01-23 09:54:18 -08:00
Enrico Ros ae8602a769 Gemini: show thoughts 2025-01-23 08:36:46 -08:00
Enrico Ros 8d86636a95 Gemini: access that can switch to v1alpha 2025-01-23 08:25:49 -08:00
Enrico Ros 87a9191013 Gemini: update Models 2025-01-23 08:25:49 -08:00
Enrico Ros e847933c3c AIX: Gemini: dispatch vndGeminiShowThoughts 2025-01-23 08:25:49 -08:00
Enrico Ros ad7280c065 LLMs: Parameters: add Google CoT 2025-01-23 08:24:42 -08:00
Enrico Ros b124bac190 LLMs: Parameters: apply initial values where Defined 2025-01-23 08:24:32 -08:00
Enrico Ros 6f926f4849 Perplexity: update models 2025-01-23 06:38:35 -08:00
Enrico Ros 48df9d4af6 Deep roll. 2025-01-21 18:35:01 -08:00
Enrico Ros a5d0c183a7 Roll packages 2025-01-21 18:28:31 -08:00
Enrico Ros 37354484c2 Cherry-picked the 1.16.9 release, to update GitHub desc. 2025-01-21 18:22:09 -08:00
Enrico Ros eeae13d4ba v1-dev has been fully obsoleted and removed. v1-stable is the stable v1, for Docker and big-agi.com, while v2-dev will soon become the new stable and replace v1-stable. 2025-01-21 18:19:46 -08:00
Enrico Ros c84b474632 Improve bug reporting. Require where. 2025-01-21 17:41:35 -08:00
Enrico Ros a207030899 Move puppeteer-core to non-dev dependency. Fixes #732 2025-01-21 17:38:56 -08:00
Enrico Ros b97e28ad3b YouTube transcripts: improve module 2025-01-21 17:35:16 -08:00
Enrico Ros b307adda99 DeepSeek: don't disable FC 2025-01-20 20:44:15 -08:00
Enrico Ros 069421f47a Roll AIX for reasoning & temperature changes 2025-01-20 09:31:43 -08:00
Enrico Ros 8f1a11757f DeepSeek: update Reasoner Short name. Fixes #726 2025-01-20 09:13:46 -08:00
Enrico Ros 3fa5f07f51 RenderPlainText: opti 2025-01-20 09:13:46 -08:00
Enrico Ros 8b8a200b83 Render ModelAux Void (reasoning) fragments 2025-01-20 09:13:46 -08:00
Enrico Ros 2c87d3e714 AIX & Fragments: ModelAux Void Part 2025-01-20 08:15:28 -08:00
Enrico Ros ddf3b54917 AIX: Deepseek: parse and transmit reasoning text 2025-01-20 07:28:10 -08:00
Enrico Ros 846da8e17d AIX: client, hotfix-no-temperature 2025-01-20 07:13:08 -08:00
Enrico Ros 0d0d414fc8 AIX: dispatches: optional temperature 2025-01-20 07:13:08 -08:00
Enrico Ros 0c01bce460 DeepSeek: R1 model 2025-01-20 07:13:08 -08:00
Enrico Ros 37c83ce039 LLMs: disable temperature editing by hotfix-interface 2025-01-20 07:13:08 -08:00
Enrico Ros 9e504d577e LLMs: hotfix no-temperature and nullable temperature 2025-01-20 06:58:54 -08:00
Enrico Ros ab70692c49 Mermaid: stop double quoting 2025-01-16 22:39:18 -08:00
Enrico Ros d48f594147 SVG: fix <xml>\n<svg> parsing 2025-01-16 22:39:08 -08:00
Enrico Ros 3e4e634c97 useLLMChain: make the callback optional 2025-01-15 20:04:45 -08:00
Enrico Ros 0e17a0bcd0 Roll packages deep 2025-01-15 19:29:04 -08:00
Enrico Ros 32e0d32dea Roll packages 2025-01-15 19:27:44 -08:00
Enrico Ros 1ecf355346 Relocate YouTubeURLInput 2025-01-15 19:26:13 -08:00
Enrico Ros 2ff15b54af OptimaDrawerHeader: support title click 2025-01-15 19:24:25 -08:00
Enrico Ros 30ac3f8c0a AIX: OpenRouter: multi-key support. See #653 2025-01-15 01:11:20 -08:00
595 changed files with 40358 additions and 12354 deletions
@@ -0,0 +1,20 @@
---
description: Increment the AIX monotonic version number
allowed-tools: Bash(git add:*),Bash(git status:*),Bash(git commit:*),Edit,Write
model: haiku
disable-model-invocation: true
---
Increment `Monotonics.Aix` in `src/common/app.release.ts` and commit it.
**Pre-flight checks (MUST pass or abort):**
1. Run `git branch --show-current` - MUST be on `main` branch
2. Run `git status src/common/app.release.ts` - file MUST be unmodified (no changes on this specific file)
**Execute:**
1. Read current `Monotonics.Aix` value from `src/common/app.release.ts`
2. Increment by 1
3. Update ONLY that line
4. Run: `git add src/common/app.release.ts && git commit -m "Roll AIX"`
Confirm new version number.
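The increment step can be sketched as a small string transform. This is a hypothetical `bumpAix` helper operating on the file's source text, assuming the `Monotonics.Aix` field is written as `Aix: <number>`; the actual edit is performed by the agent via the allowed Edit/Write tools.

```javascript
// Hypothetical sketch: bump the Monotonics.Aix number inside the
// source text of src/common/app.release.ts (assumed field shape).
function bumpAix(source) {
  return source.replace(/(\bAix:\s*)(\d+)/, (_m, prefix, n) => prefix + (Number(n) + 1));
}

// Example with an assumed file shape:
console.log(bumpAix('export const Monotonics = { Aix: 41 };'));
// → export const Monotonics = { Aix: 42 };
```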
@@ -0,0 +1,31 @@
---
description: Sync Anthropic API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for Anthropic: message wire types `src/modules/aix/server/dispatch/wiretypes/anthropic.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/anthropic.messageCreate.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/anthropic.parser.ts`.
IMPORTANT: we only support the Messages API (message create). We do NOT support other APIs such as the older Completions API.
We support Anthropic caching natively, and want to make sure tools and state (crafting the history) are also done well.
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources:**
- Docs API: https://docs.claude.com/en/api/messages
- Release notes: https://docs.claude.com/en/release-notes/api
- Tools use: https://docs.claude.com/en/docs/agents-and-tools/tool-use/overview
- Handling stop reasons: https://docs.claude.com/en/api/handling-stop-reasons
**Alternative Sources if primary blocked:**
- Anthropic TypeScript SDK: https://github.com/anthropics/anthropic-sdk-typescript
- Anthropic Python SDK: https://github.com/anthropics/anthropic-sdk-python
- Recent news and announcements: Web Search for "anthropic api changelog" or "new claude api" or "new claude api pricing"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully for any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API, whether in the final parsing and reassembly of the streaming message, a changed protocol, or elsewhere.
Prioritize breaking changes and new capabilities that would improve the user experience.
@@ -0,0 +1,30 @@
---
description: Sync Google Gemini API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for Google Gemini: message wire types `src/modules/aix/server/dispatch/wiretypes/gemini.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/gemini.generateContent.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/gemini.parser.ts`.
IMPORTANT: we only support the generateContent API, not other Gemini APIs such as embeddings, etc.
Caching is only supported implicitly; we do not explicitly manage Gemini Caches. The same applies to file uploads and other subsystems.
Image generation happens through models, e.g. 'Gemini 2.5 Flash - Nano Banana' generates images using AIX from generateContent (chat input).
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources:**
- Docs API 1/2: https://ai.google.dev/api/generate-content
- Docs API 2/2: https://ai.google.dev/api/caching#Content
- Release notes: https://ai.google.dev/gemini-api/docs/changelog
**Alternative Sources if primary blocked:**
- Google AI JavaScript SDK: https://github.com/googleapis/js-genai (check latest commits, README, type definitions)
- Recent news and announcements: Web Search for "gemini api changelog" or "new gemini api updates" or "new gemini api pricing"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully for any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API, whether in the final parsing and reassembly of the streaming message, a changed protocol, or elsewhere.
Prioritize breaking changes and new capabilities that would improve the user experience.
@@ -0,0 +1,34 @@
---
description: Sync OpenAI API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Please take a look at my API code for OpenAI: message wire types `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts`, assembly of the request messages (adapters) `src/modules/aix/server/dispatch/chatGenerate/adapters/openai.chatCompletions.ts`, and parsing of the response in streaming or not `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.parser.ts`.
IMPORTANT: we prioritize the new Responses API, while Chat Completions is still supported but legacy.
We do NOT support other APIs such as Realtime (incl. websockets), etc.
We also do not support Agentic APIs (Agent SDK, AgentKit, ChatKit, Assistants API etc), as we perform similar functionality in AIX (server or client side).
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources:**
- Responses API (AIX prioritizes it): https://platform.openai.com/docs/api-reference/responses/create
- Chat Completions API: https://platform.openai.com/docs/api-reference/chat/create
- Changelog: https://platform.openai.com/docs/changelog
- Models: https://platform.openai.com/docs/models
- Pricing (use Copy Page button to download markdown): https://platform.openai.com/docs/pricing
**Alternative Sources if primary blocked:**
- OpenAI Node.js SDK: https://github.com/openai/openai-node
- OpenAI Python SDK: https://github.com/openai/openai-python
- OpenAI OpenAPI spec: https://github.com/openai/openai-openapi
- Recent news and announcements: Web Search for "openai api changelog" or "openai new models" or "openai new prices"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully for any discrepancies in the protocols, the available API surface, the structure of the messages, functionality, logic, etc.
Make sure you look deep in the fields of the requests and responses, especially required fields, streaming event types, and any new response shapes.
Please point out all of the differences in the API, whether in the final parsing and reassembly of the streaming message, a changed protocol, or elsewhere.
Prioritize breaking changes and new capabilities that would improve the user experience.
@@ -0,0 +1,20 @@
---
description: Update Alibaba model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/alibaba.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models & Pricing: https://www.alibabacloud.com/help/en/model-studio/models
- Billing Guide: https://www.alibabacloud.com/help/en/model-studio/billing-for-model-studio
**Fallbacks if blocked:**
- Search "alibaba model studio latest pricing", "alibaba latest models", "qwen models pricing", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,20 @@
---
description: Update Anthropic model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/anthropic/anthropic.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models: https://docs.claude.com/en/docs/about-claude/models/overview
- Pricing: https://claude.com/pricing#api
- Deprecations: https://docs.claude.com/en/docs/about-claude/model-deprecations
**Fallbacks if blocked:** Check Anthropic TypeScript SDK at https://github.com/anthropics/anthropic-sdk-typescript, search "anthropic models latest pricing", "anthropic latest models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,22 @@
---
description: Update DeepSeek model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/deepseek.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Pricing: https://api-docs.deepseek.com/quick_start/pricing
- Model List: https://api-docs.deepseek.com/api/list-models
- Release Notes: https://api-docs.deepseek.com/updates (check for version updates like V3.2-Exp)
**Note:** DeepSeek frequently releases new versions with significant pricing changes. Always check release notes first.
**Fallbacks if blocked:** Search "deepseek api latest pricing", "deepseek latest models", "deepseek models list" or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,21 @@
---
description: Update Gemini model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/gemini/gemini.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.types.ts`, `src/modules/llms/server/llm.server.types.ts`, and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models: https://ai.google.dev/gemini-api/docs/models
- Pricing: https://ai.google.dev/gemini-api/docs/pricing
- Changelog: https://ai.google.dev/gemini-api/docs/changelog
**Fallbacks if blocked:** Check Google AI JS SDK at https://github.com/googleapis/js-genai, search "gemini models latest pricing", "gemini latest models", or search GitHub for latest model prices and context windows
**Important:**
- Ignore context windows (auto-determined at runtime) and training cutoffs (not supported)
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review, do NOT remove comments
- Flag broken links or unexpected content
@@ -0,0 +1,19 @@
---
description: Update Groq model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/groq.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models: https://console.groq.com/docs/models
- Pricing: https://groq.com/pricing/
**Fallbacks if blocked:** Search "groq models latest pricing", "groq latest models", "groq api models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,24 @@
---
description: Update Mistral model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/mistral.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models: https://docs.mistral.ai/getting-started/models/models_overview/
- Pricing: https://mistral.ai/pricing#api-pricing
- Changelog: https://docs.mistral.ai/getting-started/changelog/
**Fallbacks if blocked:**
- Search "mistral [model-name] latest pricing", "mistral api latest pricing", "mistral latest models", or search GitHub for latest model prices and context windows
- Cross-reference: pricepertoken.com, helicone.ai, artificialanalysis.ai
- Check Mistral API list models response
- As last resort: Use Chrome DevTools MCP to render pricing table
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,40 @@
---
description: Update Ollama model definitions with latest featured models
---
Update `src/modules/llms/server/ollama/ollama.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Automated Workflow:**
```bash
# 1. Fetch the HTML
curl -s "https://ollama.com/library?sort=featured" -o /tmp/ollama-featured.html
# 2. Parse it with the script
node .claude/scripts/parse-ollama-models.js > /tmp/ollama-parsed.txt 2>&1
# 3. Review the parsed output
cat /tmp/ollama-parsed.txt
```
The parser outputs: `modelName|pulls|capabilities|sizes`
- Example: `deepseek-r1|66200000|tools,thinking|1.5b,7b,8b,14b,32b,70b,671b`
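If a downstream step needs the parsed output as structured data, one pipe-delimited line can be split back into fields like this (a hypothetical helper, not part of the repo):

```javascript
// Hypothetical: turn one `modelName|pulls|capabilities|sizes` line into an object.
function parseModelLine(line) {
  const [name, pulls, caps, sizes] = line.split('|');
  return {
    name,
    pulls: Number(pulls),
    capabilities: caps ? caps.split(',') : [],
    sizes: sizes ? sizes.split(',') : [],
  };
}

console.log(parseModelLine('deepseek-r1|66200000|tools,thinking|1.5b,7b,8b,14b,32b,70b,671b'));
```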
**Primary Sources:**
- Model Library: https://ollama.com/library?sort=featured
- Parser script: `.claude/scripts/parse-ollama-models.js`
**Fallbacks if blocked:** Check https://github.com/ollama/ollama, search "ollama featured models", "ollama latest models", or search GitHub for latest model info
**Important:**
- Skip models below 50,000 pulls (parser does this automatically)
- Sort them in the EXACT same order as the source (featured models)
- Extract tags: 'tools' → hasTools, 'vision' → hasVision, 'embedding' → isEmbeddings (note the 's'), 'thinking' → tags only
- Extract 'b' tags (1.5b, 7b, 32b) to tags field
- Set today's date (YYYYMMDD format) for newly added models only
- Update OLLAMA_LAST_UPDATE constant to today's date
- Do NOT change dates of existing models
- Review the full model list for additions, removals, and changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments and newlines to make diffs easy to review
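The capability-to-flag mapping in the list above can be sketched as follows; the helper and its return shape are hypothetical, with field names taken from the bullet rather than any confirmed schema:

```javascript
// Hypothetical mapping from parsed capability strings to model flags,
// mirroring the extraction rules above ('thinking' stays a tag, so no flag here).
function capsToFlags(capabilities) {
  return {
    hasTools: capabilities.includes('tools'),
    hasVision: capabilities.includes('vision'),
    isEmbeddings: capabilities.includes('embedding'), // note the trailing 's' on the flag
  };
}

console.log(capsToFlags(['tools', 'thinking']));
```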
@@ -0,0 +1,26 @@
---
description: Update OpenAI model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/openai.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Manual hint:** For the pricing page, expand all tables before copying content.
**Primary Sources:**
- Models: https://platform.openai.com/docs/models (use Copy Page button)
- Pricing: https://platform.openai.com/docs/pricing (expand tables first)
**Known Issue:** OpenAI docs block automated access (403 Forbidden). Manual browser access required.
**Fallbacks if blocked:**
- Search "openai models latest pricing", "openai latest models" for third-party aggregators, or search GitHub for latest model prices and context windows
- OpenAI Node SDK (https://github.com/openai/openai-node) has limited model metadata only
- As last resort: Use Chrome DevTools MCP to navigate and extract from official docs
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,19 @@
---
description: Update OpenPipe model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/openpipe.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Base Models: https://docs.openpipe.ai/base-models
- Pricing: https://docs.openpipe.ai/pricing/pricing
**Fallbacks if blocked:** Search "openpipe models latest pricing", "openpipe latest models", "openpipe base models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,20 @@
---
description: Update Perplexity model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/perplexity.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models: https://docs.perplexity.ai/getting-started/models
- Pricing: https://docs.perplexity.ai/getting-started/pricing
- Changelog: https://docs.perplexity.ai/changelog/changelog
**Fallbacks if blocked:** Search "perplexity api latest pricing", "perplexity latest models", or search GitHub for latest model prices and context windows
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,23 @@
---
description: Update xAI model definitions with latest pricing and capabilities
---
Update `src/modules/llms/server/openai/models/xai.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.data.ts` for context only. Focus on the model file, do not descend into other code.
**Primary Sources:**
- Models & Pricing: https://docs.x.ai/docs/models?cluster=us-east-1#detailed-pricing-for-all-grok-models
**Known Issue:** docs.x.ai blocks automated access (403 Forbidden). Use fallbacks below.
**Fallbacks if blocked:**
- Search "xai grok latest pricing", "xai latest models", "xai api models", or search GitHub for latest model prices and context windows
- Third-party pages (may be stale): https://the-rogue-marketing.github.io/grok-api-latest-llms-pricing-october-2025/ (look for a newer version), https://langdb.ai/app/providers/xai/ (browse by model, limited coverage)
- As last resort: Use Chrome DevTools MCP to access docs.x.ai
**Important:**
- Review the full model list for additions, removals, and price changes
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,81 @@
#!/usr/bin/env node
/**
* Parse Ollama featured models from HTML
*
* Usage:
* 1. Fetch HTML: curl -s "https://ollama.com/library?sort=featured" -o /tmp/ollama-featured.html
* 2. Parse: node .claude/scripts/parse-ollama-models.js
*
* Outputs: pipe-delimited format: modelName|pulls|capabilities|sizes
* Example: deepseek-r1|66200000|tools,thinking|1.5b,7b,8b,14b,32b,70b,671b
*/
const fs = require('fs');

const htmlPath = process.argv[2] || '/tmp/ollama-featured.html';

if (!fs.existsSync(htmlPath)) {
  console.error(`Error: HTML file not found at ${htmlPath}`);
  console.error('Please fetch it first with:');
  console.error('  curl -s "https://ollama.com/library?sort=featured" -o /tmp/ollama-featured.html');
  process.exit(1);
}

const html = fs.readFileSync(htmlPath, 'utf8');

// Split into model sections - each starts with <a href="/library/
const modelSections = html.split(/<a href="\/library\//);
const models = [];

for (let i = 1; i < modelSections.length; i++) {
  const section = modelSections[i].substring(0, 5000); // Large enough window to capture all data

  // Extract model name (first quoted string)
  const nameMatch = section.match(/^([^"]+)"/);
  if (!nameMatch) continue;
  const name = nameMatch[1];

  // Extract pulls using x-test-pull-count
  const pullsMatch = section.match(/x-test-pull-count>([^<]+)</);
  let pulls = 0;
  if (pullsMatch) {
    const pullStr = pullsMatch[1].replace(/,/g, '');
    if (pullStr.includes('M')) {
      pulls = Math.floor(parseFloat(pullStr) * 1000000);
    } else if (pullStr.includes('K')) {
      pulls = Math.floor(parseFloat(pullStr) * 1000);
    } else {
      pulls = parseInt(pullStr, 10);
    }
  }

  // Extract capabilities (tools, vision, embedding, thinking, cloud)
  const capabilities = [];
  const capabilityRegex = /x-test-capability[^>]*>([^<]+)</g;
  let capMatch;
  while ((capMatch = capabilityRegex.exec(section)) !== null) {
    capabilities.push(capMatch[1].trim());
  }

  // Extract sizes (1.5b, 7b, etc.)
  const sizes = [];
  const sizeRegex = /x-test-size[^>]*>([^<]+)</g;
  let sizeMatch;
  while ((sizeMatch = sizeRegex.exec(section)) !== null) {
    sizes.push(sizeMatch[1].trim());
  }

  // Only include models with 50K+ pulls
  if (pulls >= 50000) {
    models.push({ name, pulls, capabilities, sizes });
  }
}

// Output in pipe-delimited format (in the order they appear on the page)
models.forEach(m => {
  const caps = m.capabilities.join(',');
  const tags = m.sizes.join(',');
  console.log(`${m.name}|${m.pulls}|${caps}|${tags}`);
});

console.error(`\nTotal models with 50K+ pulls: ${models.length}`);
+35
@@ -0,0 +1,35 @@
{
  "permissions": {
    "allow": [
      "Bash(cat:*)",
      "Bash(cp:*)",
      "Bash(find:*)",
      "Bash(git branch:*)",
      "Bash(git describe:*)",
      "Bash(git log:*)",
      "Bash(git show:*)",
      "Bash(grep:*)",
      "Bash(ls:*)",
      "Bash(mkdir:*)",
      "Bash(node:*)",
      "Bash(npm install)",
      "Bash(npm install:*)",
      "Bash(npm run:*)",
      "Bash(npx tsc:*)",
      "Bash(rg:*)",
      "Bash(rm:*)",
      "Bash(sed:*)",
      "WebFetch",
      "WebFetch(domain:big-agi.com)",
      "WebSearch",
      "mcp__chrome-devtools",
      "mcp__github",
      "mcp__ide__getDiagnostics"
    ],
    "deny": [
      "Read(node_modules)",
      "Read(node_modules/**)"
    ]
  }
}
+19 -2
@@ -5,14 +5,29 @@ labels: [ 'type: bug' ]
body:
- type: markdown
attributes:
value: Thank you for reporting a bug.
value: Thank you for reporting a bug. Please help us by providing accurate environment information.
- type: dropdown
attributes:
label: Environment
description: (required) Where are you experiencing this issue?
options:
- Big-AGI Pro (big-agi.com)
- Self-deployed from GitHub
- Docker container (specify in description)
- Local development
- Other
validations:
required: true
- type: textarea
attributes:
label: Description
description: (required) Please provide a clear description. Please also provide the steps to reproduce.
description: (required) Please provide a clear description and **steps to reproduce**.
placeholder: 'Concise description + steps to reproduce.'
validations:
required: true
- type: textarea
attributes:
label: Device and browser
@@ -20,10 +35,12 @@ body:
placeholder: 'Device: (e.g., iPhone 16, Pixel 9, PC, Macbook...), OS: (e.g., iOS 17, Windows 12), Browser: (e.g., Chrome 119, Safari 18, Firefox..)'
validations:
required: true
- type: textarea
attributes:
label: Screenshots and more
placeholder: 'Attach screenshots, or add any additional context here.'
- type: checkboxes
attributes:
label: Willingness to Contribute
@@ -32,7 +32,6 @@ assignees: enricoros
- [ ] verify deployment on Vercel
- [ ] verify container on GitHub Packages
- [ ] update the GitHub release
- [ ] push as stable `git push opensource main:main-stable`
- Announce:
- [ ] Discord announcement
- [ ] Twitter announcement
@@ -51,7 +50,7 @@ To familiarize yourself with the application, the following are the Website and
```
- paste the URL: https://big-agi.com
- drag & drop: [README.md](https://raw.githubusercontent.com/enricoros/big-AGI/v2-dev/README.md)
- drag & drop: [README.md](https://raw.githubusercontent.com/enricoros/big-AGI/main/README.md)
```markdown
I am announcing a new version, 1.2.3.
+57
@@ -0,0 +1,57 @@
name: Claude Code DM

on:
  issues:
    types: [opened, assigned]
  issue_comment:
    types: [created]
  pull_request_review:
    types: [submitted]
  pull_request_review_comment:
    types: [created]

jobs:
  claude-dm:
    if: |
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) ||
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude'))
    runs-on: ubuntu-latest
    timeout-minutes: 20
    permissions:
      contents: read
      pull-requests: write
      issues: write
      id-token: write
      actions: read # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Run Claude Code DM Response
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          # Security: Only users with write access can trigger (DMs allow code execution)
          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: read
          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          # claude_args: '--allowed-tools Bash(gh pr:*)'
          # disabling opus for now claude-opus-4-1-20250805
          claude_args: |
            --model claude-sonnet-4-5-20250929
            --max-turns 100
            --allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm install),Bash(npm install:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools,SlashCommand"
+71
@@ -0,0 +1,71 @@
name: Claude Code Auto-Triage Issues

on:
  issues:
    types: [ opened, assigned ]

jobs:
  claude-issue-triage:
    # Optional: Skip for bot users and direct mentions in the body (handled by claude-dm.yml)
    if: |
      github.event.issue.user.type != 'Bot' &&
      !contains(github.event.issue.body, '@claude')
    runs-on: ubuntu-latest
    timeout-minutes: 20
    permissions:
      contents: read
      issues: write
      pull-requests: read
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Analyze issue and provide help
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          # Security: Allow any user to trigger triage (automated issue help is safe)
          github_token: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: '*'
          # track_progress: true # Enables tracking comments
          prompt: |
            REPO: ${{ github.repository }}
            ISSUE NUMBER: #${{ github.event.issue.number }}

            A user has reported an issue. Please help them by:

            1. Think deeply about the issue:
               **Understand the problem**: Analyze the issue description and any error messages
               **Search for context**:
               - Use the repository's CLAUDE.md for high-level guidance, and especially the kb/ documentation
               - Look in relevant code files, including kb/ documentation
               **Use web search**: When the problem is potentially outside Big-AGI (e.g. user configuration), search the web for similar errors or related issues
               **Provide a solution**:
               - Provide multiple solutions if uncertain, and say so
               - If you can fix it in code, propose the fix
               - If possible, also suggest fixes or workarounds for immediate relief
               - Reference specific files and line numbers
               - Test selectively, even running `npm install` and the build if needed to verify the solution
            2. Always add the 'claude-triage' issue label to indicate this issue was triaged by Claude
            3. Comment with:
               - A very brief thank-you note, if applicable
               - Initial assessment
               - Next steps or clarification needed
               - Links to duplicates, if found

            If you're uncertain, say so and suggest next steps.
            Be welcoming, helpful, professional, solution-focused and no-BS.
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          claude_args: |
            --model claude-sonnet-4-5-20250929
            --max-turns 60
            --allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm install),Bash(npm install:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools,SlashCommand"
+77
@@ -0,0 +1,77 @@
name: Claude Code PR Review
on:
pull_request:
types: [ opened, synchronize, ready_for_review ]
# Limit branches
branches: [ main, dev, v1 ]
# Optional: Only run on specific file changes
# paths:
# - "src/**/*.ts"
# - "src/**/*.tsx"
jobs:
claude-pr-review:
# Skip draft PRs
# Optional: filter authors: github.event.pull_request.user.login != 'enricoros'
if: |
github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
pull-requests: write
issues: read
id-token: write
actions: read # Required for Claude to read CI results on PRs
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Run PR Review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
# Security: Allow any user to trigger reviews (read-only PR analysis is safe)
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_non_write_users: '*'
# track_progress: true # Enables tracking comments
# This setting allows Claude to read CI results on PRs
additional_permissions: |
actions: read
prompt: |
REPO: ${{ github.repository }}
PR NUMBER: ${{ github.event.pull_request.number }}
Please review this pull request and provide feedback on:
- Potential bugs or issues
- Adherence to Big-AGI architecture and design patterns
- Code quality and best practices, including TypeScript types, error handling, and edge cases
- Performance considerations: bundle size, React patterns, streaming efficiency
- Security concerns if applicable
Use the repository's CLAUDE.md for guidance on style and conventions.
Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
Use `gh pr review comment` for inline suggestions on specific lines.
IMPORTANT: After completing your review, always add the 'claude-review' label to the PR to indicate it was reviewed by Claude:
gh pr edit ${{ github.event.pull_request.number }} --add-label "claude-review"
Be constructive, helpful, no-BS, and specific with file:line references.
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
claude_args: |
--model claude-sonnet-4-5-20250929
--max-turns 100
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm install),Bash(npm install:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools"
+18 -11
@@ -12,11 +12,9 @@ name: Create and publish Docker images
on:
push:
branches:
- v2-dev
#- v1-dev # Disabled because this is not needed anymore
#- v1-stable # Disabled as the v* tag is used for stable releases
- main # Primary branch (Big-AGI Open)
tags:
- 'v*' # Trigger on version tags (e.g., v1.7.0)
- 'v2.*' # Stable releases (v2.0.0, v2.1.0, etc.)
env:
REGISTRY: ghcr.io
@@ -55,14 +53,21 @@ jobs:
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=development,enable=${{ github.ref == 'refs/heads/v2-dev' }} # For v2-dev branch
type=raw,value=stable,enable=${{ github.ref == 'refs/heads/v1-stable' }}
type=ref,event=tag # Use the tag name as a tag for tag builds
type=semver,pattern={{version}} # Generate semantic versioning tags for tag builds
type=sha,format=short,prefix=sha- # Just in case none of the above applies
# Development: main branch
type=raw,value=development,enable=${{ github.ref == 'refs/heads/main' }}
# Latest: v2.x releases (safe default)
type=raw,value=latest,enable=${{ startsWith(github.ref, 'refs/tags/v2.') }}
# Stable: v2.x releases (alias)
type=raw,value=stable,enable=${{ startsWith(github.ref, 'refs/tags/v2.') }}
# Version tags (v2.0.0, 2.0.0)
type=ref,event=tag
type=semver,pattern={{version}}
labels: |
org.opencontainers.image.title=Big-AGI
org.opencontainers.image.description=Generative AI suite powered by state-of-the-art models
org.opencontainers.image.title=Big-AGI Open
org.opencontainers.image.description=Big-AGI Open - Multi-model AI workspace for experts who need to think broader, decide smarter, and build with confidence.
org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}
org.opencontainers.image.documentation=https://big-agi.com
@@ -77,6 +82,8 @@ jobs:
labels: ${{ steps.meta.outputs.labels }}
build-args: |
NEXT_PUBLIC_GA4_MEASUREMENT_ID=${{ secrets.GA4_MEASUREMENT_ID }}
NEXT_PUBLIC_BUILD_HASH=${{ github.sha }}
NEXT_PUBLIC_BUILD_REF_NAME=${{ github.ref_name }}
# Enable build cache (future)
#cache-from: type=gha
#cache-to: type=gha,mode=max
+241
@@ -0,0 +1,241 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
```bash
# Targeted Code Quality (safe while dev server runs)
npx tsc --noEmit # Type check without building
npx eslint src/path/to/file.ts # Lint specific file
npm run lint # Lint entire project
```
## Architecture Overview
Big-AGI is a Next.js 15 application with a modular architecture built for advanced AI interactions. The codebase follows a three-layer structure with distinct separation of concerns.
### Core Directory Structure
```
/app/api/ # Next.js App Router (API routes only, mostly -> /src/server/)
/pages/ # Next.js Pages Router (file-based, mostly -> /src/apps/)
/src/
├── apps/ # Feature applications (self-contained modules)
├── modules/ # Reusable business logic and integrations
├── common/ # Shared infrastructure and utilities
└── server/ # Backend API layer with tRPC
/kb/ # Knowledge base for modules, architectures
```
### Key Technologies
- **Frontend**: Next.js 15, React 18, Material-UI Joy, Emotion (CSS-in-JS)
- **State Management**: Zustand with localStorage/IndexedDB (single-cell) persistence
- **API Layer**: tRPC with React Query for type-safe communication
- **Runtime**: Edge Runtime for AI operations, Node.js for data processing
### Apps Architecture Pattern
Each app in `/src/apps/` is a self-contained feature module:
- Main component (`App*.tsx`)
- Local state store (`store-app-*.ts`)
- Feature-specific components and layouts
- Runtime configurations
Example apps: `chat/`, `call/`, `beam/`, `draw/`, `personas/`, `settings-modal/`
### Modules Architecture Pattern
Modules in `/src/modules/` provide reusable business logic:
- **`aix/`** - AI communication framework for real-time streaming
- **`beam/`** - Multi-model AI reasoning system (scatter/gather pattern)
- **`blocks/`** - Content rendering (markdown, code, images, etc.)
- **`llms/`** - Language model abstraction supporting 16 vendors
### Key Subsystems & Their Patterns
#### 1. AIX - Real-time AI Communication
**Location**: `/src/modules/aix/`
**Pattern**: Client-server streaming architecture with provider abstraction
- **Client** → tRPC → **Server** → **AI Providers**
- Handles streaming/non-streaming responses with batching and error recovery
- Particle-based streaming: `AixWire_Particles` → `ContentReassembler` → `DMessage`
- Provider-agnostic through adapter pattern (OpenAI, Anthropic, Gemini protocols)
#### 3. Beam - Multi-Model Reasoning
**Location**: `/src/modules/beam/`
**Pattern**: Scatter/Gather for parallel AI processing
- **Scatter**: Multiple models (rays) process input in parallel
- **Gather**: Fusion algorithms combine outputs
- Real-time UI updates via vanilla Zustand stores
- BeamStore per conversation via ConversationHandler
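The scatter/gather shape described above can be sketched as plain TypeScript. This is a conceptual illustration only, not the real `BeamStore` API; `scatter`, `gather`, and `RayResult` are hypothetical names:

```typescript
// Hypothetical types/names for illustration; the real implementation lives in /src/modules/beam/.
type RayResult = { model: string; text?: string; error?: string };

// Scatter: run N model calls in parallel, tolerating individual ray failures.
async function scatter(models: string[], call: (model: string) => Promise<string>): Promise<RayResult[]> {
  const settled = await Promise.allSettled(models.map(m => call(m)));
  return settled.map((s, i) =>
    s.status === 'fulfilled'
      ? { model: models[i], text: s.value }
      : { model: models[i], error: String(s.reason) },
  );
}

// Gather: a trivial "fusion" that keeps only the successful outputs.
function gather(rays: RayResult[]): string {
  return rays.filter(r => r.text).map(r => `[${r.model}] ${r.text}`).join('\n');
}
```

The real fusion algorithms are richer (merge, compare, custom programs), but the key property is the same: a failed ray degrades one output instead of failing the whole beam.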
#### 4. Conversation Management
**Location**: `/src/common/stores/chat/` and `/src/common/chat-overlay/`
**Pattern**: Overlay architecture with handler per conversation
- `ConversationHandler` orchestrates chat, beam, ephemerals
- Per-chat stores: `PerChatOverlayStore` + `BeamStore`
- Message structure: `DMessage` → `DMessageFragment[]`
- Supports multi-pane with independent conversation states
### Storage System
Big-AGI uses a local-first architecture with Zustand + IndexedDB:
- **Zustand** stores for in-memory state management
- **localStorage** for persisting settings and most other stores (via Zustand persist middleware)
- **IndexedDB** for persisting chat-only storage (via Zustand persist middleware) in a single key-value cell
- **Local-first** architecture with offline capability
- **Migration system** for upgrading data structures across versions
Key storage patterns:
- Stores use `createIDBPersistStorage()` for IndexedDB persistence
- Version-based migrations handle data structure changes
- Partialize/merge functions control what gets persisted
- Rehydration logic repairs and upgrades data on load
Located in `/src/common/stores/` with stores like:
- `chat/store-chats.ts`: Conversations and messages
- `llms/store-llms.ts`: Model configurations
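A version-based migration can be sketched as a pure function over the persisted shape. This is a minimal illustration under assumed shapes (`PersistedChatsV1`/`V2` and `migrateChats` are hypothetical, not the real store types):

```typescript
// Hypothetical persisted shapes; the real migrations live next to each store.
interface PersistedChatsV1 { version: 1; chats: { title: string }[] }
interface PersistedChatsV2 { version: 2; chats: { title: string; createdAt: number }[] }

function migrateChats(state: PersistedChatsV1 | PersistedChatsV2): PersistedChatsV2 {
  if (state.version === 2) return state; // already current, nothing to do
  // v1 -> v2: backfill the field that v1 data never had
  return {
    version: 2,
    chats: state.chats.map(c => ({ ...c, createdAt: 0 })),
  };
}
```

Because the function is pure and total over both versions, it is safe to run on every rehydration.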
### Layout System ("Optima")
The Optima layout system provides:
- **Responsive design** adapting desktop/mobile
- **Drawer/Panel/Toolbar** composition
- **Split-pane support** for multi-conversation views
- **Portal-based rendering** for flexible component placement
Located in `/src/common/layout/optima/`
### State Management Patterns
1. **Global Stores** (Zustand with IndexedDB persistence)
- `store-chats`: Conversations and messages
- `store-llms`: Model configurations
- `store-ux-labs`: UI preferences and labs features
2. **Per-Instance Stores** (Vanilla Zustand)
- `store-beam_vanilla`: Beam scatter/gather state
- `store-perchat_vanilla`: Chat overlay state
- High-performance, no React integration
3. **Module Stores**
- Feature-specific configuration and state
- Example: `store-module-beam`, `store-module-t2i`
### User Flows & Interdependencies
#### Chat Message Flow
1. User input → `Composer` → `DMessage` creation
2. `ConversationHandler.messageAppend()` → Store update
3. `_handleExecute()` / `ConversationHandler.executeChatMessages()` → AIX client request
4. AIX streaming → `ContentReassembler` → UI updates
5. Zustand auto-persistence → IndexedDB
#### Beam Multi-Model Flow
1. User triggers Beam → `BeamStore.open()` state update
2. Scatter: Parallel `aixChatGenerateContent()` to N models
3. Real-time ray updates → UI progress
4. Gather: User selects fusion → Combined output
5. Result → New message in conversation
### Development Patterns
#### Module Integration
- Each module exports its functionality through index files
- Modules register with central registries (e.g., `vendors.registry.ts`)
- Configuration objects define module behavior
- Type-safe integration through strict TypeScript interfaces
#### Component Patterns
- **Controlled components** with clear prop interfaces
- **Hook-based logic** extraction for reusability
- **Portal rendering** for overlays and modals
- **Suspense boundaries** for async operations
#### API Patterns
- **tRPC routers** for type-safe API endpoints
- **Zod schemas** for runtime validation
- **Middleware** for request/response processing
- **Edge functions** for performance-critical AI operations
## Security Considerations
- API keys stored client-side in localStorage (user-provided)
- Server-side API keys in environment variables only
- XSS protection through proper content escaping
- No credential transmission to third parties
## Knowledge Base
Architecture and system documentation is available in the `/kb/` knowledge base:
@kb/KB.md
## Common Development Tasks
### Testing & Quality
- Run `npm run lint` before committing
- Type-check with `npx tsc --noEmit`
- Test critical user flows manually
### Adding a New LLM Vendor
1. Create vendor in `/src/modules/llms/vendors/[vendor]/`
2. Implement `IModelVendor` interface
3. Register in `vendors.registry.ts`
4. Add environment variables to `env.ts` (if server-side keys needed)
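The steps above follow the registry pattern. A simplified sketch, with `IModelVendor` reduced to a stand-in for the real interface (fields here are assumptions, not the actual shape):

```typescript
// Simplified stand-in for the real IModelVendor interface.
interface IModelVendor {
  readonly id: string;
  readonly name: string;
  readonly hasServerConfig: boolean; // assumption: whether env.ts keys are needed
}

// Step 1-2: the vendor module implements the interface
const ModelVendorExample: IModelVendor = {
  id: 'example',
  name: 'Example AI',
  hasServerConfig: false,
};

// Step 3: vendors.registry.ts keeps a Record keyed by vendor id
const MODEL_VENDOR_REGISTRY: Record<string, IModelVendor> = {
  example: ModelVendorExample,
};

function findVendorById(id: string): IModelVendor | undefined {
  return MODEL_VENDOR_REGISTRY[id];
}
```

The registry keeps vendor lookup type-safe and makes adding a vendor a purely additive change.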
### Debugging Storage Issues
- Check IndexedDB: DevTools → Application → IndexedDB → `app-chats`
- Monitor Zustand state: Use Zustand DevTools
- Check migration logs in console during rehydration
## Code Examples
### AIX Streaming Pattern
```typescript
// Efficient streaming with decimation
aixChatGenerateContent_DMessage(
llmId,
request,
{ abortSignal, throttleParallelThreads: 1 },
async (update, isDone) => {
// Real-time UI updates
}
);
```
### Model Registry Pattern
```typescript
// Registry pattern for extensibility
const MODEL_VENDOR_REGISTRY: Record<ModelVendorId, IModelVendor> = {
openai: ModelVendorOpenAI,
anthropic: ModelVendorAnthropic,
// ... 14 more vendors
};
```
## Server Architecture
The server uses a split architecture with two tRPC routers:
### Edge Network (`trpc.router-edge`)
Distributed edge runtime for low-latency AI operations:
- **AIX** - AI streaming and communication
- **LLM Routers** - Direct vendor integrations (OpenAI, Anthropic, Gemini, Ollama)
- **External Services** - ElevenLabs (TTS), Google Search, YouTube transcripts
Located at `/src/server/trpc/trpc.router-edge.ts`
### Cloud Network (`trpc.router-cloud`)
Centralized server for data processing operations:
- **Browse** - Web scraping and content extraction
- **Trade** - Import/export functionality (ChatGPT, markdown, JSON)
Located at `/src/server/trpc/trpc.router-cloud.ts`
**Key Pattern**: Edge runtime for AI (fast, distributed), Cloud runtime for data ops (centralized, Node.js)
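The split can be pictured with plain objects standing in for the two tRPC routers (procedure names and shapes here are illustrative, not the real router definitions):

```typescript
// Plain-object stand-ins for trpc.router-edge and trpc.router-cloud.
type Procedure = (input: unknown) => Promise<unknown> | unknown;

const routerEdge: Record<string, Procedure> = {
  // low-latency AI operations run on the Edge runtime
  'aix.chatGenerate': async () => ({ streamed: true }),
};

const routerCloud: Record<string, Procedure> = {
  // heavier data processing runs on the Node.js runtime
  'trade.importChats': async () => ({ imported: 0 }),
};

// A caller reaches the right network by the operation it needs.
function resolveProcedure(path: string): Procedure | undefined {
  return routerEdge[path] ?? routerCloud[path];
}
```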
+21 -5
@@ -1,6 +1,6 @@
# Base
FROM node:22-alpine AS base
ENV NEXT_TELEMETRY_DISABLED 1
ENV NEXT_TELEMETRY_DISABLED=1
# Dependencies
FROM base AS deps
@@ -14,7 +14,7 @@ COPY src/server/prisma ./src/server/prisma
RUN sh -c '[ ! -e /lib/libssl.so.3 ] && ln -s /usr/lib/libssl.so.3 /lib/libssl.so.3 || echo "Link already exists"'
# Install dependencies, including dev (release builds should use npm ci)
ENV NODE_ENV development
ENV NODE_ENV=development
RUN npm ci
@@ -22,16 +22,32 @@ RUN npm ci
FROM base AS builder
WORKDIR /app
# Deployment type marker
ENV NEXT_PUBLIC_DEPLOYMENT_TYPE=docker
# Optional build version arguments at build time
ARG NEXT_PUBLIC_BUILD_HASH
ENV NEXT_PUBLIC_BUILD_HASH=${NEXT_PUBLIC_BUILD_HASH}
ARG NEXT_PUBLIC_BUILD_REF_NAME
ENV NEXT_PUBLIC_BUILD_REF_NAME=${NEXT_PUBLIC_BUILD_REF_NAME}
# Optional argument to configure GA4 at build time (see: docs/deploy-analytics.md)
ARG NEXT_PUBLIC_GA4_MEASUREMENT_ID
ENV NEXT_PUBLIC_GA4_MEASUREMENT_ID=${NEXT_PUBLIC_GA4_MEASUREMENT_ID}
# Optional argument to configure PostHog at build time (see: docs/deploy-analytics.md)
ARG NEXT_PUBLIC_POSTHOG_KEY
ENV NEXT_PUBLIC_POSTHOG_KEY=${NEXT_PUBLIC_POSTHOG_KEY}
# Copy development deps and source
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# link ssl3 for latest Alpine
RUN sh -c '[ ! -e /lib/libssl.so.3 ] && ln -s /usr/lib/libssl.so.3 /lib/libssl.so.3 || echo "Link already exists"'
# Build the application
ENV NODE_ENV production
ENV NODE_ENV=production
RUN npm run build
# Reduce installed packages to production-only
@@ -53,8 +69,8 @@ COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nextjs:nodejs /app/src/server/prisma ./src/server/prisma
# Minimal ENV for production
ENV NODE_ENV production
ENV PATH $PATH:/app/node_modules/.bin
ENV NODE_ENV=production
ENV PATH=$PATH:/app/node_modules/.bin
# Run as non-root user
USER nextjs
+1 -1
@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2023-2024 Enrico Ros
Copyright (c) 2023-2025 Enrico Ros
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
+115 -39
@@ -1,41 +1,107 @@
# BIG-AGI 🧠
# Big-AGI Open 🧠
Welcome to big-AGI, the AI suite for professionals that need function, form,
simplicity, and speed. Powered by the latest models from 12 vendors and
open-source servers, `big-AGI` offers best-in-class Chats,
[Beams](https://github.com/enricoros/big-AGI/issues/470),
and [Calls](https://github.com/enricoros/big-AGI/issues/354) with AI personas,
visualizations, coding, drawing, side-by-side chatting, and more -- all wrapped in a polished UX.
This is the open-source foundation of **Big-AGI**, ___the multi-model AI workspace for experts___.
Stay ahead of the curve with big-AGI. 🚀 Pros & Devs love big-AGI. 🤖
Big-AGI is the multi-model AI workspace for experts: Engineers architecting systems. Founders making decisions. Researchers validating hypotheses.
If you need to think broader, decide faster, and build with confidence, then you need Big-AGI.
[![Official Website](https://img.shields.io/badge/BIG--AGI.com-%23096bde?style=for-the-badge&logo=vercel&label=launch)](https://big-agi.com)
It comes packed with **world-class features** like Beam, and is praised for its **best-in-class AI chat UX**.
**As an independent, non-VC-funded project, Pro subscriptions at $10.99/mo fund development for everyone, including the free and open-source tiers.**
> 🚀 Big-AGI 2 is launching Q4 2024. Be the first to experience it before the public release.
>
> 👉 [Apply for Early Access](https://y2rjg0zillz.typeform.com/to/ZSADpr5u?utm_source=gh-2&utm_medium=readme&utm_campaign=ea2)
**What makes Big-AGI different:**
**Intelligence**: with [Beam & Merge](https://big-agi.com/beam) for multi-model de-hallucination, native search, and bleeding-edge AI models like Nano Banana, or GPT-5 Pro -
**Control**: with personas, data ownership, requests inspection, unlimited usage with API keys, and *no vendor lock-in* -
and **Speed**: with a local-first, over-powered, zero-latency, madly optimized web app.
Or fork & run on Vercel
**Who uses Big-AGI:**
Loved by engineers, founders, researchers, self-hosters, and IT departments for its power, reliability, and transparency.
<img width="830" height="370" alt="image" src="https://github.com/user-attachments/assets/513c4f77-0970-4a56-b23b-1416c8246174" />
Choose Big-AGI because you don't need another clone or slop - you need an AI tool that scales with you.
## Get Started
| Tier | Best For | What You Get | Setup |
|------------------------------------------------------|-------------------|---------------------------------------------------------------|-------------|
| Big-AGI Open (self-host) | **IT** | First to get support for new models. Maximum control and privacy. | 5-30 min |
| [big-agi.com](https://big-agi.com) Free | **Everyone** | Full core experience, improved Beam, new Personas, best UX. | **2 min**\* |
| **[big-agi.com](https://big-agi.com) Pro** $10.99/mo | **Professionals** | Everything + **Sync** across unlimited devices + 1GB storage | **2 min**\* |
\*: **Configuration requires your API keys**. *Big-AGI does not charge for model usage or limit your access*.
**Why Pro?** As an independent project, Pro subscriptions fund all development. Early subscribers shape the roadmap directly.
<a href="https://big-agi.com">
<img width="210" height="68" alt="image" src="https://github.com/user-attachments/assets/b2f8a7b8-415f-4c92-b228-4f5a54fe2bdd" />
</a>
**Self-host and developers** (full control)
- Develop locally or self-host with Docker on your own infrastructure [guide](docs/installation.md)
- Or fork & run on Vercel:
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI&env=OPENAI_API_KEY&envDescription=Backend%20API%20keys%2C%20optional%20and%20may%20be%20overridden%20by%20the%20UI.&envLink=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-AGI%2Fblob%2Fmain%2Fdocs%2Fenvironment-variables.md&project-name=big-AGI)
### New Version
[//]: # (**For the latest Big-AGI:**)
This repository contains two main versions:
[//]: # (- [**Big-AGI Open**]&#40;https://github.com/enricoros/big-AGI/tree/main&#41; - Open Source, latest models and features &#40;main branch&#41;)
- Big-AGI 2: next-generation, bringing the most advanced AI experience
- `v2-dev`: V2 development branch, the exciting one, future default
- Big-AGI Stable: as deployed on big-agi.com
- `v1-dev`: V1 development branch (this branch)
- `v1-stable`: Current stable version
[//]: # (- [**Big-AGI Pro**]&#40;https://big-agi.com&#41; - Hosted with Cloud Sync)
Note: After the V2 release in Q4, `v2-dev` will become the default branch and `v1-dev` will reach EOL.
---
### Quick links: 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2) 👉 [installation](docs/installation.md) 👉 [documentation](docs/README.md)
## Our Philosophy
### What's New in 1.16.1...1.16.8 · Sep 13, 2024 (patch releases)
We're an independent, non-VC-funded project with a simple belief: **AI should elevate you, not replace you**.
- 1.16.8: OpenAI ChatGPT-4o Latest (o1-preview and o1-mini are supported in Big-AGI 2)
This is why we built Big-AGI to be **local-first**, madly optimized to 0-latency, launched multi-model first to
defeat hallucinations, designed Beam around the **humans in the loop**, re-wrote frameworks and abstractions
so you **are not vendor locked-in**, and obsessed over a powerful UI that works, just works.
NOTE: this is a powerful tool - if you need a toy UI or clone, this ain't it.
## What's New in 2.0 · Oct 31, 2025 · Open
👉 **[See the full changelog](https://big-agi.com/changes)**
- **Big-AGI Open** is ready, and it's more productive and faster than ever, with:
- **Beam 2**: multi-modal, program-based, follow-ups, save presets
- Top-notch AI models support including **agentic models** and **reasoning models**
- **Image Generation** and editing with Nano Banana and gpt-image-1
- **Web Search** with citations for supported models
- **UI** & Mobile UI overhaul with peeking and side panels
- And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized
<img width="830" height="385" alt="image" src="https://github.com/user-attachments/assets/ad52761d-7e3f-44d8-b41e-947ce8b4faa1" />
### Open links: 👉 [changelog](https://big-agi.com/changes) 👉 [installation](docs/installation.md) 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2) 👉 [documentation](docs/README.md)
**For teams and institutions:** Need shared prompts, SSO, or managed deployments? Reach out at enrico@big-agi.com. We're actively collecting requirements from research groups and IT departments.
<details>
<summary>5,000 Commits Milestone</summary>
Hit 5k commits last week. That's a lot of code.
Recent work has been intense:
- Chain of thought reasoning across multiple LLMs: **OpenAI o3** and o1, **DeepSeek R1**, **Gemini 2.0 Flash Thinking**, and more
- Beam is real - ~35% of our users run it daily to compare models
- New AIX framework lets us scale features we couldn't before
- UI is faster than ever. Like, terminal-fast
The new architecture is solid and the speed improvements are real.
![5000e-830px](https://github.com/user-attachments/assets/42f7420b-9331-421b-9a18-2e653aaa7d9b)
</details>
<details>
<summary>What's New in 1.16.1...1.16.10 · 2024-2025 (patch releases)</summary>
- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
- 1.16.6: Groq support for Llama 3.1 models
- 1.16.5: GPT-4o Mini support
- 1.16.2: Updates to Beam
- 1.16.1: Support for the new OpenAI GPT-4o 2024-05-13 model
</details>
<details>
<summary>What's New in 1.16.0 · May 9, 2024 · Crystal Clear</summary>
- [Beam](https://big-agi.com/blog/beam-multi-model-ai-reasoning) core and UX improvements based on user feedback
- Chat cost estimation 💰 (enable it in Labs / hover the token counter)
- Models update: **Anthropic**, **Groq**, **Ollama**, **OpenAI**, **OpenRouter**, **Perplexity**
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more [#517](https://github.com/enricoros/big-AGI/issues/517), [#507](https://github.com/enricoros/big-AGI/pull/507)
</details>
<details>
<summary>3,000 Commits Milestone · April 7, 2024</summary>
![big-AGI Milestone](https://github.com/enricoros/big-AGI/assets/32999/47fddbb1-9bd6-4b58-ace4-781dfcb80923)
- 🥇 Today we <b>celebrate commit 3000</b> in just over one year, and we're going stronger than ever 🚀
- 📢️ Thanks everyone for your support and words of love for Big-AGI; we are committed to creating the best AI experiences for everyone.
</details>
<details>
<summary>What's New in 1.15.0 · April 1, 2024 · Beam</summary>
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). Find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
- 1.15.1: Support for Gemini Pro 1.5 and OpenAI Turbo models
- Beast release, over 430 commits, 10,000+ lines changed: [release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.15.0), and changes [v1.14.1...v1.15.0](https://github.com/enricoros/big-AGI/compare/v1.14.1...v1.15.0)
</details>
<details>
<summary>What's New in 1.14.1 · March 7, 2024 · Modelmorphic</summary>
</details>
For full details and former releases, check out the [archived versions changelog](docs/changelog.md).
## 👉 Key Features
| ![Advanced AI](https://img.shields.io/badge/Advanced%20AI-32383e?style=for-the-badge&logo=ai&logoColor=white) | ![100+ AI Models](https://img.shields.io/badge/100%2B%20AI%20Models-32383e?style=for-the-badge&logo=ai&logoColor=white) | ![Flow-state UX](https://img.shields.io/badge/Flow--state%20UX-32383e?style=for-the-badge&logo=flow&logoColor=white) | ![Privacy First](https://img.shields.io/badge/Privacy%20First-32383e?style=for-the-badge&logo=privacy&logoColor=white) | ![Advanced Tools](https://img.shields.io/badge/Fun%20To%20Use-f22a85?style=for-the-badge&logo=tools&logoColor=white) |
|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|
You can easily configure 100s of AI models in big-AGI:
| **AI models** | _supported vendors_ |
|:--------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Open-source Servers | [LocalAI](https://localai.io/) (multimodal) · [Ollama](https://ollama.com/) |
| Local Servers | [LM Studio](https://lmstudio.ai/) |
| Multimodal services | [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) · [Anthropic](https://anthropic.com) · [Google Gemini](https://ai.google.dev/) · [OpenAI](https://platform.openai.com/docs/overview) |
| Language services | [Alibaba](https://www.alibabacloud.com/en/product/modelstudio) · [DeepSeek](https://deepseek.com) · [Groq](https://wow.groq.com/) · [Mistral](https://mistral.ai/) · [OpenRouter](https://openrouter.ai/) · [Perplexity](https://www.perplexity.ai/) · [Together AI](https://www.together.ai/) · [xAI](https://x.ai/) |
| Image services | OpenAI · Google Gemini |
| Speech services | [ElevenLabs](https://elevenlabs.io) (Voice synthesis / cloning) |
Add extra functionality with these integrations:
| Web Browse | [Browserless](https://www.browserless.io/) · [Puppeteer](https://pptr.dev/)-based |
| Web Search | [Google CSE](https://programmablesearchengine.google.com/) |
| Code Editors | [CodePen](https://codepen.io/pen/) · [StackBlitz](https://stackblitz.com/) · [JSFiddle](https://jsfiddle.net/) |
| Tracking | [Helicone](https://www.helicone.ai) (LLM Observability) |
[//]: # (- [x] **Flow-state UX** for uncompromised productivity)
---
2023-2025 · Enrico Ros x [Big-AGI](https://big-agi.com) · Like this project? Leave a star! 💫⭐
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouterCloud } from '~/server/trpc/trpc.router-cloud';
import { createTRPCFetchContext } from '~/server/trpc/trpc.server';
import { posthogCaptureServerException } from '~/server/posthog/posthog.server';
const handlerNodeRoutes = (req: Request) => fetchRequestHandler({
endpoint: '/api/cloud',
router: appRouterCloud,
req,
createContext: createTRPCFetchContext,
onError: async function({ path, error, type, ctx }) {
// -> DEV error logging
if (process.env.NODE_ENV === 'development')
console.error(`❌ tRPC-cloud failed on ${path ?? 'unk-path'}: ${error.message}`);
// -> Capture node errors
await posthogCaptureServerException(error, {
domain: 'trpc-onerror',
runtime: 'nodejs',
endpoint: path ?? 'unknown',
method: req.method,
url: req.url,
additionalProperties: {
errorCode: error.code,
errorType: type,
},
});
},
});
// NOTE: the following statement breaks the build on non-pro deployments, and conditionals don't work either
// so we resorted to raising the timeout from 10s to 60s in the vercel.json file instead
export const maxDuration = 60;
export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';
export { handlerNodeRoutes as GET, handlerNodeRoutes as POST };
# Very simple docker-compose file to run the app on http://localhost:3000 (or http://127.0.0.1:3000).
#
# For more examples, such as running big-AGI alongside a web browsing service, see the `docs/docker` folder.
version: '3.9'
Information you need to get started, configure, and use big-AGI productively.
👉 **[Changelog](https://big-agi.com/changes)** - See what's new
## Getting Started
Essential guides:
- **[FAQ](help-faq.md)**: Common questions and answers
- **[Enabling Microphone](help-feature-microphone.md)**: Configure speech recognition in your browser
## AI Services
How to set up AI models and features in big-AGI.
> 👉 The following applies to users of big-AGI.com, as the public instance is empty and requires user configuration.
- **Cloud AI Services**:
- Easy API key configuration:
[Alibaba](https://bailian.console.alibabacloud.com/?apiKey=1#/api-key),
[Anthropic](https://console.anthropic.com/settings/keys),
[Deepseek](https://platform.deepseek.com/api_keys),
[Google Gemini](https://aistudio.google.com/app/apikey),
[Groq](https://console.groq.com/keys),
[Mistral](https://console.mistral.ai/api-keys/),
[OpenAI](https://platform.openai.com/api-keys),
[OpenPipe](https://app.openpipe.ai/settings),
[Perplexity](https://www.perplexity.ai/settings/api),
[TogetherAI](https://api.together.xyz/settings/api-keys),
[xAI](http://x.ai/api)
- **[Azure OpenAI](config-azure-openai.md)** guide
- **FireworksAI** ([API keys](https://fireworks.ai/account/api-keys), via custom OpenAI endpoint: https://api.fireworks.ai/inference)
- **[OpenRouter](config-openrouter.md)** guide
- **Local AI Integrations**:
- [LocalAI](config-local-localai.md), [LM Studio](config-local-lmstudio.md), [Ollama](config-local-ollama.md)
- **Enhanced AI Features**:
- **[Web Browsing](config-feature-browse.md)**: Enable web page download through third-party services or your own cloud
- **Web Search**: Google Search API (see '[Environment Variables](environment-variables.md)')
- **Image Generation**: GPT Image (gpt-image-1), DALL·E 3 and 2
- **Voice Synthesis**: ElevenLabs API for voice generation
## Deployment & Customization
For deploying a custom big-AGI instance:
- **[Installation Guide](installation.md)**, including:
- Set up your own big-AGI instance
- Source build or pre-built options
- Local, cloud, or on-premises deployment
- **Advanced Setup**:
- **[Source Code Customization](customizations.md)**: Modify the source code
- **[Access Control](deploy-authentication.md)**: Optional, add basic user authentication
- **[Database Setup](deploy-database.md)**: Optional, enables "Chat Link Sharing"
- **[Reverse Proxy](deploy-reverse-proxy.md)**: Optional, enables custom domains and SSL
## Community & Support
Connect with the growing big-AGI community:
- Check the [changelog](https://big-agi.com/changes) for the latest updates
- Visit our [GitHub repository](https://github.com/enricoros/big-AGI) for source code and issue tracking
- Join our [Discord](https://discord.gg/MkH4qj2Jp9) for discussions and help
Thank you for choosing big-AGI. We're excited to give you the best tools to amplify yourself.
Let's build something great.
## Archived Versions - Changelog
This is a high-level changelog that calls out the main features, batched by release.
- For the live changelog, see [big-agi.com/changes](https://big-agi.com/changes)
- For the live roadmap, please see [the GitHub project](https://github.com/users/enricoros/projects/4/views/2)
> NOTE: with the release of 2.0.0 we are switching to [big-agi.com/changes](https://big-agi.com/changes) for the
> continuously updated changelog.
### What's New in 2 · Oct 31, 2025 · Open
- **Big-AGI Open** is ready: more productive and faster than ever, with:
- **Beam 2**: multi-modal, program-based, follow-ups, save presets
- Top-notch AI models support including **agentic models** and **reasoning models**
- **Image Generation** and editing with Nano Banana and gpt-image-1
- **Web Search** with citations for supported models
- **UI** & Mobile UI overhaul with peeking and side panels
- And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized
### What's New in 1.16.1...1.16.10 · Jan 21, 2025 (patch releases)
- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
- 1.16.7: OpenAI support for GPT-4o 2024-08-06
- 1.16.6: Groq support for Llama 3.1 models
- 1.16.5: GPT-4o Mini support
### What's New in 1.15.0 · April 1, 2024 · Beam
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). Find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
- Managed Deployments **Auto-Configuration**: simplify the UI models setup with backend-set models. [#436](https://github.com/enricoros/big-AGI/issues/436)
- Message **Starring ⭐**: star important messages within chats, to attach them later. [#476](https://github.com/enricoros/big-AGI/issues/476)
- Enhanced the default Persona
- Fixes to Gemini models and SVGs, improvements to UI and icons
If you have an `API Endpoint` and `API Key`, you can configure big-AGI as follows:
1. Launch the `big-AGI` application
2. Go to the **Models** settings
3. Add a Vendor and select **Azure OpenAI**
- Enter the Endpoint (e.g., 'https://your-resource-name.openai.azure.com')
- Enter the API Key (e.g., 'fd5...........................ba')
The deployed models are now available in the application. If you don't have a configured
Azure OpenAI service instance, continue with the next section.
In addition to using the UI, configuration can also be done using
[environment variables](environment-variables.md).
## Server Configuration
For server deployments, set these environment variables:
```bash
AZURE_OPENAI_API_ENDPOINT=https://your-resource-name.openai.azure.com
AZURE_OPENAI_API_KEY=your-api-key
```
This enables Azure OpenAI for all users without requiring individual API keys. For more details, see [environment-variables.md](environment-variables.md).
## Azure OpenAI API Versions
Azure OpenAI supports both traditional deployment-based API and the next-generation v1 API:
### Next-Generation v1 API (Default)
- **Enabled by default** for GPT-5-like models (GPT-5, GPT-6, o3, o4, etc.)
- Uses direct `/openai/v1/responses` endpoint without deployment IDs
- Optimized for advanced reasoning models and new features
- Can be disabled by setting `AZURE_OPENAI_DISABLE_V1=true`
### Traditional Deployment-Based API
- Uses `/openai/deployments/{deployment-name}/...` endpoints
- Required for older models and when v1 API is disabled
- Needs deployment ID for all API calls
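For illustration, the two endpoint shapes described above can be sketched as URL templates. This is an assumption-laden sketch: the resource name and deployment name are placeholders, and `chat/completions` is used as an example operation for the deployment-based path.

```shell
# Sketch of the two Azure OpenAI endpoint shapes described above.
# "your-resource-name" and "gpt-4o" are placeholder values.
RESOURCE="your-resource-name"
DEPLOYMENT="gpt-4o"
API_VERSION="2025-04-01-preview"   # default for deployment-based calls

# Next-generation v1 API: direct path, no deployment ID
V1_URL="https://${RESOURCE}.openai.azure.com/openai/v1/responses"

# Traditional deployment-based API: deployment name plus api-version
# (chat/completions shown as an example operation)
LEGACY_URL="https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"

echo "$V1_URL"
echo "$LEGACY_URL"
```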
### Known Limitations
- **Web Search Tool**: Azure OpenAI does not support the `web_search_preview` tool that's available in OpenAI's API
- Models with web search capabilities will have this feature automatically disabled on Azure
## Setting Up Azure
### Step 1: Azure Account & Subscription
- Fill in the required fields and click on **Create**
- Note down the **Subscription ID** (e.g., `12345678-1234-1234-1234-123456789012`)
### Step 2: Create Azure OpenAI Resource
For more information, see [Azure: Create and deploy OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)
![Creating an OpenAI service](pixels/config-azure-openai-create.png)
- Select the subscription
- Select a resource group or create a new one
- Select the region. **Important**: The region determines which models are available.
> Popular regions like **East US**, **West Europe**, and **Australia East** typically have the best model availability. For the latest model availability by region, see [Azure OpenAI Model Availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)
- Name the service (e.g., `your-openai-api-1234`)
- Select a pricing tier (e.g., `S0` for standard)
- Select: "All networks, including the internet, can access this resource."
- Click on **Review + create** and then **Create**
After creating the resource, you can access the API Keys and Endpoints:
1. Click on **Go to resource** (or navigate to your Azure OpenAI resource)
2. In the left sidebar, under **Resource Management**, click on **Keys and Endpoint**
3. Copy the required information:
- **Endpoint**: e.g., 'https://your-resource-name.openai.azure.com/'
- **Key**: Copy either KEY 1 or KEY 2 (both work identically)
### Step 3: Deploy Models
By default, Azure OpenAI resource instances don't have models available. You need to deploy the models you want to use.
![Deploying a model](pixels/config-azure-openai-deploy.png)
1. In your Azure OpenAI resource, click on **Model deployments** in the left sidebar
2. Click on **Create new deployment**
3. Fill in the deployment details:
- **Select a model**: Choose from available models
- **Model version**: Select the latest version or a specific one
- **Deployment name**: Give it a meaningful name
4. Click **Deploy**
Repeat as necessary for each model you want to deploy.
At time of writing, big-AGI has only 2 operations that run on Node.js Functions:
browsing (fetching web pages) and sharing. They both can exceed 10 seconds, especially
when fetching large pages or waiting for slow websites to respond.
From the Vercel Project > Settings > General > Build & Development Settings,
you can for instance set the build command to:
```bash
next build
```
### Change the Personas (v1.x only)
Edit the `src/data.ts` file to customize personas. This file houses the default personas. You can add, remove, or modify these to meet your project's needs.
Adapt the UI to match your project's aesthetic and incorporate new features:
- [ ] Modify `src/common/app.config.tsx` to alter the application's name
- [ ] Update `src/common/app.nav.tsx` to revise the navigation bar
### Add a Message of the Day
You can display a temporary announcement banner at the top of the app using the `NEXT_PUBLIC_MOTD` environment variable.
- Set this variable in your deployment environment
- The message supports template variables:
- `{{app_build_hash}}`: Current git commit hash
- `{{app_build_pkgver}}`: Package version
- `{{app_build_time}}`: Build timestamp as date
- `{{app_deployment_type}}`: Deployment type (local, docker, vercel, etc.)
- Users can dismiss the message (until next page refresh)
- Use it for version announcements, maintenance notices, or feature highlights
Example: `NEXT_PUBLIC_MOTD=🚀 New features available in {{app_build_pkgver}}! Try the improved Beam.`
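As a rough sketch of the template mechanism (not big-AGI's actual implementation), the expansion amounts to simple string substitution; `render_motd` and the sample values below are purely illustrative:

```shell
# Hypothetical sketch of MOTD template expansion via string substitution;
# render_motd and the sample values are illustrative, not big-AGI code.
render_motd() {
  echo "$1" | sed \
    -e "s/{{app_build_pkgver}}/$2/g" \
    -e "s/{{app_deployment_type}}/$3/g"
}

render_motd 'New features available in {{app_build_pkgver}} ({{app_deployment_type}})!' '2.0.0' 'docker'
```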
## Testing & Deployment
Test your application thoroughly using local development (refer to README.md for local build instructions). Deploy using your preferred hosting service. big-AGI supports deployment on platforms like Vercel, Docker, or any Node.js-compatible service, especially those supporting NextJS's "Edge Runtime."
## Debugging
The application includes a client-side logging system. You can view recent logs via the UI (Settings > Tools > Logs).
For deeper debugging during development:
1. **Debug Page**: Access the `/info/debug` page for an overview of the application's environment, configuration, API status, and environment variables available to the client.
2. **Conditional Breakpoints**: To automatically pause execution in your browser's developer tools when critical errors (`error`, `critical`, `DEV` levels) are logged to the console, set the following environment variable in your local `.env.local` file and restart your development server:
```bash
NEXT_PUBLIC_DEBUG_BREAKS=true
```
This allows you to inspect the application state at the exact moment an important error occurs. This feature only works in development mode (`npm run dev`) and requires the environment variable to be explicitly set to `true`.
<br/>
The open-source big-AGI project provides support for the following analytics services:
- **Google Analytics 4**: manual setup required
- **PostHog Analytics**: manual setup required
- **Vercel Analytics**: automatic when deployed to Vercel
The following is a quick overview of the Analytics options for the deployers of this open-source project.
big-AGI is deployed to many large-scale and enterprise environments through various means (custom builds, Docker, Vercel, Cloudflare, etc.),
and this guide is for its customization.
## Service Configuration
### Google Analytics 4
- Why: user engagement and retention, performance insights, personalization, content optimization
- What: https://support.google.com/analytics/answer/11593727
Google Analytics 4 (GA4) is a powerful tool for understanding user behavior and engagement.
This can help optimize big-AGI by understanding which features are used and which aren't.
To enable Google Analytics 4, you need to set the `NEXT_PUBLIC_GA4_MEASUREMENT_ID` environment variable
before starting the local build or the docker build (i.e. at build time), at which point the
server/container will be able to report analytics to your Google Analytics 4 property.
As of Feb 27, 2024, this feature is in development.
### PostHog Analytics
- Why: feature usage tracking, user journeys, conversion optimization, product analytics
- What: page views, page leave events, user interactions, and deployment context
PostHog provides comprehensive product analytics with privacy controls. It helps understand how users interact with big-AGI's features, identify opportunities for improvement, and optimize the user experience.
To enable PostHog, set the `NEXT_PUBLIC_POSTHOG_KEY` environment variable at build time. PostHog is configured with tracking optimization and privacy in mind:
- Uses a proxy endpoint (`/a/ph`) to avoid ad blockers
- Respects user opt-out preferences via local storage
- Tracks only essential information without PII
- Adds deployment context for better segmentation
The implementation follows PostHog's best practices for Next.js applications and includes manual page view tracking for proper single-page application support.
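Since both keys are read at build time, a source build might set them like this (a sketch with placeholder key values; pass the same variables to `docker build` via `--build-arg` or your build environment for container builds):

```shell
# Build-time analytics configuration for a source build; both variables
# must be present in the environment when `next build` runs.
# "G-XXXXXXXXXX" and "phc_..." are placeholder values.
NEXT_PUBLIC_GA4_MEASUREMENT_ID="G-XXXXXXXXXX" \
NEXT_PUBLIC_POSTHOG_KEY="phc_xxxxxxxxxxxxxxxx" \
npx next build
```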
### Vercel Analytics
- Why: understand coarse traction, and identify deployment issues - all without tracking individual users
When big-AGI is served on Vercel hosts, the `process.env.NEXT_PUBLIC_VERCEL_URL` environment variable is trueish, and
analytics will be sent by default to the Vercel Analytics service which is deployed by Vercel IF configured from the
Vercel project dashboard.
In summary, to turn it on, activate the `Analytics` service in the Vercel project dashboard.
## Configurations
| Scope | Default | Description / Instructions |
|-------------------------------------------------------------------------------------------------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Your **Source** builds of big-AGI | None | **Google Analytics**: set environment variable at build time · **PostHog**: set environment variable at build time · **Vercel**: enable Vercel Analytics from the dashboard |
| Your **Docker** builds of big-AGI | None | (**Vercel**: n/a) · **Google Analytics**: set environment variable at `docker build` time · **PostHog**: set environment variable at `docker build` time. |
| [get.big-agi.com](https://get.big-agi.com) (**Big-AGI 1.x Legacy**) | Vercel + Google + PostHog | The main website ([privacy policy](https://big-agi.com/privacy)) hosted for free for anyone. |
| [prebuilt Docker packages](https://github.com/enricoros/big-AGI/pkgs/container/big-agi) (**Big-AGI 1.x**, 'latest' tag) | Google Analytics | **Vercel**: n/a · **Google Analytics**: set to the big-agi.com Google Analytics for analytics and improvements · **PostHog**: n/a |
Note: this information is updated as of March 3, 2025 and can change at any time.
### Official Images: [ghcr.io/enricoros/big-agi](https://github.com/enricoros/big-agi/pkgs/container/big-agi)
#### Available Tags
- **`:latest`** / **`:stable`** - Latest stable release (recommended)
- **`:development`** - Main branch (bleeding edge)
- **`:v2.0.0`** - Specific versions
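Assuming Docker is installed, pulling and running one of the tags above might look like the following sketch (port 3000 matches the docker-compose example; the container name is arbitrary):

```shell
# Run the prebuilt image on http://localhost:3000, using a tag from the list above
docker run -d --name big-agi -p 3000:3000 ghcr.io/enricoros/big-agi:latest
```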
#### Run using *docker* 🚀
This document provides an explanation of the environment variables used in the big-AGI application.
**All variables are optional**; _UI options_ take precedence over _backend environment variables_,
which take precedence over _defaults_. This file is kept in sync with [`../src/server/env.ts`](../src/server/env.ts).
### Setting Environment Variables
@@ -23,6 +23,8 @@ MDB_URI=
OPENAI_API_KEY=
OPENAI_API_HOST=
OPENAI_API_ORG_ID=
ALIBABA_API_HOST=
ALIBABA_API_KEY=
AZURE_OPENAI_API_ENDPOINT=
AZURE_OPENAI_API_KEY=
ANTHROPIC_API_KEY=
@@ -54,16 +56,16 @@ GOOGLE_CSE_ID=
ELEVENLABS_API_KEY=
ELEVENLABS_API_HOST=
ELEVENLABS_VOICE_ID=
# Text-To-Image: Prodia
PRODIA_API_KEY=
# Backend HTTP Basic Authentication (see `deploy-authentication.md` for turning on authentication)
HTTP_BASIC_AUTH_USERNAME=
HTTP_BASIC_AUTH_PASSWORD=
# Frontend variables
NEXT_PUBLIC_MOTD=
NEXT_PUBLIC_GA4_MEASUREMENT_ID=
NEXT_PUBLIC_POSTHOG_KEY=
NEXT_PUBLIC_PLANTUML_SERVER_URL=
```
@@ -88,8 +90,13 @@ requiring the user to enter an API key
| `OPENAI_API_KEY` | API key for OpenAI | Recommended |
| `OPENAI_API_HOST` | Changes the backend host for the OpenAI vendor, to enable platforms such as Helicone and CloudFlare AI Gateway | Optional |
| `OPENAI_API_ORG_ID` | Sets the "OpenAI-Organization" header field to support organization users | Optional |
| `ALIBABA_API_HOST` | The Alibaba AI OpenAI-compatible endpoint | Optional |
| `ALIBABA_API_KEY` | The API key for Alibaba AI | Optional |
| `AZURE_OPENAI_API_ENDPOINT` | Azure OpenAI endpoint - host only, without the path | Optional, but if set `AZURE_OPENAI_API_KEY` must also be set |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `AZURE_OPENAI_DISABLE_V1` | Disables the next-generation v1 API for GPT-5-like models (set to 'true' to disable) | Optional, defaults to enabled |
| `AZURE_OPENAI_API_VERSION` | API version for traditional deployment-based endpoints | Optional, defaults to '2025-04-01-preview' |
| `AZURE_DEPLOYMENTS_API_VERSION` | API version for the deployments listing endpoint | Optional, defaults to '2023-03-15-preview' |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, to enable platforms such as AWS Bedrock | Optional |
| `DEEPSEEK_API_KEY` | The API key for Deepseek AI | Optional |
@@ -127,8 +134,6 @@ Enable the app to Talk, Draw, and Google things up.
| `ELEVENLABS_API_KEY` | ElevenLabs API Key - used for calls, etc. |
| `ELEVENLABS_API_HOST` | Custom host for ElevenLabs |
| `ELEVENLABS_VOICE_ID` | Default voice ID for ElevenLabs |
| **Text-To-Image** | [Prodia](https://prodia.com/) is a reliable image generation service |
| `PRODIA_API_KEY` | Prodia API Key - used with '/imagine ...' |
| **Google Custom Search** | [Google Programmable Search Engine](https://programmablesearchengine.google.com/about/) produces links to pages |
| `GOOGLE_CLOUD_API_KEY` | Google Cloud API Key, used with the '/react' command - [Link to GCP](https://console.cloud.google.com/apis/credentials) |
| `GOOGLE_CSE_ID` | Google Custom/Programmable Search Engine ID - [Link to PSE](https://programmablesearchengine.google.com/) |
@@ -142,10 +147,13 @@ Enable the app to Talk, Draw, and Google things up.
The value of these variables are passed to the frontend (Web UI) - make sure they do not contain secrets.
| Variable | Description |
|:----------------------------------|:-----------------------------------------------------------------------------------------|
| `NEXT_PUBLIC_GA4_MEASUREMENT_ID` | The measurement ID for Google Analytics 4. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_PLANTUML_SERVER_URL` | The URL of the PlantUML server, used for rendering UML diagrams. (code in RenderCode.tsx) |
| Variable | Description |
|:----------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `NEXT_PUBLIC_DEBUG_BREAKS` | (optional, development) When set to 'true', enables automatic debugger breaks on DEV/error/critical logs in development builds |
| `NEXT_PUBLIC_MOTD` | Message of the Day - displays a dismissible banner at the top of the app (see [customizations](customizations.md) for the template variables). Example: 🔔 Welcome to our deployment! Version {{app_build_pkgver}} built on {{app_build_time}}. |
| `NEXT_PUBLIC_GA4_MEASUREMENT_ID` | (optional) The measurement ID for Google Analytics 4. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_POSTHOG_KEY` | (optional) Key for PostHog analytics. (see [deploy-analytics](deploy-analytics.md)) |
| `NEXT_PUBLIC_PLANTUML_SERVER_URL` | The URL of the PlantUML server, used for rendering UML diagrams. Allows using custom local servers. |
> Important: these variables must be set at build time, which is required by Next.js to pass them to the frontend.
> This is in contrast to the backend variables, which can be set when starting the local server/container.
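The `{{...}}` template variables accepted by `NEXT_PUBLIC_MOTD` amount to simple string substitution; a minimal sketch (illustrative only, the real expansion lives in the app's customization code and may differ):

```typescript
// Illustrative sketch of {{placeholder}} expansion for an MOTD-style banner.
// Variable names (app_build_pkgver, app_build_time) are from the docs above;
// the function itself is hypothetical.
function expandMotd(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) =>
    name in vars ? vars[name] : match);  // unknown placeholders are left untouched
}

const motd = expandMotd(
  'Welcome! Version {{app_build_pkgver}} built on {{app_build_time}}.',
  { app_build_pkgver: '2.0.0', app_build_time: '2025-10-27' },
);
// motd → 'Welcome! Version 2.0.0 built on 2025-10-27.'
```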
+99
View File
@@ -0,0 +1,99 @@
# Big-AGI Data Ownership Guide
Big-AGI is a **client-first** web application, which means it prioritizes speed and data ownership over cloud-based alternatives.
Your *API keys*, *chat history*, and *settings* live in your
browser's [local storage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage), not
on cloud servers.
You can use Big-AGI in two ways:
1. Run it yourself (open-source)
2. Use big-agi.com (hosted service)
This guide explains how the open-source version handles your data. You can verify everything in [the source code](https://github.com/enricoros/big-agi).
## Client-Side Storage
Within Big-AGI, almost all chat and key data is handled client-side in your browser, using two
standard browser storage mechanisms:
- **Local Storage**: API keys, settings, and configurations ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage))
- **IndexedDB**: Chat history and larger files ([learn more](https://developer.mozilla.org/en-US/docs/Web/API/IndexedDB_API))
The Big-AGI backend mainly passes requests to AI services (OpenAI, Anthropic, etc.). It doesn't store your data, except for the chat-sharing function if used.
You can see your data in your browser's local storage and IndexedDB - try it yourself:
1. In Chrome: Open DevTools (press F12 on Windows, ⌘ + ⌥ + I on Mac)
2. Click 'Application' > 'Local Storage'
3. See your settings and API keys
![Browser local storage showing API keys and chat data](pixels/data_ownership_local_storage.png)
### What This Means For You
Storing data in your browser means:
- Your data stays on **one device/browser only**
- Clearing browser data **erases your chats** - make backups
- Anyone using your browser can see your chats and keys
- Running your own server needs technical skills
### Local Device Identifier
Big-AGI generates a _device identifier_ that combines timestamp and random components, stored only on your device. This identifier:
- Is used only for the **optional sync functionality** between your devices (not yet ready)
- Helps maintain data consistency when using Big-AGI across multiple devices
- Remains completely local unless you explicitly enable sync
- Is not used for tracking, analytics, or telemetry
- Can be deleted anytime by clearing local storage
- Is fully transparent - see the implementation in `src/common/stores/store-client.ts`
## How Data Flows
AI interactions in Big-AGI (chats, AI-generated titles, text-to-speech, browsing) flow through three components:
1. **Browser** (client/installed App) - Stores your keys & data locally
2. **Backend** (routing server) - Passes requests to AI services
3. **AI Services** - Where the actual AI processing happens
### Self-Deployed Version: Your Infrastructure
You run the server, and your data leaves it only when making AI requests.
The keys and chats stay under your control, pass through your code, and are sent to
the upstream AI services on a per-request basis.
![data_ownership_deployed.png](pixels/data_ownership_deployed.png)
### Web Version: Using big-agi.com
Your data passes through the hosted Big-AGI edge network to reach AI services. The keys
and chats are forwarded through Big-AGI's edge network to the upstream AI services on a
per-request basis.
![data_ownership_hosted.png](pixels/data_ownership_hosted.png)
## Security Best Practices
**Basic Security**:
- **Never share API keys**
- **Don't use shared computers**
- Use private browsing for one-off sessions
- Use trusted networks
- Back up your data
**When Running Your Own Server**:
- Use [environment variables](environment-variables.md) for API keys
- Run on trusted infrastructure
- Keep your installation updated
## TL;DR
Your API keys and chats stay in your browser. The server only passes requests to AI services.
Use big-agi.com for convenience, or [run it yourself](installation.md) for full control.
Need help? Join our [Discord](https://discord.gg/MkH4qj2Jp9) or open a [GitHub issue](https://github.com/enricoros/big-agi/issues).
+28
View File
@@ -0,0 +1,28 @@
# Frequently Asked Questions
Quick answers to common questions about Big-AGI. For detailed documentation, see our [Website Docs](https://big-agi.com/docs).
### Versions
<details open>
<summary><b>How do I check my Big-AGI version?</b></summary>
You can see the version in the _News_ section of the app, as per the image below.
![Version location in Big-AGI](https://github.com/user-attachments/assets/cd295094-0114-420f-a5b9-0d762e59b506)
</details>
<details open>
<summary><b>How do I verify my Vercel deployment version?</b></summary>
You can go to the **Deployments** section of your Vercel project and see at a glance
the latest deployment status, time, and a link to the source code.
![Vercel deployments view](https://github.com/user-attachments/assets/664b8c3d-496e-4595-ad5e-898bdb82507c)
Each deployment links directly to its source code commit.
</details>
---
Missing something? [Open an issue](https://github.com/enricoros/big-agi/issues/new) or [join our Discord](https://discord.gg/MkH4qj2Jp9).
+1 -1
View File
@@ -151,6 +151,6 @@ Enjoy all the features of big-AGI without the hassle of infrastructure managemen
Join our vibrant community of developers, researchers, and AI enthusiasts. Share your projects, get help, and collaborate with others.
- [Discord Community](https://discord.gg/MkH4qj2Jp9)
- [Twitter](https://twitter.com/yourusername)
- [Twitter](https://twitter.com/enricoros)
For any questions or inquiries, please don't hesitate to [reach out to our team](mailto:hello@big-agi.com).
+2 -3
View File
@@ -16,6 +16,8 @@ stringData:
OPENAI_API_KEY: ""
OPENAI_API_HOST: ""
OPENAI_API_ORG_ID: ""
ALIBABA_API_HOST: ""
ALIBABA_API_KEY: ""
AZURE_OPENAI_API_ENDPOINT: ""
AZURE_OPENAI_API_KEY: ""
ANTHROPIC_API_KEY: ""
@@ -44,6 +46,3 @@ stringData:
ELEVENLABS_API_KEY: ""
ELEVENLABS_API_HOST: ""
ELEVENLABS_VOICE_ID: ""
# Text-To-Image: Prodia
PRODIA_API_KEY: ""
Binary file not shown. (new image, 55 KiB)
Binary file not shown. (new image, 62 KiB)
Binary file not shown. (new image, 234 KiB)
+35
View File
@@ -0,0 +1,35 @@
# Knowledge Base
Internal documentation for Big-AGI architecture and systems, for use by AI agents and developers.
**Structure:**
- `/kb/modules/` - Core business logic (e.g. AIX)
- `/kb/systems/` - Infrastructure (routing, startup)
## Index
### Modules Documentation
#### AIX - AI Communication Framework
- **[AIX.md](modules/AIX.md)** - AIX streaming architecture documentation
- **[AIX-callers-analysis.md](modules/AIX-callers-analysis.md)** - Analysis of AIX entry points, call chains, common and different rendering, error handling, etc.
### Systems Documentation
#### Core Platform Systems
- **[app-routing.md](systems/app-routing.md)** - Next.js routing, provider stack, and display state hierarchy
- **[LLM-parameters-system.md](systems/LLM-parameters-system.md)** - Language model parameter flow across the system
## Guidelines
### Writing Style
- **Direct and factual** - No marketing language
- **Present tense** - "AIX handles streaming" not "AIX will handle"
- **Active voice** - "The system processes" not "Processing is done by"
- **Concrete examples** - Show actual code/config when helpful, briefly
### Maintenance
- Remove outdated information when detected!
- Keep cross-references current when files move
+144
View File
@@ -0,0 +1,144 @@
# AIX Chat Generation Calls Analysis
This document analyzes all AIX function callers and their patterns for message removal, placeholder handling, and error management.
## AIX Function Architecture
### Three-Tier Call Hierarchy
**Core AIX Functions** (Direct tRPC API callers):
- `aixChatGenerateContent_DMessage_FromConversation` - 8 callers (conversation streaming)
- `aixChatGenerateContent_DMessage` - 6 callers (direct request/response)
- `aixChatGenerateText_Simple` - 12 callers (text-only utilities)
**Utility Layer** (Hooks & Functions):
- Conversation management, persona processing, content generation utilities
**UI Layer** (React Components):
- User-facing interfaces with rich error states and fallback mechanisms
## Core Function Callers Analysis
### Conversation-Based Callers (`_FromConversation`)
| **Caller** | **Context** | **Message Removal** | **Placeholder** | **Error Handling** |
|------------|-------------|-------------------|----------------|-------------------|
| **Chat Persona** | `'conversation'` | `messageWasInterruptedAtStart()` → `removeMessage()` | None | Error fragments |
| **Beam Scatter** | `'beam-scatter'` | `messageWasInterruptedAtStart()` → empty message | `SCATTER_PLACEHOLDER` | Ray status update |
| **Beam Gather** | `'beam-gather'` | `messageWasInterruptedAtStart()` → clear fragments | `GATHER_PLACEHOLDER` | Re-throw errors |
| **Beam Follow-up** | `'beam-followup'` | `messageWasInterruptedAtStart()` → remove message | `FOLLOWUP_PLACEHOLDER` | Status updates |
| **ScratchChat** | `'scratch-chat'` | `aborted && !fragments` → array removal | `SCRATCH_CHAT_PLACEHOLDER` | Error fragments |
| **Telephone** | `'call'` | None | None | Basic handling |
| **ReAct Agent** | `'chat-react-turn'` | None | None | Append errors |
| **Variform** | `'_DEV_'` | None | None | Throw errors |
### Direct Request Callers (`aixChatGenerateContent_DMessage`)
| **Caller** | **Context** | **Message Removal** | **Error Handling** |
|------------|-------------|-------------------|-------------------|
| **Auto Follow-ups** | `'chat-followup-*'` | `fragmentDelete()` on failure | `fragmentReplace()` with error |
| **Gen CR Diffs** | `'aifn-gen-cr-diffs'` | None | State-based handling |
| **Code Fixup** | `'fixup-code'` | None | Throw errors |
| **Attachment Prompts** | `'chat-attachment-prompts'` | None | Throw errors |
### Text-Only Utilities (`aixChatGenerateText_Simple`)
| **Utility** | **Purpose** | **Error Strategy** | **Called By** |
|-------------|-------------|-------------------|---------------|
| **conversationTitle** | Auto-generate chat titles | Try/catch with fallback | UI components |
| **conversationSummary** | Generate summaries | Try/catch with fallback | Chat drawer |
| **useStreamChatText** | Generic text streaming | Error state management | FlattenerModal |
| **useLLMChain** | Multi-step processing | Step-by-step handling | Persona creation |
| **imaginePromptFromText** | Text → image prompts | Simple propagation | Image generation |
| **aifnBeamGenerateBriefing** | Beam summaries | Null return on error | Beam completion |
| **useAifnPersonaGenIdentity** | Extract persona identity | Query error handling | Persona flows |
| **DiagramsModal** | Generate diagrams | Component error state | Manual generation |
## Message Removal Patterns
### 1. Complete Message Removal
- **Chat Persona**: `messageWasInterruptedAtStart()` → `messageEditor.removeMessage()`
- **ScratchChat**: `outcome === 'aborted' && !fragments?.length` → array removal
- **Trigger**: Message aborted before any content generated
### 2. Fragment-Level Management
- **Beam Gather**: Clear fragments array but keep message structure
- **Auto Follow-ups**: Delete specific placeholder fragments on failure
- **Purpose**: Maintain message structure while removing failed content
### 3. Empty Message Replacement
- **Beam Scatter**: Replace with `createDMessageEmpty()` but preserve ray structure
- **Purpose**: Keep UI structure intact while indicating failure
### 4. No Removal Strategy
- **Text-only functions**: Use fallback values, error states, or null returns
- **Simple callers**: Propagate errors upstream for handling
## Error Handling by Layer
### UI Layer (Components)
- **Pattern**: Rich error states with user-facing messages
- **Examples**: DiagramsModal, FlattenerModal
- **Features**: Retry mechanisms, fallback UI, loading states
### Utility Layer (Hooks/Functions)
- **Pattern**: Graceful degradation with fallbacks
- **Examples**: conversationTitle, conversationSummary
- **Features**: Silent failures, default values, try/catch blocks
### Core Layer (Direct API)
- **Pattern**: Minimal handling, error propagation
- **Examples**: Code Fixup, Attachment Prompts
- **Features**: Assumes upstream error handling
## Key Implementation Details
### Message Removal Detection
```typescript
// Core detection logic
function messageWasInterruptedAtStart(message: Pick<DMessage, 'generator' | 'fragments'>): boolean {
return message.generator?.tokenStopReason === 'client-abort' && message.fragments.length === 0;
}
```
### Placeholder Management
- **Initialization**: `createPlaceholderVoidFragment(placeholderText)`
- **Replacement**: During streaming updates or on completion
- **Cleanup**: Delete on error to avoid stale content
### Context Patterns
- **Production**: `'conversation'`, `'beam-scatter'`, `'scratch-chat'`
- **Features**: `'chat-followup-*'`, `'fixup-code'`, `'ai-diagram'`
- **Development**: `'_DEV_'`
## Best Practices
### Message Removal
- Use `messageWasInterruptedAtStart()` for consistent detection
- Only remove messages with no content that were client-aborted
- Consider UI context when choosing removal vs. clearing strategy
### Error Handling
- **Fragment-level**: Use `messageEditor.fragmentReplace()` with error fragments
- **Message-level**: Use `messageEditor.removeMessage()` or array removal
- **Status-level**: Update component state for UI feedback
### Placeholder Management
- Initialize with descriptive placeholders using `createPlaceholderVoidFragment()`
- Replace during streaming updates
- Clean up on error to prevent stale content
## Architectural Insights
1. **Layered Error Handling**: Sophistication increases closer to UI
2. **Context Specialization**: Different contexts for different use cases
3. **Streaming vs Non-Streaming**: Conversation functions stream, utilities typically don't
4. **Message vs Fragment Management**: Different strategies for different UI needs
The most sophisticated handling is in **Beam modules** and **Chat Persona** with comprehensive removal logic, while simpler callers rely on upstream error handling.
## Code References
- **Core function**: `src/modules/aix/client/aix.client.ts:aixChatGenerateContent_DMessage_FromConversation`
- **Removal check**: `src/common/stores/chat/chat.message.ts:388:messageWasInterruptedAtStart()`
- **Placeholder creation**: `src/common/stores/chat/chat.fragments.ts:createPlaceholderVoidFragment()`
+189
View File
@@ -0,0 +1,189 @@
# AIX
AIX is a client/server library for integrating advanced AI capabilities into web applications.
## Overview
AIX provides real-time, type-safe communication between a TypeScript application and AI providers.
Built with tRPC, it manages the lifecycle of AI-generated content from request to rendering, supporting both streaming and non-streaming AI providers.
## Features
- Content Generation
- Multi-Modal streaming/non-streaming
- Throttled batching and error handling
- Server-side timeout/retry
- Function Calling and Code Execution
- Complex AI Workflows (future)
- Embeddings / Information Retrieval / Image Manipulation (future)
## AIX Providers support
| Service | Chat | Function Calling | Multi-Modal Input | Cont. (1) | Streaming | Idiosyncratic |
|------------|------------|------------------|-------------------|-----------|-----------|---------------|
| Alibaba | ✅ | ✅ | | ✅ | Yes + 📦 | |
| Anthropic | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | |
| Azure | ✅ | ✅ | | ✅ | Yes + 📦 | |
| Deepseek | ✅ | ❌ (rejected) | | ✅ | Yes + 📦 | |
| Gemini | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | Code ex.: ✅ |
| Groq | ✅ | ✅ + Parallel | | ✅ | Yes + 📦 | |
| LM Studio | ✅ | ❌ (not working) | | ❌ | Yes + 📦 | |
| Local AI | ✅ | ✅ | | ❌ | Yes + 📦 | |
| Mistral | ✅ | ✅ | | ✅ | Yes + 📦 | |
| OpenAI | ✅ | ✅ + Parallel | Img: ✅ | ✅ | Yes + 📦 | |
| OpenPipe | ✅ | ✅ | Img: ✅ | ✅ | Yes + 📦 | |
| OpenRouter | ✅ | ❌ (inconsistent) | | ✅ | Yes + 📦 | |
| Perplexity | ✅ | ❌ (rejected) | | ✅ | Yes + 📦 | |
| TogetherAI | ✅ | ✅ | | ✅ | Yes + 📦 | |
| xAI | | | | | | |
| Ollama (2) | ❌ (broken) | ? | | | | |
Notes:
- 1: Continuation marks: a. sends reason=max-tokens (streaming/non-streaming), b. TBA
- 2: Ollama has not been ported to AIX yet due to the custom APIs.
## 1. System Architecture
The subsystem comprises three main components:
1. **Client (e.g. Next.js Frontend)**
- Initiates requests
- Renders AI-generated content in real-time
- Reconstructs streamed data
2. **Server (e.g. Next.js Backend)**
- Acts as an intermediary between client and AI providers
- Handles request preparation, dispatching, and response processing
- Streams responses back to the client
3. **Upstream AI Providers**
- Generate AI content based on requests
### ChatGenerate workflow:
1. Request Initialization: AIX Client prepares and sends request (systemInstruction, messages=AixWire_Parts[], etc.) to AIX Server
2. Dispatch Preparation: AIX Server prepares for upstream communication
3. AI Provider Interaction: AIX Server communicates with AI Provider (streaming or non-streaming)
4. Data Decoding, Transformation and Transmission: AIX Server sends AixWire_Particles to AIX Client
5. Client-side Processing: Client's ContentReassembler processes AixWire_Particles into a list (usually a single one) of multi-fragment messages (DMessageContentFragment[])
6. Completion: AIX Server sends 'done' control message, AIX Client finalizes data update
7. Error Handling: AIX Server sends specific error messages when necessary
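The client-side reassembly step (5) above can be sketched as follows. The particle shapes here are simplified stand-ins for illustration, not the real AixWire_Particles schema:

```typescript
// Simplified illustration of client-side particle reassembly: text deltas
// arrive in order and are concatenated; a 'done' control message finalizes.
type Particle =
  | { t: 'text'; value: string }     // incremental text delta
  | { t: 'start' } | { t: 'done' };  // control messages

function reassemble(particles: Particle[]): { text: string; complete: boolean } {
  let text = '';
  let complete = false;
  for (const p of particles) {
    if (p.t === 'text') text += p.value;     // append streamed deltas in order
    else if (p.t === 'done') complete = true;
  }
  return { text, complete };
}

const msg = reassemble([
  { t: 'start' }, { t: 'text', value: 'Hello, ' }, { t: 'text', value: 'world' }, { t: 'done' },
]);
// msg → { text: 'Hello, world', complete: true }
```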
## 2. Files and Folders
AIX is organized into the following files and folders:
1. Client-Side (`/client/`):
- `aix.client.ts`: Main client-side entry point for AIX operations.
- `aix.client.chatGenerateRequest.ts`: Handles conversion of chat messages to AIX-compatible format (AixWire_Content, AixWire_Parts, etc.).
2. Server-Side (`/server/`):
- API (`/server/api/`) - Client to Server communication:
- `aix.router.ts`: Defines the tRPC router for AIX operations.
- `aix.wiretypes.ts`: Contains Zod schemas for types and calls incoming from the client (AixWire_Parts, AixWire_Content, AixWire_Tooling, AixWire_API, ...), and outgoing (AixWire_Particles)
- Dispatch (`/server/dispatch/`) - Server to AI Provider communication:
- `/server/dispatch/chatGenerate/`: Content Generation with chat-style inputs:
- `./adapters/`: Adapters for creating API requests for different AI protocols (Anthropic, Gemini, OpenAI).
- `./parsers/`: Parsers for parsing streaming/non-streaming responses from different AI protocols (same 3).
- `chatGenerate.dispatch.ts`: Creates a pipeline to execute Chat Generation to a specific provider.
- `ChatGenerateTransmitter.ts`: Used to serialize and transmit AixWire_Particles to the client.
- `/server/dispatch/wiretypes/`: AI provider Wire Types:
- Type definitions for different AI providers/protocols (Anthropic, Gemini, OpenAI).
- `stream.demuxers.ts`: Handles demuxing of different stream formats.
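As an illustration of the kind of work `stream.demuxers.ts` performs, here is a minimal SSE demuxer sketch; the real code handles multiple wire formats and is more robust with partial chunks:

```typescript
// Minimal SSE demuxing sketch: split a buffer into complete events
// (separated by a blank line) and keep the incomplete tail for the next chunk.
function demuxSSE(buffer: string): { events: string[]; rest: string } {
  const events: string[] = [];
  let rest = buffer;
  let idx: number;
  while ((idx = rest.indexOf('\n\n')) !== -1) {
    const raw = rest.slice(0, idx);
    rest = rest.slice(idx + 2);
    for (const line of raw.split('\n'))
      if (line.startsWith('data: ')) events.push(line.slice(6));
  }
  return { events, rest };  // 'rest' is an incomplete trailing event
}

const { events, rest } = demuxSSE('data: {"a":1}\n\ndata: {"a":2}\n\ndata: {"a');
// events → ['{"a":1}', '{"a":2}'], rest → 'data: {"a'
```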
## 3. Architecture Diagram
```mermaid
sequenceDiagram
participant AIX Client
participant AIX Server
participant PartTransmitter
participant AI Provider
AIX Client ->> AIX Client: Initialize ContentReassembler
AIX Client ->> AIX Client: Convert DMessage*Part to AixWire_Parts
AIX Client ->> AIX Server: Send messages (arrays of AixWire_Parts)
AIX Server ->> AIX Server: Prepare Dispatch (Upstream request, demux, parsing)
alt Dispatch Preparation Error
AIX Server ->> AIX Client: Send `dispatch-prepare` error message
else Dispatch Fetch
AIX Server ->> AI Provider: Send AI-provider specific stream/non-stream request
AIX Server ->> AIX Client: Send 'start' control message
AIX Server ->> PartTransmitter: Initialize part particle serialization
alt Streaming AI Provider
loop Until stream end or error
AI Provider ->> AIX Server: Stream response chunk
AIX Server ->> AIX Server: Demux chunk into DispatchEvents
loop For each AI-provider specific DispatchEvent
AIX Server ->> AIX Server: Parse DispatchEvent
AIX Server ->> PartTransmitter: (Parser) Calls serialization functions
PartTransmitter ->> PartTransmitter: Generate and throttle AixWire_PartParticles
PartTransmitter -->> AIX Server: Yield AixWire_PartParticle
end
AIX Server ->> AIX Client: Send accumulated AixWire_PartParticles
end
AIX Server ->> PartTransmitter: Request any remaining particles
PartTransmitter -->> AIX Server: Yield any final AixWire_PartParticles
AIX Server ->> AIX Client: Send final AixWire_PartParticles (if any)
else Non-Streaming AI Provider
AI Provider ->> AIX Server: Send AI-provider specific complete response
alt AI-provider specific full-response parser
AIX Server ->> AIX Server: Parse full response
AIX Server ->> PartTransmitter: Call particle serialization functions
PartTransmitter ->> PartTransmitter: Generate AixWire_PartParticle
PartTransmitter -->> AIX Server: Yield ALL AixWire_PartParticle
end
AIX Server ->> AIX Client: Send all AixWire_PartParticles
end
AIX Server ->> AIX Client: Send 'done' control message
loop For each received batch of particles
AIX Client ->> AIX Client: ContentReassembler processes particles into DMessage*Part
alt DMessageTextPart
AIX Client ->> AIX Client: Update UI with text content
else DMessageImageRefPart
AIX Client ->> AIX Client: Load and display image
else DMessageToolInvocationPart
AIX Client ->> AIX Client: Process tool invocation (dev only)
else DMessageToolResponsePart
AIX Client ->> AIX Client: Process tool response (dev only)
else DMessageErrorPart
AIX Client ->> AIX Client: Display error message
else DMessageDocPart
AIX Client ->> AIX Client: Process and display document
else DMetaPlaceholderPart
AIX Client ->> AIX Client: Handle placeholder (non-submitted)
end
end
AIX Client ->> AIX Client: Finalize data update
end
alt Error Handling
AIX Server ->> AIX Client: Send 'error' specific control messages
end
note over AIX Server, AI Provider: Server-side Timeout/Retry mechanism
loop Retry on timeout (server-side)
AIX Server ->> AI Provider: Retry request
end
note over AIX Client: Client-side Timeout mechanism
AIX Client ->> AIX Client: Timeout if no response received within set time
```
---
### 2025-03-14 Update
AIX is used in production in Big-AGI and is stable and performant.
The code is tightly coupled with the tRPC framework and the rest of our codebase,
so it is not recommended to use it outside of our ecosystem.
For a great TypeScript alternative we recommend the Vercel AI SDK.
+131
View File
@@ -0,0 +1,131 @@
# LLM Parameters System
This document describes how parameters flow through Big-AGI's LLM parameters system, from definition to API invocation.
## System Overview
The LLM parameters system operates across five layers that transform parameters from global definitions to vendor-specific API calls. Each layer serves a specific purpose in the parameter resolution pipeline.
## Parameter Flow Architecture
### Layer 1: Parameter Registry
**File**: `src/common/stores/llms/llms.parameters.ts`
The `DModelParameterRegistry` defines all available parameters with their constraints and metadata. Each parameter includes type information, validation rules, and default behavior.
**Example**: `llmVndOaiReasoningEffort4` defines a 4-value enum with 'medium' as the required fallback.
**Default Value System**: The registry supports multiple default mechanisms:
- `initialValue` - Parameter's base default (e.g., `llmVndOaiRestoreMarkdown: true`)
- `requiredFallback` - Fallback for required parameters (e.g., `llmTemperature: 0.5`)
- `nullable` - Parameters that can be explicitly null to skip API transmission
### Layer 2: Model Specifications
**File**: `src/modules/llms/server/llm.server.types.ts`
Models declare which parameters they support through `parameterSpecs` arrays. Each spec can override registry defaults:
```typescript
parameterSpecs: [
{ paramId: 'llmVndOaiReasoningEffort4' },
{ paramId: 'llmVndAntThinkingBudget', initialValue: 1024 }, // Override default
{ paramId: 'llmVndGeminiThinkingBudget', rangeOverride: [0, 8192] }, // Custom range
]
```
**Parameter Visibility**: The `hidden` flag removes parameters from the UI while keeping them functional. Models can also mark parameters as `required`.
### Layer 3: Client Configuration
The system provides two UI configurators with different scopes:
#### Full Model Configuration Dialog
**File**: `src/modules/llms/models-modal/LLMParametersEditor.tsx`
Shows all non-hidden parameters from model's `parameterSpecs`. Used in the models modal for complete configuration.
#### ChatPanel Quick Controls
**File**: `src/apps/chat/components/layout-panel/ChatPanelModelParameters.tsx`
Shows only parameters that are:
- In model's `parameterSpecs`
- Listed in `_interestingParameters` array
- Not marked as `hidden`
**Value Resolution**: Both UIs use `getAllModelParameterValues()` to merge:
1. **Fallback values** - Required parameters get their `requiredFallback` values
2. **Initial values** - Model's `initialParameters` (populated during model creation)
3. **User values** - User's `userParameters` (highest priority)
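The three-layer merge above maps naturally onto object spreads, where later layers win; a sketch under the assumption that all three layers are plain key-value records (the real logic is in `getAllModelParameterValues()`):

```typescript
// Illustrative three-layer value resolution: fallbacks < initial < user.
// Parameter names are examples from the registry described above.
type ParamValues = Record<string, number | string | boolean | null>;

function resolveParameters(
  requiredFallbacks: ParamValues,  // 1. fallbacks for required parameters
  initialParameters: ParamValues,  // 2. model defaults
  userParameters: ParamValues,     // 3. user overrides (highest priority)
): ParamValues {
  return { ...requiredFallbacks, ...initialParameters, ...userParameters };
}

const resolved = resolveParameters(
  { llmTemperature: 0.5 },             // registry requiredFallback
  { llmVndAntThinkingBudget: 1024 },   // model initialValue
  { llmTemperature: 0.2 },             // user's choice wins
);
// resolved → { llmTemperature: 0.2, llmVndAntThinkingBudget: 1024 }
```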
### Layer 4: AIX Translation
**File**: `src/modules/aix/client/aix.client.ts`
The AIX client transforms DLLM parameters to wire protocol format. This layer handles parameter precedence rules and name transformations:
```typescript
// Parameter precedence: newer 4-value version takes priority over 3-value
...((llmVndOaiReasoningEffort4 || llmVndOaiReasoningEffort) ?
{ vndOaiReasoningEffort: llmVndOaiReasoningEffort4 || llmVndOaiReasoningEffort } : {})
```
**Client Options**: The system supports parameter overrides through `llmOptionsOverride` and complete replacement via `llmUserParametersReplacement`.
### Layer 5: Vendor Adaptation
**Files**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`
Server-side adapters translate AIX parameters to vendor APIs. Each vendor may interpret parameters differently:
- **OpenAI**: `vndOaiReasoningEffort``reasoning_effort`
- **Perplexity**: Reuses OpenAI parameter format
- **OpenAI Responses API**: Maps to structured reasoning config with additional logic
## Parameter Initialization Process
When a model is loaded:
1. **Model Creation**: `modelDescriptionToDLLM()` creates the DLLM with empty `initialParameters`
2. **Initial Value Application**: `applyModelParameterInitialValues()` populates initial values from:
- Model spec `initialValue` (highest priority)
- Registry `initialValue` (fallback)
3. **Runtime Resolution**: `getAllModelParameterValues()` creates final parameter set:
- Required fallbacks (for missing required parameters)
- Initial parameters (model defaults)
- User parameters (user overrides)
## Special Parameter Behaviors
**Hidden Parameters**: Parameters like `llmRef` are marked `hidden: true` in the registry and never appear in the UI, but remain functional for system use.
**Nullable Parameters**: Parameters with `nullable` configuration can be explicitly set to `null` to prevent transmission to the API, distinct from being undefined.
**Range Overrides**: Models can override parameter ranges (e.g., different Gemini models support different thinking budget ranges).
**Parameter Interactions**: The UI implements business logic like disabling web search when reasoning effort is 'minimal'.
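The null-vs-undefined distinction for nullable parameters can be pictured as follows (the function and names here are hypothetical, not Big-AGI's actual code):

```typescript
// Illustrative sketch: null means "explicitly skip transmission to the API",
// while undefined means "not set, fall back to the default".
function buildPayload(temperature: number | null | undefined, defaultTemp = 0.5): Record<string, number> {
  if (temperature === null) return {};                 // explicit null: omit the parameter entirely
  return { temperature: temperature ?? defaultTemp };  // undefined: default applies
}

// buildPayload(0.7)       → { temperature: 0.7 }
// buildPayload(null)      → {}                    (user chose not to send it)
// buildPayload(undefined) → { temperature: 0.5 }  (unset: default applies)
```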
## Type Safety Mechanisms
The system maintains type safety through:
- `DModelParameterId` union from registry keys
- `DModelParameterValue<T>` conditional types for values
- `DModelParameterSpec<T>` interfaces for specifications
- Runtime validation via Zod schemas at API boundaries
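The registry-derived typing pattern can be sketched like this (the two-entry registry below is a stand-in for the real one in `llms.parameters.ts`):

```typescript
// Illustrative registry; `as const` preserves the literal key and value types
const parameterRegistry = {
  llmTemperature: { type: 'float', initialValue: 1.0 },
  llmTopP: { type: 'float', initialValue: 1.0 },
} as const;

// Union of parameter ids, derived directly from the registry keys
type ParameterId = keyof typeof parameterRegistry; // 'llmTemperature' | 'llmTopP'

// Conditional type mapping a parameter id to its runtime value type
type ParameterValue<T extends ParameterId> =
  typeof parameterRegistry[T]['type'] extends 'float' ? number : never;

const t: ParameterValue<'llmTemperature'> = 0.7; // type-checks as number
console.log(t);
```

Because the id union is derived from the registry object itself, adding a parameter to the registry automatically extends the union with no separate type to keep in sync.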
## Model Variant Pattern
Some vendors use model variants to enable features, for instance:
- **Anthropic**: Creates separate `idVariant: 'thinking'` entries that force the values of hidden parameters
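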
- **Google/OpenAI**: Parameters directly on base models
## Migration and Compatibility
The architecture supports parameter evolution:
- **Version Coexistence**: Both `llmVndOaiReasoningEffort` and `llmVndOaiReasoningEffort4` exist simultaneously
- **Precedence Rules**: Newer parameters take priority during AIX translation
- **Graceful Degradation**: Unknown parameters log warnings but don't break functionality
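The graceful-degradation behavior can be sketched as (set contents and function name are illustrative, not the actual code):

```typescript
// Sketch: unknown parameter ids are warned about and skipped, not thrown on
const knownIds = new Set(['llmTemperature', 'llmTopP']);

function filterKnownParameters(params: Record<string, unknown>): Record<string, unknown> {
  const known: Record<string, unknown> = {};
  for (const [id, value] of Object.entries(params)) {
    if (knownIds.has(id)) known[id] = value;
    else console.warn(`Unknown model parameter '${id}', ignoring`); // degrade, don't break
  }
  return known;
}

console.log(filterKnownParameters({ llmTopP: 1, llmFutureParam: 42 })); // { llmTopP: 1 }
```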
## Key Implementation Files
- **Registry**: `src/common/stores/llms/llms.parameters.ts`
- **Specifications**: `src/modules/llms/server/llm.server.types.ts`
- **UI Controls**: `src/modules/llms/models-modal/LLMParametersEditor.tsx`
- **AIX Translation**: `src/modules/aix/client/aix.client.ts`
- **Wire Types**: `src/modules/aix/server/api/aix.wiretypes.ts`
- **Vendor Adapters**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`
+151
@@ -0,0 +1,151 @@
# Big-AGI Routing & Display States
This document describes the routing architecture and display state hierarchy in Big-AGI, from top-level providers down to component-level states.
## Overview
Big-AGI uses Next.js Pages Router with a provider stack that determines what users see based on application state and configuration.
## Quick Reference: Route Configurations
| Route | Purpose | Key Features |
|-------|---------|--------------|
| `/` | Main chat app | Default application |
| `/call` | Voice interface | Voice-to-voice AI conversations |
| `/personas` | Persona management | Create and manage AI personas |
| ... | | |
## Decision Flow Diagram
The routing decisions follow a hierarchy from system-level provider configuration down to component-level states.
```mermaid
flowchart TD
Start([Navigate to Route]) --> Root[_app.tsx]
Root --> Theme[ProviderTheming]
Theme --> Error[ErrorBoundary]
Error --> Bootstrap[ProviderBootstrapLogic]
Bootstrap --> BootCheck{Bootstrap Checks}
BootCheck -->|News| News[↗️ /news]
BootCheck -->|Continue| Router{Router}
Router -->|/| Chat[Chat App]
Router -->|/personas,/call,/beam...| OtherApps[Other Apps]
Router -->|/news| NewsApp[News App]
Chat --> ChatStates{Chat States}
ChatStates -->|No Models| ZeroModels[🟡 Setup Models]
ChatStates -->|No Conv| ZeroConv[🟡 Select Chat]
ChatStates -->|No Msgs| PersonaGrid[Choose Persona]
ChatStates -->|Ready| Active[🟢 Active Chat]
Active --> Features[Features:<br/>• Chat Bar<br/>• Beam Mode<br/>• Attachments]
style ZeroModels fill:#fff4cc
style ZeroConv fill:#fff4cc
style Active fill:#ccffcc
style Chat fill:#f0f8ff
style OtherApps fill:#f0f8ff
style NewsApp fill:#f0f8ff
```
## Display State Hierarchy
```
_app.tsx (Root)
├── ProviderTheming ← Always Applied
├── ErrorBoundary ← Always Applied
├── ProviderBootstrapLogic ← Always Applied
│ ├── Tiktoken preload & Model auto-config
│ ├── Storage maintenance & cleanup
│ └── News Redirect (if conditions met)
└── Page Component
├── AppChat (/) → Default app
│ ├── CMLZeroModels → If no models configured
│ ├── CMLZeroConversation → If no conversation selected
│ └── PersonaGrid → If conversation empty
└── Other Apps → Personas, Call, Draw, News, Beam
```
## Provider Stack
| Provider | Purpose | Key Functions |
|----------|---------|---------------|
| **ProviderTheming** | UI theme management | Theme switching, CSS variables |
| **ErrorBoundary** | Error handling | Catches and displays errors gracefully |
| **ProviderBootstrapLogic** | App initialization | • Tiktoken preload<br>• Model auto-config<br>• Storage cleanup<br>• News redirect logic |
For detailed initialization sequence and provider functions, see [app-startup-sequence.md](app-startup-sequence.md), if present.
## Application Routes
### Primary Apps
- `/` → AppChat (default)
- `/call` → Voice call interface
- `/beam` → Multi-model reasoning
- `/draw` → Image generation
- `/personas` → Personas app
- `/news` → News/updates
### Zero States
#### Chat App Zero States
**CMLZeroModels**
- **Location**: `/src/apps/chat/components/messages-list/CMLZeroModels.tsx`
- **Triggered**: No LLM sources configured
- **Shows**: Welcome screen with "Setup Models" button
**CMLZeroConversation**
- **Location**: `/src/apps/chat/components/messages-list/CMLZeroConversation.tsx`
- **Triggered**: No conversation selected
- **Shows**: "Select/create conversation" prompt
**PersonaGrid**
- **App**: Chat (when conversation is empty)
- **Triggered**: Conversation exists but has no messages
- **Shows**: Persona selector interface
#### Feature-Specific Zero States
**Beam Tutorial**
- **Feature**: Beam (multi-model reasoning)
- **Component**: `ExplainerCarousel`
- **Triggered**: First-time Beam usage
- **Shows**: Interactive feature walkthrough
## Common Scenarios
### New User First Visit
1. Navigates to `/` → Provider stack loads
2. Bootstrap runs → No news redirect (first visit)
3. Chat loads → **CMLZeroModels** (no models configured)
4. User clicks "Setup Models" → Configuration flow
### Returning User with Saved State
1. Navigates to `/` → Provider stack loads
2. IndexedDB restores state → Previous conversation loaded
3. Chat loads → **Active chat interface** (bypasses all zero states)
4. All messages and context preserved from last session
### Shared Chat Viewer
1. Navigates to `/link/chat/[id]` → Full provider stack
2. Views read-only chat → May see "Import" option
3. If importing → Checks for duplicates, creates new local conversation
## Storage System
Big-AGI uses a local-first architecture:
- **Zustand** for reactive state management
- **IndexedDB** for persistent storage via Zustand persist middleware
- **Version-based migrations** for data structure upgrades
Key stores:
- `app-chats`: Conversations and messages (IndexedDB)
- `app-llms`: Model configurations (IndexedDB)
- `app-ui`: UI preferences (localStorage)
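The version-based migration pattern that these stores rely on can be sketched as follows (the state shapes and field rename below are hypothetical, not big-AGI's actual schema):

```typescript
// Hypothetical persisted shapes across two store versions
interface PersistedStateV1 { conversations: { id: string; title: string }[] }
interface PersistedStateV2 { conversations: { id: string; userTitle: string }[] }

// Sketch of a persist-middleware migrate function: upgrade older
// on-disk state to the current shape before rehydration
function migrate(state: unknown, fromVersion: number): PersistedStateV2 {
  if (fromVersion < 2) {
    const v1 = state as PersistedStateV1;
    return { conversations: v1.conversations.map(c => ({ id: c.id, userTitle: c.title })) };
  }
  return state as PersistedStateV2; // already current
}

const migrated = migrate({ conversations: [{ id: 'c1', title: 'Hello' }] }, 1);
console.log(migrated.conversations[0].userTitle); // 'Hello'
```

Zustand's persist middleware accepts a `migrate` callback of roughly this shape along with a `version` number, so bumping the version triggers the upgrade path on next load.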
-85
@@ -1,85 +0,0 @@
import { readFile } from 'node:fs/promises';
// Build information
process.env.NEXT_PUBLIC_BUILD_HASH = 'big-agi-2-dev';
process.env.NEXT_PUBLIC_BUILD_PKGVER = JSON.parse('' + await readFile(new URL('./package.json', import.meta.url))).version;
process.env.NEXT_PUBLIC_BUILD_TIMESTAMP = new Date().toISOString();
console.log(` 🧠 \x1b[1mbig-AGI\x1b[0m v${process.env.NEXT_PUBLIC_BUILD_PKGVER} (@${process.env.NEXT_PUBLIC_BUILD_HASH})`);
// Non-default build types
const buildType =
process.env.BIG_AGI_BUILD === 'standalone' ? 'standalone'
: process.env.BIG_AGI_BUILD === 'static' ? 'export'
: undefined;
buildType && console.log(` 🧠 big-AGI: building for ${buildType}...\n`);
/** @type {import('next').NextConfig} */
let nextConfig = {
reactStrictMode: true,
// [exports] https://nextjs.org/docs/advanced-features/static-html-export
...buildType && {
output: buildType,
distDir: 'dist',
// disable image optimization for exports
images: { unoptimized: true },
// Optional: Change links `/me` -> `/me/` and emit `/me.html` -> `/me/index.html`
// trailingSlash: true,
},
// [puppeteer] https://github.com/puppeteer/puppeteer/issues/11052
// NOTE: we may not be needing this anymore, as we use '@cloudflare/puppeteer'
serverExternalPackages: ['puppeteer-core'],
webpack: (config, { isServer }) => {
// @mui/joy: anything material gets redirected to Joy
config.resolve.alias['@mui/material'] = '@mui/joy';
// @dqbd/tiktoken: enable asynchronous WebAssembly
config.experiments = {
asyncWebAssembly: true,
layers: true,
};
// fix warnings for async functions in the browser (https://github.com/vercel/next.js/issues/64792)
if (!isServer) {
config.output.environment = { ...config.output.environment, asyncFunction: true };
}
// prevent too many small chunks (40kb min) on 'client' packs (not 'server' or 'edge-server')
// noinspection JSUnresolvedReference
if (typeof config.optimization.splitChunks === 'object' && config.optimization.splitChunks.minSize) {
// noinspection JSUnresolvedReference
config.optimization.splitChunks.minSize = 40 * 1024;
}
return config;
},
// Note: disabled to check whether the project becomes slower with this
// modularizeImports: {
// '@mui/icons-material': {
// transform: '@mui/icons-material/{{member}}',
// },
// },
// Uncomment the following to leave console messages in production
// compiler: {
// removeConsole: false,
// },
};
// Validate environment variables, if set at build time. Will be actually read and used at runtime.
// This is the reason both this file and the server/env.mjs files have this extension.
await import('./src/server/env.mjs');
// conditionally enable the nextjs bundle analyzer
if (process.env.ANALYZE_BUNDLE) {
const { default: withBundleAnalyzer } = await import('@next/bundle-analyzer');
nextConfig = withBundleAnalyzer({ openAnalyzer: true })(nextConfig);
}
export default nextConfig;
+139
@@ -0,0 +1,139 @@
import type { NextConfig } from 'next';
import { execSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
// Build information: from CI, or git commit hash
let buildHash = process.env.NEXT_PUBLIC_BUILD_HASH || process.env.GITHUB_SHA || process.env.VERCEL_GIT_COMMIT_SHA; // Docker or custom, GitHub Actions, Vercel
try {
// fallback to local git commit hash
if (!buildHash)
buildHash = execSync('git rev-parse --short HEAD').toString().trim();
} catch {
// final fallback
buildHash = '2-dev';
}
// The following are used by/available to Release.buildInfo(...)
process.env.NEXT_PUBLIC_BUILD_HASH = (buildHash || '').slice(0, 10);
process.env.NEXT_PUBLIC_BUILD_PKGVER = JSON.parse('' + readFileSync(new URL('./package.json', import.meta.url))).version;
process.env.NEXT_PUBLIC_BUILD_TIMESTAMP = new Date().toISOString();
process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE = process.env.NEXT_PUBLIC_DEPLOYMENT_TYPE || (process.env.VERCEL_ENV ? `vercel-${process.env.VERCEL_ENV}` : 'local'); // Docker or custom, Vercel
console.log(` 🧠 \x1b[1mbig-AGI\x1b[0m v${process.env.NEXT_PUBLIC_BUILD_PKGVER} (@${process.env.NEXT_PUBLIC_BUILD_HASH})`);
// Non-default build types
const buildType =
process.env.BIG_AGI_BUILD === 'standalone' ? 'standalone' as const
: process.env.BIG_AGI_BUILD === 'static' ? 'export' as const
: undefined;
buildType && console.log(` 🧠 big-AGI: building for ${buildType}...\n`);
/** @type {import('next').NextConfig} */
let nextConfig: NextConfig = {
reactStrictMode: true,
// [exports] https://nextjs.org/docs/advanced-features/static-html-export
...(buildType && {
output: buildType,
distDir: 'dist',
// disable image optimization for exports
images: { unoptimized: true },
// Optional: Change links `/me` -> `/me/` and emit `/me.html` -> `/me/index.html`
// trailingSlash: true,
}),
// [puppeteer] https://github.com/puppeteer/puppeteer/issues/11052
// NOTE: we may not be needing this anymore, as we use '@cloudflare/puppeteer'
serverExternalPackages: ['puppeteer-core'],
webpack: (config: any, { isServer }: { isServer: boolean }) => {
// @mui/joy: anything material gets redirected to Joy
config.resolve.alias['@mui/material'] = '@mui/joy';
// @dqbd/tiktoken: enable asynchronous WebAssembly
config.experiments = {
asyncWebAssembly: true,
layers: true,
};
// fix warnings for async functions in the browser (https://github.com/vercel/next.js/issues/64792)
if (!isServer) {
config.output.environment = { ...config.output.environment, asyncFunction: true };
}
// prevent too many small chunks (40kb min) on 'client' packs (not 'server' or 'edge-server')
// noinspection JSUnresolvedReference
if (typeof config.optimization.splitChunks === 'object' && config.optimization.splitChunks.minSize) {
// noinspection JSUnresolvedReference
config.optimization.splitChunks.minSize = 40 * 1024;
}
return config;
},
// Optional Analytics > PostHog
skipTrailingSlashRedirect: true, // required to support PostHog trailing slash API requests
async rewrites() {
return [
{
source: '/a/ph/static/:path*',
destination: 'https://us-assets.i.posthog.com/static/:path*',
},
{
source: '/a/ph/:path*',
destination: 'https://us.i.posthog.com/:path*',
},
{
source: '/a/ph/decide',
destination: 'https://us.i.posthog.com/decide',
},
{
source: '/a/ph/flags',
destination: 'https://us.i.posthog.com/flags',
},
];
},
// Note: disabled to check whether the project becomes slower with this
// modularizeImports: {
// '@mui/icons-material': {
// transform: '@mui/icons-material/{{member}}',
// },
// },
// Uncomment the following to leave console messages in production
// compiler: {
// removeConsole: false,
// },
};
// Validate environment variables, if set at build time. Will be actually read and used at runtime.
import { verifyBuildTimeVars } from '~/server/env';
verifyBuildTimeVars();
// PostHog error reporting with source maps for production builds
import { withPostHogConfig } from '@posthog/nextjs-config';
if (process.env.POSTHOG_API_KEY && process.env.POSTHOG_ENV_ID) {
console.log(' 🧠 \x1b[1mbig-AGI\x1b[0m: building with PostHog issue reporting and source maps...');
nextConfig = withPostHogConfig(nextConfig, {
personalApiKey: process.env.POSTHOG_API_KEY,
envId: process.env.POSTHOG_ENV_ID,
host: 'https://us.i.posthog.com', // backtrace upload host
verbose: false,
sourcemaps: {
enabled: process.env.NODE_ENV === 'production',
project: 'big-agi',
version: process.env.NEXT_PUBLIC_BUILD_HASH,
deleteAfterUpload: false, // false: leave them in the tree, which would also help debugging of open-source installs
},
});
}
// conditionally enable the nextjs bundle analyzer
import withBundleAnalyzer from '@next/bundle-analyzer';
if (process.env.ANALYZE_BUNDLE) {
nextConfig = withBundleAnalyzer({ openAnalyzer: true })(nextConfig) as NextConfig;
}
export default nextConfig;
+2850 -1324
File diff suppressed because it is too large.
+48 -62
@@ -1,6 +1,6 @@
{
"name": "big-agi",
"version": "1.91.0",
"version": "2.0.0",
"private": true,
"author": "Enrico Ros <enrico.ros@gmail.com>",
"repository": "https://github.com/enricoros/big-agi",
@@ -27,85 +27,71 @@
"@emotion/cache": "^11.14.0",
"@emotion/react": "^11.14.0",
"@emotion/server": "^11.11.0",
"@emotion/styled": "^11.14.0",
"@mui/icons-material": "^5.16.14",
"@mui/joy": "^5.0.0-beta.51",
"@mui/material": "^5.16.14",
"@next/bundle-analyzer": "^15.1.4",
"@next/third-parties": "^15.1.4",
"@emotion/styled": "^11.14.1",
"@mui/icons-material": "^5.18.0",
"@mui/joy": "^5.0.0-beta.52",
"@next/bundle-analyzer": "~15.1.8",
"@prisma/client": "~5.22.0",
"@t3-oss/env-nextjs": "^0.11.1",
"@tanstack/react-query": "^5.63.0",
"@tanstack/react-virtual": "^3.11.2",
"@trpc/client": "11.0.0-rc.688",
"@trpc/next": "11.0.0-rc.688",
"@trpc/react-query": "11.0.0-rc.688",
"@trpc/server": "11.0.0-rc.688",
"@vercel/analytics": "^1.4.1",
"@vercel/speed-insights": "^1.1.0",
"browser-fs-access": "^0.35.0",
"cheerio": "^1.0.0",
"dexie": "^4.0.10",
"@tanstack/react-query": "5.90.3",
"@trpc/client": "11.5.1",
"@trpc/next": "11.5.1",
"@trpc/react-query": "11.5.1",
"@trpc/server": "11.5.1",
"@vercel/analytics": "^1.5.0",
"@vercel/speed-insights": "^1.2.0",
"browser-fs-access": "^0.38.0",
"cheerio": "^1.1.2",
"csv-stringify": "^6.6.0",
"dexie": "^4.0.11",
"dexie-react-hooks": "^1.1.7",
"diff": "^7.0.0",
"eventsource-parser": "^3.0.0",
"idb-keyval": "^6.2.1",
"mammoth": "^1.9.0",
"nanoid": "^5.0.9",
"next": "^15.1.4",
"diff": "^8.0.2",
"eventemitter3": "^5.0.1",
"idb-keyval": "^6.2.2",
"mammoth": "^1.11.0",
"nanoid": "^5.1.6",
"next": "~15.1.8",
"nprogress": "^0.2.0",
"pdfjs-dist": "4.10.38",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"pdfjs-dist": "5.4.54",
"posthog-js": "^1.275.3",
"posthog-node": "^5.10.0",
"prismjs": "^1.30.0",
"puppeteer-core": "^24.25.0",
"react": "^18.3.1",
"react-csv": "^2.2.2",
"react-dom": "^18.3.1",
"react-hook-form": "^7.54.2",
"react-katex": "^3.0.1",
"react-markdown": "^9.0.3",
"react-player": "^2.16.0",
"react-resizable-panels": "^2.1.7",
"react-timeago": "^7.2.0",
"react-hook-form": "^7.65.0",
"react-markdown": "^10.1.0",
"react-player": "^3.3.3",
"react-resizable-panels": "^3.0.6",
"react-timeago": "^8.3.0",
"rehype-katex": "^7.0.1",
"remark-gfm": "^4.0.0",
"remark-gfm": "^4.0.1",
"remark-mark-highlight": "^0.1.1",
"remark-math": "^6.0.0",
"sharp": "^0.33.5",
"superjson": "^2.2.2",
"tesseract.js": "^6.0.0",
"tiktoken": "^1.0.18",
"turndown": "^7.2.0",
"zod": "^3.24.1",
"zod-to-json-schema": "^3.24.1",
"zustand": "^5.0.3"
"tesseract.js": "^6.0.1",
"tiktoken": "^1.0.22",
"turndown": "^7.2.1",
"zod": "^4.1.12",
"zustand": "5.0.7"
},
"devDependencies": {
"@types/diff": "^7.0.0",
"@types/node": "^22.10.5",
"@posthog/nextjs-config": "^1.3.2",
"@types/node": "^24.7.2",
"@types/nprogress": "^0.2.3",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.5",
"@types/react": "^18.3.18",
"@types/react-beautiful-dnd": "^13.1.8",
"@types/react": "^19.2.2",
"@types/react-csv": "^1.1.10",
"@types/react-dom": "^18.3.5",
"@types/react-katex": "^3.0.4",
"@types/react-timeago": "^4.1.7",
"@types/react-dom": "^19.2.2",
"@types/turndown": "^5.0.5",
"cross-env": "^7.0.3",
"eslint": "^9.17.0",
"eslint-config-next": "^15.1.4",
"prettier": "^3.4.2",
"cross-env": "^10.1.0",
"eslint": "^9.37.0",
"eslint-config-next": "~15.1.8",
"prettier": "^3.6.2",
"prisma": "~5.22.0",
"puppeteer-core": "^23.11.1",
"typescript": "^5.7.3"
"typescript": "^5.9.3"
},
"engines": {
"node": "^22.0.0 || ^20.0.0"
},
"overrides": {
"@types/react": "^18.3.18",
"@types/react-dom": "^18.3.5",
"uri-js": "npm:uri-js-replace"
"node": "^26.0.0 || ^24.0.0 || ^22.0.0 || ^20.0.0"
}
}
+11 -6
@@ -14,6 +14,7 @@ import '~/common/styles/NProgress.css';
import '~/common/styles/agi.effects.css';
import '~/common/styles/app.styles.css';
import { ErrorBoundary } from '~/common/components/ErrorBoundary';
import { Is } from '~/common/util/pwaUtils';
import { OverlaysInsert } from '~/common/layout/overlays/OverlaysInsert';
import { ProviderBackendCapabilities } from '~/common/providers/ProviderBackendCapabilities';
@@ -21,7 +22,8 @@ import { ProviderBootstrapLogic } from '~/common/providers/ProviderBootstrapLogi
import { ProviderSingleTab } from '~/common/providers/ProviderSingleTab';
import { ProviderTheming } from '~/common/providers/ProviderTheming';
import { SnackbarInsert } from '~/common/components/snackbar/SnackbarInsert';
import { hasGoogleAnalytics, OptionalGoogleAnalytics } from '~/common/components/GoogleAnalytics';
import { hasGoogleAnalytics, OptionalGoogleAnalytics } from '~/common/components/3rdparty/GoogleAnalytics';
import { hasPostHogAnalytics, OptionalPostHogAnalytics } from '~/common/components/3rdparty/PostHogAnalytics';
const Big_AGI_App = ({ Component, emotionCache, pageProps }: MyAppProps) => {
@@ -42,11 +44,13 @@ const Big_AGI_App = ({ Component, emotionCache, pageProps }: MyAppProps) => {
<ProviderSingleTab>
<ProviderBackendCapabilities>
{/* ^ Backend capabilities & SSR boundary */}
<ProviderBootstrapLogic>
<SnackbarInsert />
{getLayout(<Component {...pageProps} />)}
<OverlaysInsert />
</ProviderBootstrapLogic>
<ErrorBoundary outer>
<ProviderBootstrapLogic>
<SnackbarInsert />
{getLayout(<Component {...pageProps} />)}
<OverlaysInsert />
</ProviderBootstrapLogic>
</ErrorBoundary>
</ProviderBackendCapabilities>
</ProviderSingleTab>
</ProviderTheming>
@@ -54,6 +58,7 @@ const Big_AGI_App = ({ Component, emotionCache, pageProps }: MyAppProps) => {
{Is.Deployment.VercelFromFrontend && <VercelAnalytics debug={false} />}
{Is.Deployment.VercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
{hasGoogleAnalytics && <OptionalGoogleAnalytics />}
{hasPostHogAnalytics && <OptionalPostHogAnalytics />}
</>;
};
+4
@@ -100,6 +100,10 @@ MyDocument.getInitialProps = async (ctx: DocumentContext) => {
});
const initialProps = await Document.getInitialProps(ctx);
// Inject the comment before the HTML tag
initialProps.html = `<!-- ❤ Built with Big-AGI -->\n${initialProps.html}`;
// This is important. It prevents Emotion to render invalid HTML.
// See https://github.com/mui/material-ui/issues/26561#issuecomment-855286153
const emotionStyles = extractCriticalToChunks(initialProps.html);
+2 -3
@@ -25,11 +25,11 @@ import { getLLMsDebugInfo } from '~/common/stores/llms/store-llms';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useFolderStore } from '~/common/stores/folders/store-chat-folders';
import { useLogicSherpaStore } from '~/common/logic/store-logic-sherpa';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
// utils access
import { BrowserLang, clientHostName, Is, isPwa } from '~/common/util/pwaUtils';
import { getGA4MeasurementId } from '~/common/components/GoogleAnalytics';
import { getGA4MeasurementId } from '~/common/components/3rdparty/GoogleAnalytics';
import { prettyTimestampForFilenames } from '~/common/util/timeUtils';
import { supportsClipboardRead } from '~/common/util/clipboardUtils';
import { supportsScreenCapture } from '~/common/util/screenCaptureUtils';
@@ -109,7 +109,6 @@ function AppDebug() {
reloads: usageCount,
},
release: {
app: Release.App,
build: frontendBuild,
},
};
Binary file not shown (image, 2.3 MiB).
File diff suppressed because one or more lines are too long.
+1 -2
@@ -10,7 +10,6 @@ import { createBeamVanillaStore } from '~/modules/beam/store-beam_vanilla';
import { OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { createDConversation, DConversation } from '~/common/stores/chat/chat.conversation';
import { createDMessageTextContent, DMessage } from '~/common/stores/chat/chat.message';
import { getChatLLMId } from '~/common/stores/llms/store-llms';
import { useIsMobile } from '~/common/components/useMatchMedia';
@@ -22,7 +21,7 @@ function initTestConversation(): DConversation {
}
function initTestBeamStore(messages: DMessage[], beamStore: BeamStoreApi = createBeamVanillaStore()): BeamStoreApi {
beamStore.getState().open(messages, getChatLLMId(), false, (content) => alert(content));
beamStore.getState().open(messages, null, false, (content) => alert(content));
return beamStore;
}
+1 -1
@@ -14,7 +14,7 @@ import { navigateBack } from '~/common/app.routes';
import { optimaOpenPreferences } from '~/common/layout/optima/useOptima';
import { useCapabilityBrowserSpeechRecognition, useCapabilityElevenLabs } from '~/common/components/useCapabilities';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useUICounter } from '~/common/state/store-ui';
import { useUICounter } from '~/common/stores/store-ui';
function StatusCard(props: { icon: React.JSX.Element, hasIssue: boolean, text: string, button?: React.JSX.Element }) {
+34 -27
@@ -5,11 +5,11 @@ import { Avatar, Box, Card, CardContent, Chip, IconButton, Link as MuiLink, List
import CallIcon from '@mui/icons-material/Call';
import { GitHubProjectIssueCard } from '~/common/components/GitHubProjectIssueCard';
import { OptimaPanelGroup } from '~/common/layout/optima/panel/OptimaPanelGroup';
import { OptimaPanelGroupedList } from '~/common/layout/optima/panel/OptimaPanelGroupedList';
import { OptimaPanelIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { animationShadowRingLimey } from '~/common/util/animUtils';
import { conversationTitle, DConversation, DConversationId } from '~/common/stores/chat/chat.conversation';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useSetOptimaAppMenu } from '~/common/layout/optima/useOptima';
import type { AppCallIntent } from './AppCall';
import { MockPersona, useMockPersonas } from './state/useMockPersonas';
@@ -210,7 +210,7 @@ function useConversationsByPersona() {
}
export function Contacts(props: { setCallIntent: (intent: AppCallIntent) => void }) {
function ContactsMenuItems() {
// external state
const {
@@ -218,36 +218,43 @@ export function Contacts(props: { setCallIntent: (intent: AppCallIntent) => void
showConversations, toggleShowConversations,
showSupport, toggleShowSupport,
} = useAppCallStore();
return (
<OptimaPanelGroupedList title='Contacts Settings'>
<MenuItem onClick={toggleGrayUI}>
Grayed UI
<Switch checked={grayUI} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem onClick={toggleShowConversations}>
Conversations
<Switch checked={showConversations} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem onClick={toggleShowSupport}>
Show Support
<Switch checked={showSupport} sx={{ ml: 'auto' }} />
</MenuItem>
</OptimaPanelGroupedList>
);
}
export function Contacts(props: { setCallIntent: (intent: AppCallIntent) => void }) {
// external state
const { personas } = useMockPersonas();
const { grayUI, showConversations, showSupport } = useAppCallStore();
const conversationsByPersona = useConversationsByPersona();
// pluggable UI
const menuItems = React.useMemo(() => <OptimaPanelGroup title='Contacts Settings'>
<MenuItem onClick={toggleGrayUI}>
Grayed UI
<Switch checked={grayUI} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem onClick={toggleShowConversations}>
Conversations
<Switch checked={showConversations} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem onClick={toggleShowSupport}>
Show Support
<Switch checked={showSupport} sx={{ ml: 'auto' }} />
</MenuItem>
</OptimaPanelGroup>, [grayUI, showConversations, showSupport, toggleGrayUI, toggleShowConversations, toggleShowSupport]);
useSetOptimaAppMenu(menuItems, 'CallUI-Contacts');
return <>
{/* -> Panel */}
<OptimaPanelIn><ContactsMenuItems /></OptimaPanelIn>
{/* Header "Call AGI" */}
<Box sx={{
my: 6,
+28 -27
@@ -21,17 +21,16 @@ import { useElevenLabsVoiceDropdown } from '~/modules/elevenlabs/useElevenLabsVo
import type { OptimaBarControlMethods } from '~/common/layout/optima/bar/OptimaBarDropdown';
import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
import { Link } from '~/common/components/Link';
import { OptimaPanelGroup } from '~/common/layout/optima/panel/OptimaPanelGroup';
import { OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { OptimaPanelGroupedList } from '~/common/layout/optima/panel/OptimaPanelGroupedList';
import { OptimaPanelIn, OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { SpeechResult, useSpeechRecognition } from '~/common/components/speechrecognition/useSpeechRecognition';
import { conversationTitle, remapMessagesSysToUsr } from '~/common/stores/chat/chat.conversation';
import { createDMessageFromFragments, createDMessageTextContent, DMessage, messageFragmentsReduceText } from '~/common/stores/chat/chat.message';
import { createDMessageFromFragments, createDMessageTextContent, DMessage, messageFragmentsReduceText, messageWasInterruptedAtStart } from '~/common/stores/chat/chat.message';
import { createErrorContentFragment } from '~/common/stores/chat/chat.fragments';
import { launchAppChat, navigateToIndex } from '~/common/app.routes';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { usePlayUrl } from '~/common/util/audio/usePlayUrl';
import { useSetOptimaAppMenu } from '~/common/layout/optima/useOptima';
import type { AppCallIntent } from './AppCall';
import { CallAvatar } from './components/CallAvatar';
@@ -41,7 +40,7 @@ import { CallStatus } from './components/CallStatus';
import { useAppCallStore } from './state/store-app-call';
function CallMenuItems(props: {
function CallMenu(props: {
pushToTalk: boolean,
setPushToTalk: (pushToTalk: boolean) => void,
override: boolean,
@@ -56,7 +55,7 @@ function CallMenuItems(props: {
const handleChangeVoiceToggle = () => props.setOverride(!props.override);
return <OptimaPanelGroup title='Call'>
return <OptimaPanelGroupedList title='Call'>
<MenuItem onClick={handlePushToTalkToggle}>
<ListItemDecorator>{props.pushToTalk ? <MicNoneIcon /> : <MicIcon />}</ListItemDecorator>
@@ -86,7 +85,7 @@ function CallMenuItems(props: {
Voice Calls Feedback
</MenuItem>
</OptimaPanelGroup>;
</OptimaPanelGroupedList>;
}
@@ -107,7 +106,7 @@ export function Telephone(props: {
const responseAbortController = React.useRef<AbortController | null>(null);
// external state
const { chatLLMId, chatLLMDropdown } = useChatLLMDropdown(llmDropdownRef);
const { chatLLMId: modelId, chatLLMDropdown: modelDropdown } = useChatLLMDropdown(llmDropdownRef);
const { chatTitle, reMessages } = useChatStore(useShallow(state => {
const conversation = props.callIntent.conversationId
? state.conversations.find(conversation => conversation.id === props.callIntent.conversationId) ?? null
@@ -226,7 +225,7 @@ export function Telephone(props: {
}
// bail if no llm selected
if (!chatLLMId) return;
if (!modelId) return;
// Call Message Generation Prompt
@@ -249,7 +248,7 @@ export function Telephone(props: {
setPersonaTextInterim('💭...');
aixChatGenerateContent_DMessage_FromConversation(
chatLLMId,
modelId,
callSystemInstruction,
callGenerationInputHistory,
'call',
@@ -262,6 +261,10 @@ export function Telephone(props: {
},
).then((status) => {
// don't add the message to conversation if it was interrupted with no content
if (messageWasInterruptedAtStart(status.lastDMessage))
return;
// whether status.outcome === 'success' or not, we get a valid DMessage, eventually with Error Fragments inside
const fullMessage = createDMessageFromFragments('assistant', status.lastDMessage.fragments);
fullMessage.generator = status.lastDMessage.generator;
@@ -274,8 +277,8 @@ export function Telephone(props: {
}).catch((err: DOMException) => {
if (err?.name !== 'AbortError') {
// create an error message to explain the exception
const errorMesage = createDMessageFromFragments('assistant', [createErrorContentFragment(err.message || err.toString())]);
setCallMessages(messages => [...messages, errorMesage]); // [state] append assistant:call_response-ERROR
const errorMessage = createDMessageFromFragments('assistant', [createErrorContentFragment(err.message || err.toString())]);
setCallMessages(messages => [...messages, errorMessage]); // [state] append assistant:call_response-ERROR
}
}).finally(() => {
setPersonaTextInterim(null);
@@ -285,7 +288,7 @@ export function Telephone(props: {
responseAbortController.current?.abort();
responseAbortController.current = null;
};
}, [isConnected, callMessages, chatLLMId, personaVoiceId, personaSystemMessage, reMessages]);
}, [isConnected, callMessages, modelId, personaVoiceId, personaSystemMessage, reMessages]);
// [E] Message interrupter
const abortTrigger = isConnected && recognitionState.hasSpeech;
@@ -311,22 +314,20 @@ export function Telephone(props: {
const isMicEnabled = recognitionState.isAvailable;
const isTTSEnabled = true;
const isEnabled = isMicEnabled && isTTSEnabled;
// pluggable UI
const menuItems = React.useMemo(() =>
<CallMenuItems
pushToTalk={pushToTalk} setPushToTalk={setPushToTalk}
override={overridePersonaVoice} setOverride={setOverridePersonaVoice} />
, [overridePersonaVoice, pushToTalk],
);
useSetOptimaAppMenu(menuItems, 'CallUI-Call');
const micErrorMessage = recognitionState.errorMessage;
return <>
<OptimaToolbarIn>{chatLLMDropdown}</OptimaToolbarIn>
{/* -> Toolbar */}
<OptimaToolbarIn>{modelDropdown}</OptimaToolbarIn>
{/* -> Panel */}
<OptimaPanelIn>
<CallMenu
pushToTalk={pushToTalk} setPushToTalk={setPushToTalk}
override={overridePersonaVoice} setOverride={setOverridePersonaVoice}
/>
</OptimaPanelIn>
<Typography
level='h1'
@@ -350,7 +351,7 @@ export function Telephone(props: {
callerName={isConnected ? undefined : personaName}
statusText={isRinging ? '' /*'is calling you'*/ : isDeclined ? 'call declined' : isEnded ? 'call ended' : callElapsedTime}
regardingText={chatTitle}
micError={!isMicEnabled} speakError={!isTTSEnabled}
micError={!isMicEnabled} micErrorMessage={micErrorMessage} speakError={!isTTSEnabled}
/>
{/* Live Transcript, w/ streaming messages, audio indication, etc. */}
+2 -2
@@ -16,7 +16,7 @@ export function CallStatus(props: {
callerName?: string,
statusText: string,
regardingText: string | null,
micError: boolean, speakError: boolean,
micError: boolean, micErrorMessage: string | null, speakError: boolean,
// llmComponent?: React.JSX.Element,
}) {
return (
@@ -37,7 +37,7 @@ export function CallStatus(props: {
</Typography>}
{props.micError && <InlineError
severity='danger' error='Looks like this Browser may not support speech recognition. You can try Chrome on Windows or Android instead.' />}
severity='danger' error={props.micErrorMessage || 'Looks like this Browser may not support speech recognition. You can try Chrome on Windows or Android instead.'} />}
{props.speakError && <InlineError
severity='danger' error='Text-to-speech does not appear to be configured. Please set it up in Preferences > Voice.' />}
+151 -75
@@ -2,12 +2,12 @@ import * as React from 'react';
import { Panel, PanelGroup, PanelResizeHandle } from 'react-resizable-panels';
import type { SxProps } from '@mui/joy/styles/types';
import { useTheme } from '@mui/joy';
import { Box, useTheme } from '@mui/joy';
import { DEV_MODE_SETTINGS } from '../settings-modal/UxLabsSettings';
import { DiagramConfig, DiagramsModal } from '~/modules/aifn/digrams/DiagramsModal';
import { FlattenerModal } from '~/modules/aifn/flatten/FlattenerModal';
import { TradeConfig, TradeModal } from '~/modules/trade/TradeModal';
import type { DiagramConfig } from '~/modules/aifn/digrams/DiagramsModal';
import type { TradeConfig } from '~/modules/trade/TradeModal';
import { downloadSingleChat, importConversationsFromFilesAtRest, openConversationsAtRestPicker } from '~/modules/trade/trade.client';
import { imaginePromptFromTextOrThrow } from '~/modules/aifn/imagine/imaginePromptFromText';
import { elevenLabsSpeakText } from '~/modules/elevenlabs/elevenlabs.client';
@@ -18,8 +18,9 @@ import type { DConversation, DConversationId } from '~/common/stores/chat/chat.c
import type { OptimaBarControlMethods } from '~/common/layout/optima/bar/OptimaBarDropdown';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { ConversationsManager } from '~/common/chat-overlay/ConversationsManager';
import { LLM_IF_ANT_PromptCaching, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { OptimaDrawerIn, OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { ErrorBoundary } from '~/common/components/ErrorBoundary';
import { getLLMContextTokens, LLM_IF_ANT_PromptCaching, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { OptimaDrawerIn, OptimaPanelIn, OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { PanelResizeInset } from '~/common/components/panes/GoodPanelResizeHandler';
import { Release } from '~/common/app.release';
import { ScrollToBottom } from '~/common/scroll-to-bottom/ScrollToBottom';
@@ -28,28 +29,31 @@ import { ShortcutKey, useGlobalShortcuts } from '~/common/components/shortcuts/u
import { WorkspaceIdProvider } from '~/common/stores/workspace/WorkspaceIdProvider';
import { addSnackbar, removeSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import { createDMessageFromFragments, createDMessagePlaceholderIncomplete, DMessageMetadata, duplicateDMessageMetadata } from '~/common/stores/chat/chat.message';
import { createErrorContentFragment, createTextContentFragment, DMessageAttachmentFragment, DMessageContentFragment, duplicateDMessageFragmentsNoVoid } from '~/common/stores/chat/chat.fragments';
import { createErrorContentFragment, createTextContentFragment, DMessageAttachmentFragment, DMessageContentFragment, duplicateDMessageFragments } from '~/common/stores/chat/chat.fragments';
import { gcChatImageAssets } from '~/common/stores/chat/chat.gc';
import { getChatLLMId } from '~/common/stores/llms/store-llms';
import { getConversation, getConversationSystemPurposeId, useConversation } from '~/common/stores/chat/store-chats';
import { optimaActions, optimaOpenModels, optimaOpenPreferences, useSetOptimaAppMenu } from '~/common/layout/optima/useOptima';
import { themeBgAppChatComposer } from '~/common/app.theme';
import { useChatLLM } from '~/common/stores/llms/llms.hooks';
import { optimaActions, optimaOpenModels, optimaOpenPreferences } from '~/common/layout/optima/useOptima';
import { useFolderStore } from '~/common/stores/folders/store-chat-folders';
import { useIsMobile, useIsTallScreen } from '~/common/components/useMatchMedia';
import { useLLM } from '~/common/stores/llms/llms.hooks';
import { useModelDomain } from '~/common/stores/llms/hooks/useModelDomain';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
import { useRouterQuery } from '~/common/app.routes';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { useUIComplexityIsMinimal } from '~/common/stores/store-ui';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
import { ChatPane } from './components/layout-pane/ChatPane';
import { ChatBarAltBeam } from './components/layout-bar/ChatBarAltBeam';
import { ChatBarBeam } from './components/layout-bar/ChatBarBeam';
import { ChatBarAltTitle } from './components/layout-bar/ChatBarAltTitle';
import { ChatBarDropdowns } from './components/layout-bar/ChatBarDropdowns';
import { ChatBarChat } from './components/layout-bar/ChatBarChat';
import { ChatBeamWrapper } from './components/ChatBeamWrapper';
import { ChatDrawerMemo } from './components/layout-drawer/ChatDrawer';
import { ChatMessageList } from './components/ChatMessageList';
import { Composer } from './components/composer/Composer';
import { usePanesManager } from './components/panes/usePanesManager';
import { PaneTitleOverlay } from './components/PaneTitleOverlay';
import { useComposerAutoHide } from './components/composer/useComposerAutoHide';
import { usePanesManager } from './components/panes/store-panes-manager';
import type { ChatExecuteMode } from './execute-mode/execute-mode.types';
@@ -74,24 +78,52 @@ const chatMessageListSx: SxProps = {
flexGrow: 1,
};
/*const chatMessageListBrandedSx: SxProps = {
flexGrow: 1,
backgroundBlendMode: 'soft-light',
backgroundColor: themeBgApp,
backgroundImage: 'url(https://...)',
backgroundPosition: 'center',
backgroundRepeat: 'no-repeat',
backgroundSize: 'contain',
} as const;*/
const chatBeamWrapperSx: SxProps = {
flexGrow: 1,
// we added these after removing the minSize={20} (%) from the containing panel.
minWidth: '18rem',
// minHeight: 'calc(100vh - 69px - var(--AGI-Nav-width))',
};
const composerOpenSx: SxProps = {
zIndex: 21, // just to allocate a surface, and potentially have a shadow
// NOTE: disabled on 2025-03-05: conflicts with the GlobalDragOverlay's
// zIndex: 21, // just to allocate a surface, and potentially have a shadow
minWidth: { md: 480 }, // don't get compressed too much on desktop
backgroundColor: themeBgAppChatComposer,
// backgroundColor: themeBgAppChatComposer, // inlined in the Composer
transition: 'background-color 0.5s ease-out',
borderTop: `1px solid`,
borderTopColor: 'rgba(var(--joy-palette-neutral-mainChannel, 99 107 116) / 0.4)',
// hack: eats the bottom of the last message (as it has a 1px divider)
mt: '-1px',
};
// NOTE: commented on 2024-05-13, as other content was stepping on the border due to it and missing zIndex
// mt: '-1px',
} as const;
const composerClosedSx: SxProps = {
display: 'none',
};
const composerOpenMobileSx: SxProps = {
zIndex: 21, // allocates the surface, possibly enables shadow if we like
py: 0.5, // have some breathing room
// boxShadow: '0px -1px 8px -2px rgba(0, 0, 0, 0.4)',
...composerOpenSx,
} as const;
// const composerClosedSx: SxProps = {
// display: 'none',
// };
// Lazy-loaded Modals
const DiagramsModalLazy = React.lazy(() => import('~/modules/aifn/digrams/DiagramsModal').then(module => ({ default: module.DiagramsModal })));
const FlattenerModalLazy = React.lazy(() => import('~/modules/aifn/flatten/FlattenerModal').then(module => ({ default: module.FlattenerModal })));
const TradeModalLazy = React.lazy(() => import('~/modules/trade/TradeModal').then(module => ({ default: module.TradeModal })));
export function AppChat() {
@@ -111,21 +143,25 @@ export function AppChat() {
// external state
const theme = useTheme();
const [composerHasContent, setComposerHasContent] = React.useState(false);
const isMobile = useIsMobile();
const isTallScreen = useIsTallScreen();
const isZenMode = useUIComplexityIsMinimal();
const intent = useRouterQuery<Partial<AppChatIntent>>();
const showAltTitleBar = useUXLabsStore(state => DEV_MODE_SETTINGS && state.labsChatBarAlt === 'title');
const { chatLLM } = useChatLLM();
const { domainModelId: chatLLMId } = useModelDomain('primaryChat');
const chatLLM = useLLM(chatLLMId) ?? null;
const {
// state
chatPanes,
focusedPaneConversationId, // <-- key
focusedPaneIndex,
focusedPaneConversationId,
// actions
navigateHistoryInFocusedPane,
openConversationInFocusedPane,
@@ -147,10 +183,9 @@ export function AppChat() {
}, [chatPanes]);
const beamsOpens = useAreBeamsOpen(paneBeamStores);
const beamOpenStoreInFocusedPane = React.useMemo(() => {
const open = focusedPaneIndex !== null ? (beamsOpens?.[focusedPaneIndex] ?? false) : false;
return open ? paneBeamStores?.[focusedPaneIndex!] ?? null : null;
}, [beamsOpens, focusedPaneIndex, paneBeamStores]);
const beamOpenStoreInFocusedPane = focusedPaneIndex === null ? null
: !beamsOpens?.[focusedPaneIndex] ? null
: paneBeamStores?.[focusedPaneIndex] ?? null;
const {
// focused
@@ -171,7 +206,7 @@ export function AppChat() {
// const focusedConversationWorkspaceId = workspaceForConversationIdentity(focusedPaneConversationId);
//// const focusedConversationWorkspace = useWorkspaceIdForConversation(focusedPaneConversationId);
const { mayWork: capabilityHasT2I } = useCapabilityTextToImage();
const { mayWork: capabilityHasT2I, mayEdit: capabilityHasT2IEdit } = useCapabilityTextToImage();
const activeFolderId = useFolderStore(({ enableFolders, folders }) => {
const activeFolderId = enableFolders ? _activeFolderId : null;
@@ -179,6 +214,9 @@ export function AppChat() {
return activeFolder?.id ?? null;
});
// Composer Auto-hiding
const forceComposerHide = !!beamOpenStoreInFocusedPane /* || !focusedPaneConversationId */; // auto-hiding when there's no chat (the 'please select a conversation...' state) doesn't feel good
const composerAutoHide = useComposerAutoHide(forceComposerHide, composerHasContent);
// Window actions
@@ -211,7 +249,7 @@ export function AppChat() {
else if (outcome === 'err-t2i-unconfigured')
optimaOpenPreferences('draw');
else if (outcome === 'err-no-persona')
addSnackbar({ key: 'chat-no-persona', message: 'No persona selected.', type: 'issue' });
addSnackbar({ key: 'chat-no-persona', message: 'No persona selected.', type: 'issue', overrides: { autoHideDuration: 4000 } });
else if (outcome === 'err-no-conversation')
addSnackbar({ key: 'chat-no-conversation', message: 'No active conversation.', type: 'issue' });
else if (outcome === 'err-no-last-message')
@@ -237,7 +275,7 @@ export function AppChat() {
// create the user:message
// NOTE: this can lead to multiple chat messages with data refs that are referring to the same dblobs,
// however, we already got transferred ownership of the dblobs at this point.
const userMessage = createDMessageFromFragments('user', duplicateDMessageFragmentsNoVoid(fragments)); // [chat] create user:message to send per-chat
const userMessage = createDMessageFromFragments('user', duplicateDMessageFragments(fragments, true)); // [chat] create user:message to send per-chat
if (metadata) userMessage.metadata = duplicateDMessageMetadata(metadata);
ConversationsManager.getHandler(conversation.id).messageAppend(userMessage); // [chat] append user message in each conversation
@@ -329,9 +367,10 @@ export function AppChat() {
useFolderStore.getState().addConversationToFolder(activeFolderId, conversationId);
// focus the composer
composerTextAreaRef.current?.focus();
if (!isMobile)
composerTextAreaRef.current?.focus();
}, [activeFolderId, focusedPaneConversationId, handleOpenConversationInFocusedPane, prependNewConversation, recycleNewConversationId]);
}, [activeFolderId, focusedPaneConversationId, handleOpenConversationInFocusedPane, isMobile, prependNewConversation, recycleNewConversationId]);
const handleConversationImportDialog = React.useCallback(() => setTradeConfig({ dir: 'import' }), []);
@@ -432,11 +471,11 @@ export function AppChat() {
const barAltTitle = showAltTitleBar ? focusedChatTitle ?? 'No Chat' : null;
const focusedBarContent = React.useMemo(() => beamOpenStoreInFocusedPane
? <ChatBarAltBeam beamStore={beamOpenStoreInFocusedPane} isMobile={isMobile} />
? <ChatBarBeam conversationTitle={focusedChatTitle ?? 'No Chat'} beamStore={beamOpenStoreInFocusedPane} isMobile={isMobile} />
: (barAltTitle === null)
? <ChatBarDropdowns conversationId={focusedPaneConversationId} llmDropdownRef={llmDropdownRef} personaDropdownRef={personaDropdownRef} />
? <ChatBarChat conversationId={focusedPaneConversationId} llmDropdownRef={llmDropdownRef} personaDropdownRef={personaDropdownRef} />
: <ChatBarAltTitle conversationId={focusedPaneConversationId} conversationTitle={barAltTitle} />
, [barAltTitle, beamOpenStoreInFocusedPane, focusedPaneConversationId, isMobile],
, [barAltTitle, beamOpenStoreInFocusedPane, focusedChatTitle, focusedPaneConversationId, isMobile],
);
@@ -461,7 +500,7 @@ export function AppChat() {
[activeFolderId, disableNewButton, focusedPaneConversationId, handleConversationBranch, handleConversationExport, handleConversationImportDialog, handleConversationNewInFocusedPane, handleDeleteConversations, handleOpenConversationInFocusedPane, isDrawerOpen, paneUniqueConversationIds],
);
const focusedMenuItems = React.useMemo(() =>
const focusedChatPanelContent = React.useMemo(() => !focusedPaneConversationId ? null :
<ChatPane
conversationId={focusedPaneConversationId}
disableItems={!focusedPaneConversationId || isFocusedChatEmpty}
@@ -477,8 +516,6 @@ export function AppChat() {
[focusedPaneConversationId, handleConversationBranch, handleConversationFlatten, handleConversationReset, hasConversations, isFocusedChatEmpty, isMessageSelectionMode, isMobile, isTallScreen],
);
useSetOptimaAppMenu(focusedMenuItems, 'AppChat');
// Effects
@@ -578,8 +615,11 @@ export function AppChat() {
return <>
<OptimaDrawerIn>{drawerContent}</OptimaDrawerIn>
{/* -> Toolbar, -> Drawer, -> Panel*/}
<OptimaToolbarIn>{focusedBarContent}</OptimaToolbarIn>
<OptimaDrawerIn>{drawerContent}</OptimaDrawerIn>
<OptimaPanelIn>{focusedChatPanelContent}</OptimaPanelIn>
<PanelGroup
direction={(isMobile || isTallScreen) ? 'vertical' : 'horizontal'}
@@ -596,17 +636,19 @@ export function AppChat() {
const _panesCount = chatPanes.length;
const _keyAndId = `chat-pane-${pane.paneId}`;
const _sepId = `sep-pane-${idx}`;
return <WorkspaceIdProvider conversationId={_paneIsFocused ? _paneConversationId : null} key={_keyAndId}>
return <WorkspaceIdProvider conversationId={_paneIsFocused ? _paneConversationId : null} key={_keyAndId}><ErrorBoundary>
<Panel
id={_keyAndId}
order={idx}
collapsible={chatPanes.length === 2}
defaultSize={(_panesCount === 3 && idx === 1) ? 34 : Math.round(100 / _panesCount)}
minSize={20}
// minSize={20 /* IMPORTANT: this forces a reflow even on a simple hover */}
onClick={(event) => {
const setFocus = chatPanes.length < 2 || !event.altKey;
setFocusedPaneIndex(setFocus ? idx : -1);
// Alt + Click: undocumented feature to clear focus
if (event.altKey && chatPanes.length > 1)
return setFocusedPaneIndex(-1);
setFocusedPaneIndex(idx);
}}
onCollapse={() => {
// NOTE: despite the delay to try to let the dragging settle, there seems to be an issue with the Pane locking the screen
@@ -618,14 +660,20 @@ export function AppChat() {
// for anchoring the scroll button in place
position: 'relative',
...(isMultiPane ? {
marginBottom: '1px', // compensates for the -1px in `composerOpenSx` for the Composer offset
borderRadius: '0.375rem',
border: `2px solid ${_paneIsFocused
borderStyle: 'solid',
borderColor: _paneIsFocused
? ((willMulticast || !isMultiConversationId) ? theme.palette.primary.solidBg : theme.palette.primary.solidBg)
: ((willMulticast || !isMultiConversationId) ? theme.palette.primary.softActiveBg : theme.palette.background.level1)}`,
: ((willMulticast || !isMultiConversationId) ? theme.palette.primary.softActiveBg : theme.palette.divider),
borderWidth: '2px',
// borderBottomWidth: '3px',
// DISABLED on 2024-03-13, it gets in the way quite a lot
// filter: (!willMulticast && !_paneIsFocused)
// ? (!isMultiConversationId ? 'grayscale(66.67%)' /* clone of the same */ : 'grayscale(66.67%)')
// : undefined,
// 2025-02-27: didn't try, here's another version
// filter: _paneIsFocused ? 'none' : 'brightness(0.94) saturate(0.9)',
} : {
// NOTE: this is a workaround for the 'stuck-after-collapse-close' issue. We will collapse the 'other' pane, which
// will get removed (onCollapse), and somehow this pane will be stuck with a pointerEvents: 'none' style, which de-facto
@@ -636,10 +684,21 @@ export function AppChat() {
}),
...((_paneIsIncognito && {
backgroundColor: theme.palette.background.level3,
backgroundImage: 'repeating-linear-gradient(45deg, rgba(0,0,0,0.03), rgba(0,0,0,0.03) 10px, transparent 10px, transparent 20px)',
})),
}}
>
{isMultiPane && !isZenMode && (
<PaneTitleOverlay
paneIdx={idx}
conversationId={_paneConversationId}
isFocused={_paneIsFocused}
isIncognito={_paneIsIncognito}
onConversationDelete={handleDeleteConversations}
/>
)}
<ScrollToBottom
bootToBottom
stickToBottomInitial
@@ -653,7 +712,7 @@ export function AppChat() {
conversationHandler={_paneChatHandler}
capabilityHasT2I={capabilityHasT2I}
chatLLMAntPromptCaching={chatLLM?.interfaces?.includes(LLM_IF_ANT_PromptCaching) ?? false}
chatLLMContextTokens={chatLLM?.contextTokens ?? null}
chatLLMContextTokens={getLLMContextTokens(chatLLM) ?? null}
chatLLMSupportsImages={chatLLM?.interfaces?.includes(LLM_IF_OAI_Vision) ?? false}
fitScreen={isMobile || isMultiPane}
isMobile={isMobile}
@@ -691,50 +750,67 @@ export function AppChat() {
</PanelResizeHandle>
)}
</WorkspaceIdProvider>;
</ErrorBoundary></WorkspaceIdProvider>;
})}
</PanelGroup>
<Composer
isMobile={isMobile}
chatLLM={chatLLM}
composerTextAreaRef={composerTextAreaRef}
targetConversationId={focusedPaneConversationId}
capabilityHasT2I={capabilityHasT2I}
isMulticast={!isMultiConversationId ? null : isComposerMulticast}
isDeveloperMode={isFocusedChatDeveloper}
onAction={handleComposerAction}
onConversationsImportFromFiles={handleConversationsImportFromFiles}
onTextImagine={handleImagineFromText}
setIsMulticast={setIsComposerMulticast}
sx={beamOpenStoreInFocusedPane ? composerClosedSx : composerOpenSx}
/>
{/* Composer with auto-hide */}
<Box {...composerAutoHide.compressorProps}>
<div style={composerAutoHide.compressibleStyle}>
<Composer
isMobile={isMobile}
chatLLM={chatLLM}
composerTextAreaRef={composerTextAreaRef}
targetConversationId={focusedPaneConversationId}
capabilityHasT2I={capabilityHasT2I}
capabilityHasT2IEdit={capabilityHasT2IEdit}
isMulticast={!isMultiConversationId ? null : isComposerMulticast}
isDeveloperMode={isFocusedChatDeveloper}
onAction={handleComposerAction}
onConversationBeamEdit={handleMessageBeamLastInFocusedPane}
onConversationsImportFromFiles={handleConversationsImportFromFiles}
onTextImagine={handleImagineFromText}
setIsMulticast={setIsComposerMulticast}
onComposerHasContent={setComposerHasContent}
sx={isMobile ? composerOpenMobileSx : composerOpenSx}
/>
</div>
</Box>
{/* Hover zone for auto-hide */}
{!forceComposerHide && composerAutoHide.isHidden && <Box {...composerAutoHide.detectorProps} />}
{/* Diagrams */}
{!!diagramConfig && (
<DiagramsModal
config={diagramConfig}
onClose={() => setDiagramConfig(null)}
/>
<React.Suspense fallback={null}>
<DiagramsModalLazy
config={diagramConfig}
onClose={() => setDiagramConfig(null)}
/>
</React.Suspense>
)}
{/* Flatten */}
{!!flattenConversationId && (
<FlattenerModal
conversationId={flattenConversationId}
onConversationBranch={handleConversationBranch}
onClose={() => setFlattenConversationId(null)}
/>
<React.Suspense fallback={null}>
<FlattenerModalLazy
conversationId={flattenConversationId}
onConversationBranch={handleConversationBranch}
onClose={() => setFlattenConversationId(null)}
/>
</React.Suspense>
)}
{/* Import / Export */}
{!!tradeConfig && (
<TradeModal
config={tradeConfig}
onConversationActivate={handleOpenConversationInFocusedPane}
onClose={() => setTradeConfig(null)}
/>
<React.Suspense fallback={null}>
<TradeModalLazy
config={tradeConfig}
onConversationActivate={handleOpenConversationInFocusedPane}
onClose={() => setTradeConfig(null)}
/>
</React.Suspense>
)}
</>;
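The AppChat diff above replaces a memoized expression with a flat nullable chain when resolving the focused pane's beam store. That selection logic can be sketched as a standalone pure function — the `BeamStore` shape below is a placeholder for illustration, not the real store API:

```typescript
// Placeholder for the real beam store handle; only identity matters here.
type BeamStore = { readonly id: string };

// Returns the beam store of the focused pane, but only when that pane
// exists, its beam is open, and a store is actually registered for it.
function resolveFocusedBeamStore(
  focusedPaneIndex: number | null,
  beamsOpens: boolean[] | undefined,
  paneBeamStores: (BeamStore | null)[] | undefined,
): BeamStore | null {
  if (focusedPaneIndex === null) return null;       // no pane has focus
  if (!beamsOpens?.[focusedPaneIndex]) return null; // beam closed (or state unknown)
  return paneBeamStores?.[focusedPaneIndex] ?? null;
}
```

Compared to the earlier `useMemo` version, the chain avoids the non-null assertion (`focusedPaneIndex!`) and is cheap enough to recompute on every render.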
+41 -12
@@ -1,19 +1,41 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Modal, ModalClose } from '@mui/joy';
import { Box, IconButton, Modal } from '@mui/joy';
import CloseFullscreenIcon from '@mui/icons-material/CloseFullscreen';
import { BeamStoreApi, useBeamStore } from '~/modules/beam/store-beam.hooks';
import { BeamView } from '~/modules/beam/BeamView';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { ScrollToBottom } from '~/common/scroll-to-bottom/ScrollToBottom';
import { themeZIndexBeamView } from '~/common/app.theme';
/*const overlaySx: SxProps = {
position: 'absolute',
inset: 0,
zIndex: themeZIndexBeamView, // stay on top of Message > Chips (:1), and Overlays (:2) - note: Desktop Drawer (:26)
}*/
const beamWrapperStyles = {
wrapper: {
position: 'absolute',
inset: 0,
backgroundColor: 'background.level2', // darker than the expected Level1, for a change
} as const,
closeContainer: {
position: 'absolute',
top: '0.25rem',
// left: '0.25rem',
left: { xs: 'calc(50% - 3rem)', md: '50%' }, // center on desktop, a bit left (for the islands) on mobile
// transform: 'translate(-50%, 0)',
zIndex: themeZIndexBeamView, // stay on top of Message > Chips (:1), and Overlays (:2) - note: Desktop Drawer (:26)
} as const,
closeButton: {
// color: 'white',
// borderRadius: '25%',
boxShadow: 'md',
} as const,
} as const;
export function ChatBeamWrapper(props: {
@@ -40,15 +62,22 @@ export function ChatBeamWrapper(props: {
return isMaximized ? (
<Modal open onClose={handleUnMaximize}>
<Box sx={{
backgroundColor: 'background.level1',
position: 'absolute',
inset: 0,
}}>
<Box sx={beamWrapperStyles.wrapper}>
<ScrollToBottom disableAutoStick>
{beamView}
</ScrollToBottom>
<ModalClose sx={{ color: 'white', backgroundColor: 'background.surface', boxShadow: 'xs', mr: 2 }} />
{/* Modal-Close-alike */}
<Box sx={beamWrapperStyles.closeContainer}>
<GoodTooltip title='Exit maximized mode'>
<IconButton variant='solid' onClick={handleUnMaximize} sx={beamWrapperStyles.closeButton}>
<CloseFullscreenIcon />
{/*<CloseRoundedIcon />*/}
</IconButton>
</GoodTooltip>
</Box>
</Box>
</Modal>
) : (
+46 -30
@@ -9,10 +9,11 @@ import type { SystemPurposeExample } from '../../../data';
import type { DiagramConfig } from '~/modules/aifn/digrams/DiagramsModal';
import type { ConversationHandler } from '~/common/chat-overlay/ConversationHandler';
import type { DLLMContextTokens } from '~/common/stores/llms/llms.types';
import { DConversationId, excludeSystemMessages } from '~/common/stores/chat/chat.conversation';
import { ShortcutKey, useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { convertFilesToDAttachmentFragments } from '~/common/attachment-drafts/attachment.pipeline';
import { createDMessageFromFragments, createDMessageTextContent, DMessage, DMessageId, DMessageUserFlag, DMetaReferenceItem, MESSAGE_FLAG_AIX_SKIP } from '~/common/stores/chat/chat.message';
import { createDMessageFromFragments, createDMessageTextContent, DMessage, DMessageId, DMessageUserFlag, DMetaReferenceItem, MESSAGE_FLAG_AIX_SKIP, messageHasUserFlag } from '~/common/stores/chat/chat.message';
import { createTextContentFragment, DMessageFragment, DMessageFragmentId } from '~/common/stores/chat/chat.fragments';
import { openFileForAttaching } from '~/common/components/ButtonAttachFiles';
import { optimaOpenPreferences } from '~/common/layout/optima/useOptima';
@@ -40,7 +41,7 @@ export function ChatMessageList(props: {
conversationHandler: ConversationHandler | null,
capabilityHasT2I: boolean,
chatLLMAntPromptCaching: boolean,
chatLLMContextTokens: number | null,
chatLLMContextTokens: DLLMContextTokens,
chatLLMSupportsImages: boolean,
fitScreen: boolean,
isMobile: boolean,
@@ -118,9 +119,9 @@ export function ChatMessageList(props: {
}
}, [conversationHandler, conversationId, onConversationExecuteHistory, props.chatLLMSupportsImages]);
const handleMessageContinue = React.useCallback(async (_messageId: DMessageId /* Ignored for now */) => {
const handleMessageContinue = React.useCallback(async (_messageId: DMessageId /* Ignored for now */, continueText: null | string) => {
if (conversationId && conversationHandler) {
conversationHandler.messageAppend(createDMessageTextContent('user', 'Continue')); // [chat] append user:Continue
conversationHandler.messageAppend(createDMessageTextContent('user', continueText || 'Continue')); // [chat] append user:Continue (or custom text, likely from an 'option')
await onConversationExecuteHistory(conversationId);
}
}, [conversationHandler, conversationId, onConversationExecuteHistory]);
@@ -137,8 +138,8 @@ export function ChatMessageList(props: {
const handleMessageBeam = React.useCallback(async (messageId: DMessageId) => {
// Message option menu Beam
if (!conversationId || !props.conversationHandler || !props.conversationHandler.isValid()) return;
const inputHistory = props.conversationHandler.historyViewHeadOrThrow('chat-beam-message');
if (!conversationId || !conversationHandler || !conversationHandler.isValid()) return;
const inputHistory = conversationHandler.historyViewHeadOrThrow('chat-beam-message');
if (!inputHistory.length) return;
// TODO: replace the Persona and Auto-Cache-hint in the history?
@@ -151,52 +152,52 @@ export function ChatMessageList(props: {
// assistant: do an in-place beam
if (lastTruncatedMessage.role === 'assistant') {
if (truncatedHistory.length >= 2)
props.conversationHandler.beamInvoke(truncatedHistory.slice(0, -1), [lastTruncatedMessage], lastTruncatedMessage.id);
conversationHandler.beamInvoke(truncatedHistory.slice(0, -1), [lastTruncatedMessage], lastTruncatedMessage.id);
} else if (lastTruncatedMessage.role === 'user') {
// user: truncate and append (but if the next message is an assistant message, import it)
const possibleNextMessage = inputHistory[truncatedHistory.length];
if (possibleNextMessage?.role === 'assistant')
props.conversationHandler.beamInvoke(truncatedHistory, [possibleNextMessage], null);
conversationHandler.beamInvoke(truncatedHistory, [possibleNextMessage], null);
else
props.conversationHandler.beamInvoke(truncatedHistory, [], null);
conversationHandler.beamInvoke(truncatedHistory, [], null);
}
}, [conversationId, props.conversationHandler]);
}, [conversationHandler, conversationId]);
const handleMessageBranch = React.useCallback((messageId: DMessageId) => {
conversationId && onConversationBranch(conversationId, messageId, true);
}, [conversationId, onConversationBranch]);
const handleMessageTruncate = React.useCallback((messageId: DMessageId) => {
props.conversationHandler?.historyTruncateTo(messageId, 0);
}, [props.conversationHandler]);
conversationHandler?.historyTruncateTo(messageId, 0);
}, [conversationHandler]);
const handleMessageDelete = React.useCallback((messageId: DMessageId) => {
props.conversationHandler?.messagesDelete([messageId]);
}, [props.conversationHandler]);
conversationHandler?.messagesDelete([messageId]);
}, [conversationHandler]);
const handleMessageAppendFragment = React.useCallback((messageId: DMessageId, fragment: DMessageFragment) => {
props.conversationHandler?.messageFragmentAppend(messageId, fragment, false, false);
}, [props.conversationHandler]);
conversationHandler?.messageFragmentAppend(messageId, fragment, false, false);
}, [conversationHandler]);
const handleMessageDeleteFragment = React.useCallback((messageId: DMessageId, fragmentId: DMessageFragmentId) => {
props.conversationHandler?.messageFragmentDelete(messageId, fragmentId, false, true);
}, [props.conversationHandler]);
conversationHandler?.messageFragmentDelete(messageId, fragmentId, false, true);
}, [conversationHandler]);
const handleMessageReplaceFragment = React.useCallback((messageId: DMessageId, fragmentId: DMessageFragmentId, newFragment: DMessageFragment) => {
props.conversationHandler?.messageFragmentReplace(messageId, fragmentId, newFragment, false);
}, [props.conversationHandler]);
conversationHandler?.messageFragmentReplace(messageId, fragmentId, newFragment, true);
}, [conversationHandler]);
const handleMessageToggleUserFlag = React.useCallback((messageId: DMessageId, userFlag: DMessageUserFlag, _maxPerConversation?: number) => {
props.conversationHandler?.messageToggleUserFlag(messageId, userFlag, true /* touch */);
conversationHandler?.messageToggleUserFlag(messageId, userFlag, true /* touch */);
// Note: we don't support 'maxPerConversation' yet, which is supposed to turn off the flag from the beginning if too many messages carry it
// if (_maxPerConversation) {
// ...
// }
}, [props.conversationHandler]);
}, [conversationHandler]);
const handleAddInReferenceTo = React.useCallback((item: DMetaReferenceItem) => {
props.conversationHandler?.overlayActions.addInReferenceTo(item);
}, [props.conversationHandler]);
conversationHandler?.overlayActions.addInReferenceTo(item);
}, [conversationHandler]);
const handleTextDiagram = React.useCallback(async (messageId: DMessageId, text: string) => {
conversationId && onTextDiagram({ conversationId: conversationId, messageId, text });
@@ -223,6 +224,16 @@ export function ChatMessageList(props: {
// operate on the local selection set
const areAllSelectedMessagesHidden = React.useMemo(() => {
if (selectedMessages.size === 0) return false;
for (const messageId of selectedMessages) {
const message = conversationMessages.find(m => m.id === messageId);
if (message && !messageHasUserFlag(message, MESSAGE_FLAG_AIX_SKIP))
return false;
}
return true;
}, [selectedMessages, conversationMessages]);
const handleSelectAll = (selected: boolean) => {
const newSelected = new Set<string>();
if (selected)
@@ -238,15 +249,15 @@ export function ChatMessageList(props: {
};
const handleSelectionDelete = React.useCallback(() => {
props.conversationHandler?.messagesDelete(Array.from(selectedMessages));
conversationHandler?.messagesDelete(Array.from(selectedMessages));
setSelectedMessages(new Set());
}, [props.conversationHandler, selectedMessages]);
}, [conversationHandler, selectedMessages]);
const handleSelectionHide = React.useCallback(() => {
const handleSelectionToggleVisibility = React.useCallback(() => {
for (let selectedMessage of Array.from(selectedMessages))
props.conversationHandler?.messageSetUserFlag(selectedMessage, MESSAGE_FLAG_AIX_SKIP, true, true);
conversationHandler?.messageSetUserFlag(selectedMessage, MESSAGE_FLAG_AIX_SKIP, !areAllSelectedMessagesHidden, true);
setSelectedMessages(new Set());
}, [props.conversationHandler, selectedMessages]);
}, [conversationHandler, selectedMessages, areAllSelectedMessagesHidden]);
const { isMessageSelectionMode, setIsMessageSelectionMode } = props;
@@ -282,6 +293,10 @@ export function ChatMessageList(props: {
p: 0,
...props.sx,
// we added these after removing the minSize={20} (%) from the containing panel.
minWidth: '18rem',
// minHeight: '180px', // no need for this, as it's already an overflow scrolling container, so one can reduce it to a pixel
// fix for the double-border on the last message (one by the composer, one to the bottom of the message)
// marginBottom: '-1px',
@@ -320,7 +335,8 @@ export function ChatMessageList(props: {
onClose={() => props.setIsMessageSelectionMode(false)}
onSelectAll={handleSelectAll}
onDeleteMessages={handleSelectionDelete}
onHideMessages={handleSelectionHide}
onToggleVisibility={handleSelectionToggleVisibility}
areAllMessagesHidden={areAllSelectedMessagesHidden}
/>
)}
+1 -1
@@ -13,7 +13,7 @@ import { ScaledTextBlockRenderer } from '~/modules/blocks/ScaledTextBlockRendere
import type { DEphemeral } from '~/common/chat-overlay/store-perchat-ephemerals_slice';
import { ConversationHandler } from '~/common/chat-overlay/ConversationHandler';
import { adjustContentScaling, ContentScaling, lineHeightChatTextMd } from '~/common/app.theme';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { useUIPreferencesStore } from '~/common/stores/store-ui';
// State Pane
@@ -0,0 +1,194 @@
import * as React from 'react';
import { Box, IconButton, Sheet } from '@mui/joy';
import ClearIcon from '@mui/icons-material/Clear';
import DeleteForeverIcon from '@mui/icons-material/DeleteForever';
import EditRoundedIcon from '@mui/icons-material/EditRounded';
import OpenInFullIcon from '@mui/icons-material/OpenInFull';
import type { DConversationId } from '~/common/stores/chat/chat.conversation';
import { InlineTextarea } from '~/common/components/InlineTextarea';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { useConversationTitle } from '~/common/stores/chat/hooks/useConversationTitle';
import { panesManagerActions } from './panes/store-panes-manager';
// configuration
const ENABLE_DELETE = false;
const _styles = {
tileBar: {
position: 'absolute',
top: 0,
left: '50%',
transform: 'translateX(-50%)',
zIndex: 10,
padding: '0 0.125rem 0.125rem',
fontSize: 'sm',
fontWeight: 'md',
borderBottomLeftRadius: '8px',
borderBottomRightRadius: '8px',
// boxShadow: 'xs',
// border: '1px solid',
// borderColor: 'background.popup',
borderTop: 'none',
maxWidth: '78%',
display: 'flex',
alignItems: 'center',
gap: 1,
} as const,
titleBarIncognito: {
backgroundImage: 'repeating-linear-gradient(45deg, rgba(0,0,0,0.1), rgba(0,0,0,0.1) 10px, transparent 10px, transparent 20px)',
backgroundColor: 'neutral.solidBg',
} as const,
title: {
flex: 1,
overflow: 'hidden',
textOverflow: 'ellipsis',
whiteSpace: 'nowrap',
cursor: 'pointer',
minWidth: '2.75rem',
textAlign: 'center',
} as const,
toolButton: {
'--IconButton-size': '1.5rem',
backgroundColor: 'transparent',
opacity: 0.5,
transition: 'opacity 0.1s',
'&:hover': {
opacity: 1,
},
} as const,
toolIcon: {} as const,
toolIconLg: {
fontSize: 'lg',
} as const,
} as const;
export function PaneTitleOverlay(props: {
paneIdx: number,
conversationId: DConversationId | null,
isFocused: boolean,
isIncognito: boolean,
onConversationDelete: (conversationIds: DConversationId[], bypassConfirmation: boolean) => void,
}) {
// state
const [editingTitle, setEditingTitle] = React.useState(false);
// external state
const { title, setUserTitle } = useConversationTitle(props.conversationId);
// if (!title || title?.length < 3)
// return null;
// close tabs handlers
const handleCloseThis = React.useCallback(() => {
panesManagerActions().removePane(props.paneIdx);
}, [props.paneIdx]);
const handleCloseOthers = React.useCallback(() => {
panesManagerActions().removeOtherPanes(props.paneIdx);
}, [props.paneIdx]);
// title handles
const handleTitleEditBegin = React.useCallback(() => {
setEditingTitle(true);
}, []);
const handleTitleEditChange = React.useCallback((newTitle: string) => {
setUserTitle(newTitle);
setEditingTitle(false);
}, [setUserTitle]);
const handleTitleEditEnd = React.useCallback(() => {
setEditingTitle(false);
}, []);
// delete handlers
const { onConversationDelete } = props;
const handleDeleteClicked = React.useCallback((event: React.MouseEvent) => {
event.stopPropagation();
if (props.conversationId)
onConversationDelete([props.conversationId], event.shiftKey);
}, [onConversationDelete, props.conversationId]);
// don't render if not focused
// if (!props.isFocused)
// return null;
const hasTitle = title && title.length > 0;
const color = props.isFocused ? 'primary' : 'neutral';
const variantO = props.isFocused ? 'solid' : 'outlined';
const variantP = props.isFocused ? 'solid' : 'plain';
return (
<Sheet
color={color}
variant={variantO}
sx={!props.isIncognito ? _styles.tileBar : { ..._styles.tileBar, ..._styles.titleBarIncognito }}
>
{/* Close Others */}
{/*<TooltipOutlined title='Close Other Tabs'>*/}
{!editingTitle && <IconButton title='Close Other Tabs' size='sm' color={color} variant={variantP} onClick={handleCloseOthers} sx={_styles.toolButton}>
<OpenInFullIcon sx={_styles.toolIcon} />
</IconButton>}
{/*</TooltipOutlined>*/}
{/* Title */}
{editingTitle ? (
<InlineTextarea
initialText={title || ''}
placeholder='Chat title...'
invertedColors
centerText
onEdit={handleTitleEditChange}
onCancel={handleTitleEditEnd}
sx={{
// flexGrow: 1,
// minWidth: 120,
mx: { md: 1 },
}}
/>
) : !!props.conversationId && <>
{hasTitle && <Box sx={_styles.title} onClick={handleTitleEditBegin}>
{title}
</Box>}
{!hasTitle && <Box fontStyle='italic' onClick={handleTitleEditBegin}>
untitled
</Box>}
{!hasTitle && <TooltipOutlined title='Edit Chat Title'>
<IconButton title='' size='sm' color={color} variant={variantP} onClick={handleTitleEditBegin} sx={_styles.toolButton}>
<EditRoundedIcon sx={_styles.toolIcon} />
</IconButton>
</TooltipOutlined>}
</>}
{/* Delete This */}
{ENABLE_DELETE && hasTitle && !!props.conversationId && (
<TooltipOutlined title='Delete Chat (Shift+Click to bypass confirmation)'>
<IconButton size='sm' variant={variantP} onClick={handleDeleteClicked} sx={_styles.toolButton}>
<DeleteForeverIcon />
</IconButton>
</TooltipOutlined>
)}
{/* Close This */}
{/*<TooltipOutlined title='Close'>*/}
{!editingTitle && <IconButton title='Close Tab' size='sm' color={color} variant={variantP} onClick={handleCloseThis} sx={_styles.toolButton}>
<ClearIcon sx={_styles.toolIconLg} />
</IconButton>}
{/*</TooltipOutlined>*/}
</Sheet>
);
}
+136 -88
@@ -1,19 +1,17 @@
import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, IconButton, styled, Typography } from '@mui/joy';
import { Box, IconButton, Typography } from '@mui/joy';
import CloseRoundedIcon from '@mui/icons-material/CloseRounded';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import MinimizeIcon from '@mui/icons-material/Minimize';
// import { isMacUser } from '~/common/util/pwaUtils';
import type { ShortcutObject } from '~/common/components/shortcuts/useGlobalShortcuts';
import { ShortcutKey, ShortcutObject } from '~/common/components/shortcuts/useGlobalShortcuts';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { useGlobalShortcutsStore } from '~/common/components/shortcuts/store-global-shortcuts';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
// configuration
@@ -27,12 +25,92 @@ const hideButtonTooltip = (
</Box>
);
const hideButtonSx: SxProps = {
'--IconButton-size': '28px',
'--Icon-fontSize': '16px',
'--Icon-color': 'var(--joy-palette-text-tertiary)',
mr: -0.5,
};
const _styles = {
bar: {
borderBottom: '1px solid',
// borderBottomColor: 'var(--joy-palette-divider)',
borderBottomColor: 'rgba(var(--joy-palette-neutral-mainChannel) / 0.1)',
// borderTopColor: 'rgba(var(--joy-palette-neutral-mainChannel, 99 107 116) / 0.4)',
// backgroundColor: 'var(--joy-palette-background-surface)',
// paddingBlock: '0.25rem',
paddingInline: '0.5rem',
// layout
display: 'flex',
flexFlow: 'row nowrap',
columnGap: '1.5rem', // space between shortcuts
lineHeight: '1em',
// animation: `${animateAppear} 0.3s ease-out`,
// transition: 'all 0.2s ease',
// '&:hover': {
// backgroundColor: 'var(--joy-palette-background-level1)',
// },
} as const,
hideButton: {
'--IconButton-size': '28px',
'--Icon-fontSize': '16px',
'--Icon-color': 'var(--joy-palette-text-tertiary)',
mr: -0.5,
} as const,
shortcut: {
display: 'flex',
alignItems: 'center',
whiteSpace: 'nowrap',
gap: '2px', // space between modifiers
marginBlock: '0.25rem',
// transition: 'transform 0.2s ease',
// '&:hover': {
// transform: 'scale(1.05)',
// },
'&:hover > div': {
backgroundColor: 'background.level1',
} as const,
cursor: 'pointer',
[`&[aria-disabled="true"]`]: {
opacity: 0.5,
pointerEvents: 'none',
} as const,
} as const,
itemKeyGroup: {
fontSize: 'xs',
fontWeight: 'md',
outline: '1px solid',
outlineColor: 'neutral.outlinedBorder',
borderRadius: 'xs',
// backgroundColor: 'var(--joy-palette-neutral-outlinedBorder)',
backgroundColor: 'background.popup',
// boxShadow: 'inset 2px 0px 4px -2px var(--joy-palette-background-backdrop)',
boxShadow: 'xs',
// minWidth: '1rem',
paddingBlock: '2px',
paddingInline: '1px',
// pointerEvents: 'none',
cursor: 'pointer',
transition: 'background-color 1s ease',
display: 'flex',
textAlign: 'center',
// Remove the gap and use dividers instead
gap: 0,
'& > span': {
position: 'relative',
paddingInline: '4px',
minWidth: '1rem',
'&:not(:last-child)': {
borderRight: '1px solid',
borderRightColor: 'neutral.outlinedBorder',
},
},
} as const,
itemIcon: {
fontSize: 'md',
} as const,
} as const;
// const animateAppear = keyframes`
// from {
@@ -45,64 +123,6 @@ const hideButtonSx: SxProps = {
// }
// `;
const StatusBarContainer = styled(Box)({
borderBottom: '1px solid',
// borderBottomColor: 'var(--joy-palette-divider)',
borderBottomColor: 'rgba(var(--joy-palette-neutral-mainChannel) / 0.1)',
// borderTopColor: 'rgba(var(--joy-palette-neutral-mainChannel, 99 107 116) / 0.4)',
// backgroundColor: 'var(--joy-palette-background-surface)',
// paddingBlock: '0.25rem',
paddingInline: '0.5rem',
// layout
display: 'flex',
flexFlow: 'row nowrap',
columnGap: '1.5rem', // space between shortcuts
lineHeight: '1em',
// animation: `${animateAppear} 0.3s ease-out`,
// transition: 'all 0.2s ease',
// '&:hover': {
// backgroundColor: 'var(--joy-palette-background-level1)',
// },
});
const ShortcutContainer = styled(Box)({
display: 'flex',
alignItems: 'center',
whiteSpace: 'nowrap',
gap: '2px', // space between modifiers
marginBlock: '0.25rem',
// transition: 'transform 0.2s ease',
// '&:hover': {
// transform: 'scale(1.05)',
// },
'&:hover > div': {
backgroundColor: 'var(--joy-palette-background-level1)',
},
cursor: 'pointer',
[`&[aria-disabled="true"]`]: {
opacity: 0.5,
pointerEvents: 'none',
}
});
const ShortcutKey = styled(Box)({
fontSize: 'var(--joy-fontSize-xs)',
fontWeight: 'var(--joy-fontWeight-md)',
border: '1px solid',
borderColor: 'var(--joy-palette-neutral-outlinedBorder)',
borderRadius: 'var(--joy-radius-xs)',
// backgroundColor: 'var(--joy-palette-neutral-outlinedBorder)',
backgroundColor: 'var(--joy-palette-background-popup)',
// boxShadow: 'inset 2px 0px 4px -2px var(--joy-palette-background-backdrop)',
boxShadow: 'var(--joy-shadow-xs)',
// minWidth: '1rem',
paddingBlock: '1px',
paddingInline: '4px',
// pointerEvents: 'none',
cursor: 'pointer',
transition: 'background-color 1s ease',
});
// Display mac-style shortcuts on windows as well
const displayMacModifiers = true;
@@ -118,6 +138,8 @@ function _platformAwareModifier(symbol: 'Ctrl' | 'Alt' | 'Shift') {
}
}
const ShortcutItemMemo = React.memo(ShortcutItem);
function ShortcutItem(props: { shortcut: ShortcutObject }) {
const handleClicked = React.useCallback(() => {
@@ -126,17 +148,24 @@ function ShortcutItem(props: { shortcut: ShortcutObject }) {
}, [props.shortcut]);
return (
<ShortcutContainer onClick={!props.shortcut.disabled ? handleClicked : undefined} aria-disabled={props.shortcut.disabled}>
{!!props.shortcut.ctrl && <ShortcutKey>{_platformAwareModifier('Ctrl')}</ShortcutKey>}
{!!props.shortcut.shift && <ShortcutKey>{_platformAwareModifier('Shift')}</ShortcutKey>}
{/*{!!props.shortcut.altForNonMac && <ShortcutKey onClick={handleClicked}>{_platformAwareModifier('Alt')}</ShortcutKey>}*/}
<ShortcutKey>{props.shortcut.key === 'Escape' ? 'Esc' : props.shortcut.key === 'Enter' ? '↵' : props.shortcut.key.toUpperCase()}</ShortcutKey>
<Box
onClick={!props.shortcut.disabled ? handleClicked : undefined}
aria-disabled={props.shortcut.disabled}
sx={_styles.shortcut}
>
<Box sx={_styles.itemKeyGroup}>
{!!props.shortcut.ctrl && <span>{_platformAwareModifier('Ctrl')}</span>}
{!!props.shortcut.shift && <span>{_platformAwareModifier('Shift')}</span>}
{/*{!!props.shortcut.altForNonMac && <span>{_platformAwareModifier('Alt')}</span>}*/}
<span>{props.shortcut.key === 'Escape' ? 'Esc' : props.shortcut.key === 'Enter' ? '↵' : props.shortcut.key.toUpperCase()}</span>
</Box>
&nbsp;<Typography level='body-xs'>{props.shortcut.description}</Typography>
{props.shortcut.endDecoratorIcon && <props.shortcut.endDecoratorIcon sx={{ fontSize: 'md' }} />}
</ShortcutContainer>
{!!props.shortcut.endDecoratorIcon && <props.shortcut.endDecoratorIcon sx={_styles.itemIcon} />}
</Box>
);
}
export const StatusBarMemo = React.memo(StatusBar);
export function StatusBar(props: { toggleMinimized?: () => void, isMinimized?: boolean }) {
@@ -148,18 +177,34 @@ export function StatusBar(props: { toggleMinimized?: () => void, isMinimized?: b
// external state
const labsShowShortcutBar = useUXLabsStore(state => state.labsShowShortcutBar);
const shortcuts = useGlobalShortcutsStore(useShallow(state => {
// get visible shortcuts
let visibleShortcuts = !labsShowShortcutBar ? [] : state.getAllShortcuts().filter(shortcut => !!shortcut.description);
// filter by highest level if levels are present
const maxLevel = Math.max(...visibleShortcuts.map(s => s.level ?? 0));
if (maxLevel > 0)
visibleShortcuts = visibleShortcuts.filter(s => s.level === maxLevel);
visibleShortcuts.sort((a, b) => {
// if they don't have a 'shift', they are sorted first
if (a.shift !== b.shift)
return a.shift ? 1 : -1;
// (Hack) If the description is 'Beam Edit', it goes last
if (a.description === 'Beam Edit')
return 1;
// alphabetical for the rest
// 1. First by level
if ((a.level ?? 0) !== (b.level ?? 0))
return (b.level ?? 0) - (a.level ?? 0);
// 2. Then by modifiers presence (no modifiers first)
const aModifiers = (a.ctrl ? 1 : 0) + (a.shift ? 1 : 0);
const bModifiers = (b.ctrl ? 1 : 0) + (b.shift ? 1 : 0);
if (aModifiers !== bModifiers)
return aModifiers - bModifiers;
// 3a. Special case for ShortcutKey.Esc, at the beginning
if (a.key === ShortcutKey.Esc) return -1;
if (b.key === ShortcutKey.Esc) return 1;
// 3. Special case for 'Beam Edit'
if (a.description === 'Beam Edit') return 1;
if (b.description === 'Beam Edit') return -1;
// 4. Finally alphabetically by key
return a.key.localeCompare(b.key);
});
return visibleShortcuts;
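The rewritten comparator above replaces the old shift-then-alphabetical sort with a four-step ordering. A minimal sketch of that ordering, extracted from the diff (the interface is a stand-in for `ShortcutObject`, and `ShortcutKey.Esc` is assumed to equal `'Escape'`):

```typescript
// 1. higher level first, 2. fewer modifiers first, 3. Esc leads and
// 'Beam Edit' trails within a group, 4. alphabetical by key.
interface Sc { key: string; description: string; ctrl?: boolean; shift?: boolean; level?: number }

function compareShortcuts(a: Sc, b: Sc): number {
  // 1. First by level (higher level wins)
  if ((a.level ?? 0) !== (b.level ?? 0)) return (b.level ?? 0) - (a.level ?? 0);
  // 2. Then by modifiers presence (no modifiers first)
  const am = (a.ctrl ? 1 : 0) + (a.shift ? 1 : 0);
  const bm = (b.ctrl ? 1 : 0) + (b.shift ? 1 : 0);
  if (am !== bm) return am - bm;
  // 3a. Special case: Esc at the beginning
  if (a.key === 'Escape') return -1;
  if (b.key === 'Escape') return 1;
  // 3b. Special case: 'Beam Edit' at the end
  if (a.description === 'Beam Edit') return 1;
  if (b.description === 'Beam Edit') return -1;
  // 4. Finally alphabetically by key
  return a.key.localeCompare(b.key);
}
```

Note the comparator itself re-checks `level` even though the selector has already filtered to the highest level, so it stays correct if that pre-filter is ever dropped.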
@@ -202,27 +247,30 @@ export function StatusBar(props: { toggleMinimized?: () => void, isMinimized?: b
return null;
return (
<StatusBarContainer aria-label='Status bar'>
<Box
aria-label='Shortcuts and status bar'
sx={_styles.bar}
>
{(!props.toggleMinimized || !COMPOSER_ENABLE_MINIMIZE) && !props.isMinimized ? (
// Close Button
<GoodTooltip variantOutlined arrow placement='top' title={hideButtonTooltip}>
<IconButton size='sm' sx={hideButtonSx} onClick={handleHideShortcuts}>
<IconButton size='sm' onClick={handleHideShortcuts} sx={_styles.hideButton}>
<CloseRoundedIcon />
</IconButton>
</GoodTooltip>
) : (
// Minimize / Maximize Button - note the Maximize icon would be more correct, but also less discoverable
<IconButton size='sm' sx={hideButtonSx} onClick={props.toggleMinimized}>
<IconButton size='sm' onClick={props.toggleMinimized} sx={_styles.hideButton}>
{props.isMinimized ? <ExpandLessIcon /> : <MinimizeIcon />}
</IconButton>
)}
{/* Show all shortcuts */}
{shortcuts.map((shortcut, idx) => (
<ShortcutItem key={shortcut.key + idx} shortcut={shortcut} />
<ShortcutItemMemo key={shortcut.key + idx} shortcut={shortcut} />
))}
</StatusBarContainer>
</Box>
);
}
@@ -127,7 +127,7 @@ export function CameraCaptureModal(props: {
const handleVideoDownloadClicked = React.useCallback(async () => {
if (!videoRef.current) return;
await downloadVideoFrame(videoRef.current, 'camera', 'image/jpeg', 0.98);
await downloadVideoFrame(videoRef.current, 'camera', 'image/jpeg', 0.98).catch(alert);
}, [videoRef]);
+110 -67
@@ -2,12 +2,10 @@ import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import type { FileWithHandle } from 'browser-fs-access';
import { Box, Button, ButtonGroup, Card, Dropdown, Grid, IconButton, Menu, MenuButton, MenuItem, Textarea, Tooltip, Typography } from '@mui/joy';
import { Box, Button, ButtonGroup, Card, Dropdown, Grid, IconButton, Menu, MenuButton, MenuItem, Textarea, Typography } from '@mui/joy';
import { ColorPaletteProp, SxProps, VariantProp } from '@mui/joy/styles/types';
import AddCircleOutlineIcon from '@mui/icons-material/AddCircleOutline';
import AutoAwesomeIcon from '@mui/icons-material/AutoAwesome';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import FormatPaintTwoToneIcon from '@mui/icons-material/FormatPaintTwoTone';
import PsychologyIcon from '@mui/icons-material/Psychology';
import SendIcon from '@mui/icons-material/Send';
import StopOutlinedIcon from '@mui/icons-material/StopOutlined';
@@ -19,35 +17,34 @@ import { useChatAutoSuggestAttachmentPrompts, useChatMicTimeoutMsValue } from '.
import { useAgiAttachmentPrompts } from '~/modules/aifn/agiattachmentprompts/useAgiAttachmentPrompts';
import { useBrowseCapability } from '~/modules/browse/store-module-browsing';
import { DLLM, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { DLLM, getLLMContextTokens, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { AudioGenerator } from '~/common/util/audio/AudioGenerator';
import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
import { ButtonAttachFilesMemo, openFileForAttaching } from '~/common/components/ButtonAttachFiles';
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { ConversationsManager } from '~/common/chat-overlay/ConversationsManager';
import { DMessageMetadata, DMetaReferenceItem, messageFragmentsReduceText } from '~/common/stores/chat/chat.message';
import { DMessageId, DMessageMetadata, DMetaReferenceItem, messageFragmentsReduceText } from '~/common/stores/chat/chat.message';
import { ShortcutKey, ShortcutObject, useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { addSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import { animationEnterBelow } from '~/common/util/animUtils';
import { browserSpeechRecognitionCapability, PLACEHOLDER_INTERIM_TRANSCRIPT, SpeechResult, useSpeechRecognition } from '~/common/components/speechrecognition/useSpeechRecognition';
import { DConversationId } from '~/common/stores/chat/chat.conversation';
import { copyToClipboard, supportsClipboardRead } from '~/common/util/clipboardUtils';
import { createTextContentFragment, DMessageAttachmentFragment, DMessageContentFragment, duplicateDMessageFragmentsNoVoid } from '~/common/stores/chat/chat.fragments';
import { estimateTextTokens, glueForMessageTokens, marshallWrapDocFragments } from '~/common/stores/chat/chat.tokens';
import { createTextContentFragment, DMessageAttachmentFragment, DMessageContentFragment, duplicateDMessageFragments } from '~/common/stores/chat/chat.fragments';
import { glueForMessageTokens, marshallWrapDocFragments } from '~/common/stores/chat/chat.tokens';
import { isValidConversation, useChatStore } from '~/common/stores/chat/store-chats';
import { getModelParameterValueOrThrow } from '~/common/stores/llms/llms.parameters';
import { launchAppCall, removeQueryParam, useRouterQuery } from '~/common/app.routes';
import { lineHeightTextareaMd } from '~/common/app.theme';
import { lineHeightTextareaMd, themeBgAppChatComposer } from '~/common/app.theme';
import { optimaOpenPreferences } from '~/common/layout/optima/useOptima';
import { platformAwareKeystrokes } from '~/common/components/KeyStroke';
import { supportsScreenCapture } from '~/common/util/screenCaptureUtils';
import { useChatComposerOverlayStore } from '~/common/chat-overlay/store-perchat_vanilla';
import { useComposerStartupText, useLogicSherpaStore } from '~/common/logic/store-logic-sherpa';
import { useDebouncer } from '~/common/components/useDebouncer';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
import { useUICounter, useUIPreferencesStore } from '~/common/state/store-ui';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { useUICounter, useUIPreferencesStore } from '~/common/stores/store-ui';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
import type { ActileItem } from './actile/ActileProvider';
import { providerAttachmentLabels } from './actile/providerAttachmentLabels';
@@ -57,6 +54,7 @@ import { useActileManager } from './actile/useActileManager';
import type { AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import { LLMAttachmentDraftsAction, LLMAttachmentsList } from './llmattachments/LLMAttachmentsList';
import { PhPaintBrush } from '~/common/components/icons/phosphor/PhPaintBrush';
import { useAttachmentDrafts } from '~/common/attachment-drafts/useAttachmentDrafts';
import { useLLMAttachmentDrafts } from './llmattachments/useLLMAttachmentDrafts';
@@ -69,19 +67,24 @@ import { ButtonAttachScreenCaptureMemo } from './buttons/ButtonAttachScreenCaptu
import { ButtonAttachWebMemo } from './buttons/ButtonAttachWeb';
import { ButtonBeamMemo } from './buttons/ButtonBeam';
import { ButtonCallMemo } from './buttons/ButtonCall';
import { ButtonGroupDrawRepeat } from './buttons/ButtonGroupDrawRepeat';
import { ButtonMicContinuationMemo } from './buttons/ButtonMicContinuation';
import { ButtonMicMemo } from './buttons/ButtonMic';
import { ButtonMultiChatMemo } from './buttons/ButtonMultiChat';
import { ButtonOptionsDraw } from './buttons/ButtonOptionsDraw';
import { ComposerTextAreaActions } from './textarea/ComposerTextAreaActions';
import { StatusBar } from '../StatusBar';
import { ComposerTextAreaDrawActions } from './textarea/ComposerTextAreaDrawActions';
import { StatusBarMemo } from '../StatusBar';
import { TokenBadgeMemo } from './tokens/TokenBadge';
import { TokenProgressbarMemo } from './tokens/TokenProgressbar';
import { useComposerDragDrop } from './useComposerDragDrop';
import { useTextTokenCount } from './tokens/useTextTokenCounter';
import { useWebInputModal } from './WebInputModal';
// configuration
const zIndexComposerOverlayMic = 10;
const SHOW_TIPS_AFTER_RELOADS = 25;
const paddingBoxSx: SxProps = {
@@ -101,20 +104,24 @@ const minimizedSx: SxProps = {
export function Composer(props: {
isMobile: boolean;
chatLLM: DLLM | null;
composerTextAreaRef: React.RefObject<HTMLTextAreaElement>;
composerTextAreaRef: React.RefObject<HTMLTextAreaElement | null>;
targetConversationId: DConversationId | null;
capabilityHasT2I: boolean;
capabilityHasT2IEdit: boolean;
isMulticast: boolean | null;
isDeveloperMode: boolean;
onAction: (conversationId: DConversationId, chatExecuteMode: ChatExecuteMode, fragments: (DMessageContentFragment | DMessageAttachmentFragment)[], metadata?: DMessageMetadata) => boolean;
onConversationBeamEdit: (conversationId: DConversationId, editMessageId?: DMessageId) => Promise<void>;
onConversationsImportFromFiles: (files: File[]) => Promise<void>;
onTextImagine: (conversationId: DConversationId, text: string) => void;
setIsMulticast: (on: boolean) => void;
onComposerHasContent: (hasContent: boolean) => void;
sx?: SxProps;
}) {
// state
const [composeText, debouncedText, setComposeText] = useDebouncer('', 300, 1200, true);
const [composeText, setComposeText] = React.useState('');
const [drawRepeat, setDrawRepeat] = React.useState(1);
const [micContinuation, setMicContinuation] = React.useState(false);
const [speechInterimResult, setSpeechInterimResult] = React.useState<SpeechResult | null>(null);
const [sendStarted, setSendStarted] = React.useState(false);
@@ -135,12 +142,13 @@ export function Composer(props: {
labsShowCost: state.labsShowCost,
labsShowShortcutBar: state.labsShowShortcutBar,
})));
const timeToShowTips = useLogicSherpaStore(state => state.usageCount >= 5);
const timeToShowTips = useLogicSherpaStore(state => state.usageCount >= SHOW_TIPS_AFTER_RELOADS);
const { novel: explainShiftEnter, touch: touchShiftEnter } = useUICounter('composer-shift-enter');
const { novel: explainAltEnter, touch: touchAltEnter } = useUICounter('composer-alt-enter');
const { novel: explainCtrlEnter, touch: touchCtrlEnter } = useUICounter('composer-ctrl-enter');
const [startupText, setStartupText] = useComposerStartupText();
const enterIsNewline = useUIPreferencesStore(state => state.enterIsNewline);
const composerQuickButton = useUIPreferencesStore(state => state.composerQuickButton);
const chatMicTimeoutMs = useChatMicTimeoutMsValue();
const { assistantAbortible, systemPurposeId, tokenCount: _historyTokenCount, abortConversationTemp } = useChatStore(useShallow(state => {
const conversation = state.conversations.find(_c => _c.id === props.targetConversationId);
@@ -170,7 +178,7 @@ export function Composer(props: {
const enableLoadURLsInComposer = hasComposerBrowseCapability && !composeText.startsWith('/');
// user message for attachments
const { onConversationsImportFromFiles } = props;
const { onConversationBeamEdit, onConversationsImportFromFiles } = props;
const handleFilterAGIFile = React.useCallback(async (file: File): Promise<boolean> =>
await showPromisedOverlay('composer-open-or-attach', { rejectWithValue: false }, ({ onResolve, onUserReject }) => (
<ConfirmationModal
@@ -186,11 +194,12 @@ export function Composer(props: {
)), [onConversationsImportFromFiles, showPromisedOverlay]);
// attachments-overlay: comes from the attachments slice of the conversation overlay
const showChatAttachments = chatExecuteModeCanAttach(chatExecuteMode, props.capabilityHasT2IEdit);
const {
/* items */ attachmentDrafts,
/* append */ attachAppendClipboardItems, attachAppendDataTransfer, attachAppendEgoFragments, attachAppendFile, attachAppendUrl,
/* take */ attachmentsRemoveAll, attachmentsTakeAllFragments, attachmentsTakeFragmentsByType,
} = useAttachmentDrafts(conversationOverlayStore, enableLoadURLsInComposer, chatLLMSupportsImages, handleFilterAGIFile);
} = useAttachmentDrafts(conversationOverlayStore, enableLoadURLsInComposer, chatLLMSupportsImages, handleFilterAGIFile, showChatAttachments === 'only-images');
// attachments derived state
const llmAttachmentDraftsCollection = useLLMAttachmentDrafts(attachmentDrafts, props.chatLLM, chatLLMSupportsImages);
@@ -208,7 +217,8 @@ export function Composer(props: {
const isMobile = props.isMobile;
const isDesktop = !props.isMobile;
const noConversation = !targetConversationId;
const showChatAttachments = chatExecuteModeCanAttach(chatExecuteMode);
const composerTextSuffix = chatExecuteMode === 'generate-image' && isDesktop && drawRepeat > 1 ? ` x${drawRepeat}` : '';
const micIsRunning = !!speechInterimResult;
// more mic way below, as we use complex hooks
@@ -216,17 +226,13 @@ export function Composer(props: {
// tokens derived state
const tokensComposerTextDebounced = React.useMemo(() => {
return (debouncedText && props.chatLLM)
? estimateTextTokens(debouncedText, props.chatLLM, 'composer text')
: 0;
}, [props.chatLLM, debouncedText]);
let tokensComposer = tokensComposerTextDebounced + (llmAttachmentDraftsCollection.llmTokenCountApprox || 0);
const tokensComposerTextDebounced = useTextTokenCount(composeText, props.chatLLM, 800, 1600);
let tokensComposer = (tokensComposerTextDebounced ?? 0) + (llmAttachmentDraftsCollection.llmTokenCountApprox || 0);
if (props.chatLLM && tokensComposer > 0)
tokensComposer += glueForMessageTokens(props.chatLLM);
const tokensHistory = _historyTokenCount;
const tokensResponseMax = getModelParameterValueOrThrow('llmResponseTokens', props.chatLLM?.initialParameters, props.chatLLM?.userParameters, 0) ?? 0;
const tokenLimit = props.chatLLM?.contextTokens || 0;
const tokenLimit = getLLMContextTokens(props.chatLLM) ?? 0;
const tokenChatPricing = props.chatLLM?.pricing?.chat;
@@ -238,6 +244,13 @@ export function Composer(props: {
}
}, [setComposeText, setStartupText, startupText]);
// Effect: notify the parent of presence/absence of content
const isContentful = composeText.length > 0 || !!attachmentDrafts.length;
const { onComposerHasContent } = props;
React.useEffect(() => {
onComposerHasContent?.(isContentful);
}, [isContentful, onComposerHasContent]);
// Overlay actions
@@ -298,9 +311,9 @@ export function Composer(props: {
// prepare the fragments: content (if any) and attachments (if allowed, and any)
const fragments: (DMessageContentFragment | DMessageAttachmentFragment)[] = [];
if (composerText)
fragments.push(createTextContentFragment(composerText));
fragments.push(createTextContentFragment(composerText + composerTextSuffix));
const canAttach = chatExecuteModeCanAttach(_chatExecuteMode);
const canAttach = chatExecuteModeCanAttach(_chatExecuteMode, props.capabilityHasT2IEdit);
if (canAttach) {
const attachmentFragments = await attachmentsTakeAllFragments('global', 'app-chat');
fragments.push(...attachmentFragments);
@@ -319,7 +332,7 @@ export function Composer(props: {
if (enqueued)
_handleClearText();
return enqueued;
}, [attachmentsTakeAllFragments, confirmProceedIfAttachmentsNotSupported, _handleClearText, inReferenceTo, onAction, targetConversationId]);
}, [targetConversationId, confirmProceedIfAttachmentsNotSupported, composerTextSuffix, props.capabilityHasT2IEdit, inReferenceTo, onAction, _handleClearText, attachmentsTakeAllFragments]);
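The hunk above changes the outgoing-message assembly in two ways: the text fragment now carries the draw-repeat suffix (e.g. ` x3`), and attachability now also depends on `capabilityHasT2IEdit`. A minimal stand-in for that assembly order (not the app's real `createTextContentFragment`/`chatExecuteModeCanAttach` API):

```typescript
// Text fragment first (with any suffix), then attachment fragments only
// when the current execute mode allows attaching.
type Fragment = { kind: 'text' | 'attachment'; value: string };

function assembleFragments(text: string, suffix: string, attachments: string[], canAttach: boolean): Fragment[] {
  const fragments: Fragment[] = [];
  if (text)
    fragments.push({ kind: 'text', value: text + suffix });
  if (canAttach)
    fragments.push(...attachments.map(value => ({ kind: 'attachment' as const, value })));
  return fragments;
}
```

Keeping the suffix out of the textarea state and appending it only at send time is what lets the ` xN` repeat indicator stay a pure render-time detail.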
const handleSendAction = React.useCallback(async (chatExecuteMode: ChatExecuteMode, composerText: string): Promise<boolean> => {
setSendStarted(true);
@@ -445,8 +458,13 @@ export function Composer(props: {
addSnackbar({ key: 'chat-mic-running', message: 'Please wait for the microphone to finish.', type: 'info' });
return;
}
await handleSendAction('beam-content', composeText); // 'beam' button
}, [composeText, handleSendAction, micIsRunning]);
if (composeText) {
await handleSendAction('beam-content', composeText); // 'beam' button
} else {
if (targetConversationId)
void onConversationBeamEdit(targetConversationId); // beam-edit conversation
}
}, [composeText, handleSendAction, micIsRunning, onConversationBeamEdit, targetConversationId]);
const handleStopClicked = React.useCallback(() => {
targetConversationId && abortConversationTemp(targetConversationId);
@@ -493,7 +511,7 @@ export function Composer(props: {
const cHandler = ConversationsManager.getHandler(conversationId);
const messageToEmbed = cHandler.historyFindMessageOrThrow(messageId);
if (messageToEmbed) {
const fragmentsCopy = duplicateDMessageFragmentsNoVoid(messageToEmbed.fragments); // [attach] deep copy a message's fragments to attach to ego
const fragmentsCopy = duplicateDMessageFragments(messageToEmbed.fragments, true); // [attach] deep copy a message's fragments to attach to ego
if (fragmentsCopy.length) {
const chatTitle = cHandler.title() ?? '';
const messageText = messageFragmentsReduceText(fragmentsCopy);
@@ -600,7 +618,7 @@ export function Composer(props: {
links.forEach(link => void attachAppendUrl('input-link', link.url));
}, [attachAppendUrl]);
const { openWebInputDialog, webInputDialogComponent } = useWebInputModal(handleAttachWebLinks);
const { openWebInputDialog, webInputDialogComponent } = useWebInputModal(handleAttachWebLinks, composeText);
// Attachments Down
@@ -630,8 +648,12 @@ export function Composer(props: {
const composerShortcuts: ShortcutObject[] = [];
if (showChatAttachments) {
composerShortcuts.push({ key: 'f', ctrl: true, shift: true, action: () => openFileForAttaching(true, handleAttachFiles), description: 'Attach File' });
composerShortcuts.push({ key: 'l', ctrl: true, shift: true, action: openWebInputDialog, description: 'Attach Link' });
if (supportsClipboardRead())
composerShortcuts.push({ key: 'v', ctrl: true, shift: true, action: attachAppendClipboardItems, description: 'Attach Clipboard' });
// Future: keep reactive state here to support Live Screen Capture and more
// if (labsAttachScreenCapture && supportsScreenCapture)
// composerShortcuts.push({ key: 's', ctrl: true, shift: true, action: openScreenCaptureDialog, description: 'Attach Screen Capture' });
}
if (recognitionState.isActive) {
composerShortcuts.push({ key: 'm', ctrl: true, action: handleFinishMicAndSend, description: 'Mic · Send', disabled: !recognitionState.hasSpeech || sendStarted, endDecoratorIcon: TelegramIcon as any, level: 4 });
@@ -650,7 +672,7 @@ export function Composer(props: {
}, description: 'Microphone',
});
return composerShortcuts;
}, [attachAppendClipboardItems, handleAttachFiles, handleFinishMicAndSend, recognitionState.hasSpeech, recognitionState.isActive, sendStarted, showChatAttachments, toggleRecognition]));
}, [attachAppendClipboardItems, handleAttachFiles, handleFinishMicAndSend, openWebInputDialog, recognitionState.hasSpeech, recognitionState.isActive, sendStarted, showChatAttachments, toggleRecognition]));
// ...
@@ -662,7 +684,7 @@ export function Composer(props: {
const isDraw = chatExecuteMode === 'generate-image';
const showChatInReferenceTo = !!inReferenceTo?.length;
const showChatExtras = isText && !showChatInReferenceTo;
const showChatExtras = isText && !showChatInReferenceTo && !assistantAbortible && composerQuickButton !== 'off';
const sendButtonVariant: VariantProp = (isAppend || (isMobile && isTextBeam)) ? 'outlined' : 'solid';
@@ -678,13 +700,15 @@ export function Composer(props: {
: isAppend ? <SendIcon sx={{ fontSize: 18 }} />
: isReAct ? <PsychologyIcon />
: isTextBeam ? <ChatBeamIcon /> /* <GavelIcon /> */
: isDraw ? <FormatPaintTwoToneIcon />
: isDraw ? <PhPaintBrush />
: <TelegramIcon />;
const beamButtonColor: ColorPaletteProp | undefined =
!llmAttachmentDraftsCollection.canAttachAllFragments ? 'warning'
: undefined;
const showTint: ColorPaletteProp | undefined = isDraw ? 'warning' : isReAct ? 'success' : undefined;
// stable randomization of the /verb, between '/draw', '/react'
const placeholderAction = React.useMemo(() => {
const actions: string[] = ['/react'];
@@ -704,13 +728,13 @@ export function Composer(props: {
+ (recognitionState.isAvailable ? ' · ramble' : '')
+ '...';
if (isDesktop && timeToShowTips) {
if (isDesktop && timeToShowTips && !isDraw) {
if (explainShiftEnter)
textPlaceholder += !enterIsNewline ? '\n\n💡 Shift + Enter to add a new line' : '\n\n💡 Shift + Enter to send';
else if (explainAltEnter)
textPlaceholder += platformAwareKeystrokes('\n\n💡 Tip: Alt + Enter to just append the message');
textPlaceholder += !enterIsNewline ? '\n\n Shift + Enter to add a new line' : '\n\n Shift + Enter to send';
// else if (explainAltEnter)
// textPlaceholder += platformAwareKeystrokes('\n\n Tip: Alt + Enter to just append the message');
else if (explainCtrlEnter)
textPlaceholder += platformAwareKeystrokes('\n\n💡 Tip: Ctrl + Enter to beam');
textPlaceholder += platformAwareKeystrokes('\n\n Tip: Ctrl + Enter to beam');
}
const stableGridSx: SxProps = React.useMemo(() => ({
@@ -721,9 +745,14 @@ export function Composer(props: {
}), [dragContainerSx]);
return (
<Box aria-label='User Message' component='section' sx={props.sx}>
<Box
aria-label='New Message'
component='section'
bgcolor={showTint ? `var(--joy-palette-${showTint}-softBg)` : themeBgAppChatComposer}
sx={props.sx}
>
{!isMobile && labsShowShortcutBar && <StatusBar toggleMinimized={handleToggleMinimized} isMinimized={isMinimized} />}
{!isMobile && labsShowShortcutBar && <StatusBarMemo toggleMinimized={handleToggleMinimized} isMinimized={isMinimized} />}
{/* This container lets the optional status bar span the full width, so the padding lives here rather than on the parent */}
<Box sx={(!isMinimized || isMobile || !labsShowShortcutBar) ? paddingBoxSx : minimizedSx}>
@@ -744,13 +773,16 @@ export function Composer(props: {
<Box sx={{ flexGrow: 0, display: 'grid', gap: 1, alignSelf: 'flex-start' }}>
{/* [mobile] Mic button */}
{recognitionState.isAvailable && <ButtonMicMemo variant={micVariant} color={micColor} errorMessage={recognitionState.errorMessage} onClick={handleToggleMic} />}
{recognitionState.isAvailable && <ButtonMicMemo variant={micVariant} color={micColor === 'danger' ? 'danger' : showTint || micColor} errorMessage={recognitionState.errorMessage} onClick={handleToggleMic} />}
{/* Responsive Camera OCR button */}
{showChatAttachments && <ButtonAttachCameraMemo isMobile onOpenCamera={openCamera} />}
{showChatAttachments && <ButtonAttachCameraMemo color={showTint} isMobile onOpenCamera={openCamera} />}
{/* [mobile] Attach file button (in draw with image mode) */}
{showChatAttachments === 'only-images' && <ButtonAttachFilesMemo color={showTint} isMobile onAttachFiles={handleAttachFiles} fullWidth multiple />}
{/* [mobile] [+] button */}
{showChatAttachments && (
{showChatAttachments === true && (
<Dropdown>
<MenuButton slots={{ root: IconButton }}>
<AddCircleOutlineIcon />
@@ -791,19 +823,19 @@ export function Composer(props: {
{/*</FormHelperText>*/}
{/* Responsive Open Files button */}
<ButtonAttachFilesMemo onAttachFiles={handleAttachFiles} fullWidth multiple />
<ButtonAttachFilesMemo color={showTint} onAttachFiles={handleAttachFiles} fullWidth multiple />
{/* Responsive Web button */}
<ButtonAttachWebMemo disabled={!hasComposerBrowseCapability} onOpenWebInput={openWebInputDialog} />
{showChatAttachments !== 'only-images' && <ButtonAttachWebMemo color={showTint} disabled={!hasComposerBrowseCapability} onOpenWebInput={openWebInputDialog} />}
{/* Responsive Paste button */}
{supportsClipboardRead() && <ButtonAttachClipboardMemo onAttachClipboard={attachAppendClipboardItems} />}
{supportsClipboardRead() && showChatAttachments !== 'only-images' && <ButtonAttachClipboardMemo color={showTint} onAttachClipboard={attachAppendClipboardItems} />}
{/* Responsive Screen Capture button */}
{labsAttachScreenCapture && supportsScreenCapture && <ButtonAttachScreenCaptureMemo onAttachScreenCapture={handleAttachScreenCapture} />}
{labsAttachScreenCapture && supportsScreenCapture && <ButtonAttachScreenCaptureMemo color={showTint} onAttachScreenCapture={handleAttachScreenCapture} />}
{/* Responsive Camera OCR button */}
{labsCameraDesktop && <ButtonAttachCameraMemo onOpenCamera={openCamera} />}
{labsCameraDesktop && <ButtonAttachCameraMemo color={showTint} onOpenCamera={openCamera} />}
</Box>)}
@@ -828,7 +860,7 @@ export function Composer(props: {
variant='outlined'
color={isDraw ? 'warning' : isReAct ? 'success' : undefined}
autoFocus
minRows={isMobile ? 4 : agiAttachmentPrompts.hasData ? 3 : showChatInReferenceTo ? 4 : 5}
minRows={isMobile ? 3.5 : isDraw ? 4 : agiAttachmentPrompts.hasData ? 3 : showChatInReferenceTo ? 4 : 5}
maxRows={isMobile ? 8 : 10}
placeholder={textPlaceholder}
value={composeText}
@@ -837,8 +869,12 @@ export function Composer(props: {
onPasteCapture={handleAttachCtrlV}
// onFocusCapture={handleFocusModeOn}
// onBlurCapture={handleFocusModeOff}
endDecorator={
<ComposerTextAreaActions
endDecorator={isDraw
? <ComposerTextAreaDrawActions
composerText={composeText}
onReplaceText={setComposeText}
/>
: <ComposerTextAreaActions
agiAttachmentPrompts={agiAttachmentPrompts}
inReferenceTo={inReferenceTo}
onAppendAndSend={handleAppendTextAndSend}
@@ -847,6 +883,7 @@ export function Composer(props: {
}
slotProps={{
textarea: {
tabIndex: !recognitionState.isActive ? undefined : -1,
height: '100%',
enterKeyHint: enterIsNewline ? 'enter' : 'send',
sx: {
@@ -858,16 +895,16 @@ export function Composer(props: {
}}
sx={{
height: '100%',
backgroundColor: 'background.level1',
backgroundColor: showTint ? undefined : 'background.level1',
'&:focus-within': { backgroundColor: 'background.popup', '.within-composer-focus': { backgroundColor: 'background.popup' } },
lineHeight: lineHeightTextareaMd,
}} />
{!showChatInReferenceTo && tokenLimit > 0 && (tokensComposer > 0 || (tokensHistory + tokensResponseMax) > 0) && (
{!showChatInReferenceTo && !isDraw && tokenLimit > 0 && (tokensComposer > 0 || (tokensHistory + tokensResponseMax) > 0) && (
<TokenProgressbarMemo chatPricing={tokenChatPricing} direct={tokensComposer} history={tokensHistory} responseMax={tokensResponseMax} limit={tokenLimit} />
)}
{!showChatInReferenceTo && tokenLimit > 0 && (
{!showChatInReferenceTo && !isDraw && tokenLimit > 0 && (
<TokenBadgeMemo hideBelowDollars={0.0001} chatPricing={tokenChatPricing} direct={tokensComposer} history={tokensHistory} responseMax={tokensResponseMax} limit={tokenLimit} showCost={labsShowCost} enableHover={!isMobile} showExcess absoluteBottomRight />
)}
@@ -936,7 +973,7 @@ export function Composer(props: {
fontStyle: 'italic',
},
}}>
{!!debouncedText && <span className='preceding'>{debouncedText.endsWith(' ') ? debouncedText : debouncedText + ' '}</span>}
{!!composeText && <span className='preceding'>{composeText.endsWith(' ') ? composeText : composeText + ' '}</span>}
{speechInterimResult.transcript}
<span className={speechInterimResult.interimTranscript === PLACEHOLDER_INTERIM_TRANSCRIPT ? 'placeholder' : 'interim'}>{speechInterimResult.interimTranscript}</span>
</Typography>
@@ -971,7 +1008,9 @@ export function Composer(props: {
{/* [mobile] bottom-corner secondary button */}
{isMobile && (showChatExtras
? <ButtonCallMemo isMobile disabled={noConversation || noLLM} onClick={handleCallClicked} />
? (composerQuickButton === 'call'
? <ButtonCallMemo isMobile disabled={noConversation || noLLM} onClick={handleCallClicked} />
: <ButtonBeamMemo isMobile disabled={noConversation /*|| noLLM*/} color={beamButtonColor} hasContent={!!composeText} onClick={handleSendTextBeamClicked} />)
: isDraw
? <ButtonOptionsDraw isMobile onClick={handleDrawOptionsClicked} sx={{ mr: { xs: 1, md: 2 } }} />
: <IconButton disabled sx={{ mr: { xs: 1, md: 2 } }} />
@@ -991,7 +1030,7 @@ export function Composer(props: {
<Button
key='composer-act'
fullWidth
disabled={noConversation || noLLM}
disabled={noConversation /* || noLLM*/}
loading={sendStarted}
loadingPosition='end'
onClick={handleSendClicked}
@@ -1022,16 +1061,17 @@ export function Composer(props: {
{/*</Tooltip>}*/}
{/* [Draw] Imagine */}
{isDraw && !!composeText && <Tooltip title='Generate an image prompt'>
<IconButton variant='outlined' disabled={noConversation || noLLM} onClick={handleTextImagineClicked}>
<AutoAwesomeIcon />
</IconButton>
</Tooltip>}
{/* NOTE: disabled: as we have prompt enhancement in the TextArea (Draw Mode) already */}
{/*{isDraw && !!composeText && <Tooltip title='Generate an image prompt'>*/}
{/* <IconButton variant='outlined' disabled={noConversation || noLLM} onClick={handleTextImagineClicked}>*/}
{/* <AutoAwesomeIcon />*/}
{/* </IconButton>*/}
{/*</Tooltip>}*/}
{/* Mode expander */}
<IconButton
variant={assistantAbortible ? 'soft' : isDraw ? undefined : undefined}
disabled={noConversation || noLLM || chatExecuteMenuShown}
variant={chatExecuteMenuShown ? 'outlined' : assistantAbortible ? 'soft' : isDraw ? undefined : undefined}
disabled={noConversation /*|| chatExecuteMenuShown*/}
onClick={showChatExecuteMenu}
>
<ExpandLessIcon />
@@ -1042,7 +1082,7 @@ export function Composer(props: {
{isDesktop && showChatExtras && !assistantAbortible && (
<ButtonBeamMemo
color={beamButtonColor}
disabled={noConversation || noLLM}
disabled={noConversation /*|| noLLM*/}
hasContent={!!composeText}
onClick={handleSendTextBeamClicked}
/>
@@ -1050,6 +1090,9 @@ export function Composer(props: {
</Box>
{/* [desktop] Draw mode N buttons */}
{isDesktop && isDraw && <ButtonGroupDrawRepeat drawRepeat={drawRepeat} setDrawRepeat={setDrawRepeat} />}
{/* [desktop] Multicast switch (under the Chat button) */}
{isDesktop && props.isMulticast !== null && <ButtonMultiChatMemo multiChat={props.isMulticast} onSetMultiChat={props.setIsMulticast} />}
@@ -1,8 +1,10 @@
import * as React from 'react';
import { Controller, useFieldArray, useForm } from 'react-hook-form';
import { Box, Button, FormControl, FormHelperText, IconButton, Input, Stack, Typography } from '@mui/joy';
import { Box, Button, Chip, FormControl, FormHelperText, IconButton, Input, Stack, Typography } from '@mui/joy';
import AddCircleOutlineRoundedIcon from '@mui/icons-material/AddCircleOutlineRounded';
import AddIcon from '@mui/icons-material/Add';
import BrowserUpdatedOutlinedIcon from '@mui/icons-material/BrowserUpdatedOutlined';
import DeleteOutlineIcon from '@mui/icons-material/DeleteOutline';
import LanguageRoundedIcon from '@mui/icons-material/LanguageRounded';
import YouTubeIcon from '@mui/icons-material/YouTube';
@@ -11,7 +13,7 @@ import { extractYoutubeVideoIDFromURL } from '~/modules/youtube/youtube.utils';
import { GoodModal } from '~/common/components/modals/GoodModal';
import { addSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import { asValidURL } from '~/common/util/urlUtils';
import { asValidURL, extractUrlsFromText } from '~/common/util/urlUtils';
// configuration
@@ -26,8 +28,25 @@ type WebInputModalInputs = {
links: WebInputData[];
}
const _styles = {
ytIcon: {
color: 'red',
} as const,
chipLink: {
ml: 'auto',
pr: 1.125,
// '--Chip-radius': '4px',
// whiteSpace: 'break-spaces',
// gap: 1.5,
} as const,
} as const;
function WebInputModal(props: {
composerText?: string,
onClose: () => void,
onWebLinks: (urls: WebInputData[]) => void,
}) {
@@ -35,13 +54,31 @@ function WebInputModal(props: {
// state
const { control: formControl, handleSubmit: formHandleSubmit, formState: { isValid: formIsValid, isDirty: formIsDirty } } = useForm<WebInputModalInputs>({
values: { links: [{ url: '' }] },
// mode: 'onChange', // validate on change
mode: 'onChange', // validate on change
});
const { fields: formFields, append: formFieldsAppend, remove: formFieldsRemove } = useFieldArray({ control: formControl, name: 'links' });
const { fields: formFields, append: formFieldsAppend, remove: formFieldsRemove, update: formFieldsUpdate } = useFieldArray({ control: formControl, name: 'links' });
const firstInputRef = React.useRef<HTMLInputElement>(null);
// derived
const urlFieldCount = formFields.length;
const canAddMoreUrls = urlFieldCount < MAX_URLS;
// [effect] auto-focus first input
React.useEffect(() => {
setTimeout(() => {
if (firstInputRef.current)
firstInputRef.current.focus();
}, 0);
}, []);
// memos
const extractedComposerUrls = React.useMemo(() => {
return !props.composerText ? null : extractUrlsFromText(props.composerText);
}, [props.composerText]);
const extractedUrlsCount = extractedComposerUrls?.length ?? 0;
// handlers
@@ -70,6 +107,46 @@ function WebInputModal(props: {
}, [handleClose, onWebLinks]);
// const handleAddUrl = React.useCallback((newUrl: string) => {
// // bail if can't add
// if (!canAddMoreUrls)
// return addSnackbar({ key: 'max-urls', message: `Maximum ${MAX_URLS} URLs allowed`, type: 'precondition-fail' });
//
// // bail if already in
// const exists = formFields.some(({ url }) => url === newUrl);
// if (exists)
// return addSnackbar({ key: 'duplicate-url', message: 'URL already added', type: 'info' });
//
// // replace the first empty field, or append
// const emptyFieldIndex = formFields.findIndex(field => !field.url.trim());
// if (emptyFieldIndex >= 0)
// formFieldsUpdate(emptyFieldIndex, { url: newUrl });
// else
// formFieldsAppend({ url: newUrl });
// }, [canAddMoreUrls, formFields, formFieldsAppend, formFieldsUpdate]);
const handleAddAllUrls = React.useCallback(() => {
if (!extractedComposerUrls) return;
// new URLs that are not already in the form
const newURLs = extractedComposerUrls.filter(url => !formFields.some(field => field.url.trim() === url));
if (!newURLs.length) return;
// find empty fields first
for (let i = 0; i < formFields.length; i++) {
const field = formFields[i];
if (!field.url.trim()) {
formFieldsUpdate(i, { url: newURLs.shift()! });
if (!newURLs.length) break;
}
}
// append remaining
newURLs.forEach(url => formFieldsAppend({ url }));
}, [extractedComposerUrls, formFields, formFieldsAppend, formFieldsUpdate]);
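The fill-then-append logic of `handleAddAllUrls` can be sketched as a pure function (the names `mergeUrls` and `UrlField` are illustrative, not from the codebase): URLs already present are skipped, empty form slots are reused first, and any remainder is appended.

```typescript
interface UrlField { url: string }

// Merge candidate URLs into existing form fields: dedupe, fill empty
// slots in order, then append whatever is left.
function mergeUrls(fields: UrlField[], candidates: string[]): UrlField[] {
  const existing = new Set(fields.map(f => f.url.trim()));
  const queue = candidates.filter(url => !existing.has(url));
  const next = fields.map(f =>
    (!f.url.trim() && queue.length) ? { url: queue.shift()! } : f);
  return [...next, ...queue.map(url => ({ url }))];
}

console.log(mergeUrls(
  [{ url: '' }, { url: 'https://a.io' }],
  ['https://a.io', 'https://b.io', 'https://c.io'],
));
// fills the empty slot with 'https://b.io', keeps 'https://a.io', appends 'https://c.io'
```

Operating on a copy rather than mutating `formFields` keeps the sketch compatible with react-hook-form's update/append semantics, which expect explicit calls per field.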
return (
<GoodModal
open
@@ -89,6 +166,26 @@ function WebInputModal(props: {
{/*You can add up to {MAX_URLS} URLs.*/}
</Typography>
{/* Modified URLs section */}
{!!extractedUrlsCount && (
<Box sx={{ display: 'flex', alignItems: 'center', gap: 1 }}>
<Typography level='title-sm' startDecorator={<BrowserUpdatedOutlinedIcon />}>
{extractedUrlsCount} URL{extractedUrlsCount > 1 ? 's' : ''} in your message
{/*{extractedUrlsCount} URL{extractedUrlsCount > 1 ? 's' : ''} found in your message*/}
</Typography>
<Chip
variant='soft'
onClick={handleAddAllUrls}
startDecorator={<AddCircleOutlineRoundedIcon />}
sx={_styles.chipLink}
>
Add
</Chip>
</Box>
)}
<form onSubmit={formHandleSubmit(handleSubmit)}>
<Stack spacing={1}>
{formFields.map((field, index) => (
@@ -101,12 +198,16 @@ function WebInputModal(props: {
<FormControl error={!!error}>
<Box sx={{ display: 'flex', gap: 1 }}>
<Input
autoFocus={index === 0}
required={index === 0}
placeholder='https://...'
endDecorator={extractYoutubeVideoIDFromURL(value) ? <YouTubeIcon sx={{ color: 'red' }} /> : undefined}
endDecorator={extractYoutubeVideoIDFromURL(value) ? <YouTubeIcon sx={_styles.ytIcon} /> : undefined}
value={value}
onChange={onChange}
slotProps={index !== 0 ? undefined : {
input: {
ref: firstInputRef,
},
}}
sx={{ flex: 1 }}
/>
{urlFieldCount > 1 && (
@@ -133,7 +234,7 @@ function WebInputModal(props: {
{formIsDirty && <Button
color='neutral'
variant='soft'
disabled={urlFieldCount >= MAX_URLS}
disabled={!canAddMoreUrls}
onClick={() => formFieldsAppend({ url: '' })}
startDecorator={<AddIcon />}
>
@@ -147,7 +248,7 @@ function WebInputModal(props: {
disabled={!formIsValid || !formIsDirty}
sx={{ minWidth: 160, ml: 'auto' }}
>
Add {urlFieldCount > 1 ? `(${urlFieldCount})` : ''}
Import {urlFieldCount > 1 ? `(${urlFieldCount})` : ''}
</Button>
</Box>
@@ -158,15 +259,20 @@ function WebInputModal(props: {
}
export function useWebInputModal(onAttachWebLinks: (urls: WebInputData[]) => void) {
export function useWebInputModal(onAttachWebLinks: (urls: WebInputData[]) => void, composerText?: string) {
// state
const [open, setOpen] = React.useState(false);
const composerTextRef = React.useRef(composerText);
// keep a fresh snapshot of the text in a ref, updated on every render, so the hooks below don't need it as a dependency
composerTextRef.current = composerText;
const openWebInputDialog = React.useCallback(() => setOpen(true), []);
const webInputDialogComponent = React.useMemo(() => open && (
<WebInputModal
composerText={composerTextRef.current}
onClose={() => setOpen(false)}
onWebLinks={onAttachWebLinks}
/>
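The ref-snapshot trick in `useWebInputModal` (writing `composerText` into a ref on every render so the dialog opener and the memo need not list it as a dependency) is the general "latest value ref" pattern; a framework-free sketch with hypothetical names:

```typescript
// A stable callback reads a mutable ref instead of capturing the value,
// so its identity never changes while it still observes the most recent
// value at call time.
type Ref<T> = { current: T };

function makeOpenDialog(textRef: Ref<string>, open: (text: string) => void): () => void {
  // created once; reads textRef.current lazily on each invocation
  return () => open(textRef.current);
}

const textRef: Ref<string> = { current: '' };
const opened: string[] = [];
const openDialog = makeOpenDialog(textRef, (t) => opened.push(t));

textRef.current = 'see https://example.com'; // each render refreshes the ref
openDialog();
console.log(opened[0]); // 'see https://example.com'
```

In React terms the ref write happens during render, and the modal only samples the snapshot when it actually opens, which is all the feature needs.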
@@ -38,6 +38,7 @@ export function ActilePopup(props: {
maxHeightGapPx={320}
minWidth={320}
noBottomPadding
noAutoFocus={true /* we control keyboard navigation */}
noTopPadding
>
@@ -4,7 +4,7 @@ import type { ActileItem, ActileProvider } from './ActileProvider';
import { ActilePopup } from './ActilePopup';
export const useActileManager = (providers: ActileProvider[], anchorRef: React.RefObject<HTMLElement>) => {
export const useActileManager = (providers: ActileProvider[], anchorRef: React.RefObject<HTMLElement | null>) => {
// state
const [popupOpen, setPopupOpen] = React.useState(false);
@@ -1,6 +1,6 @@
import * as React from 'react';
import { Box, Button, IconButton, Tooltip } from '@mui/joy';
import { Box, Button, ColorPaletteProp, IconButton, Tooltip } from '@mui/joy';
import AddAPhotoIcon from '@mui/icons-material/AddAPhoto';
import CameraAltOutlinedIcon from '@mui/icons-material/CameraAltOutlined';
@@ -12,6 +12,7 @@ import { CameraCaptureModal } from '../CameraCaptureModal';
export const ButtonAttachCameraMemo = React.memo(ButtonAttachCamera);
function ButtonAttachCamera(props: {
color?: ColorPaletteProp,
isMobile?: boolean,
disabled?: boolean,
fullWidth?: boolean,
@@ -19,7 +20,7 @@ function ButtonAttachCamera(props: {
onOpenCamera: () => void,
}) {
return props.isMobile ? (
<IconButton disabled={props.disabled} onClick={props.onOpenCamera}>
<IconButton color={props.color} disabled={props.disabled} onClick={props.onOpenCamera}>
<AddAPhotoIcon />
</IconButton>
) : (
@@ -30,8 +31,8 @@ function ButtonAttachCamera(props: {
</Box>
)}>
<Button
variant='plain'
color='neutral'
variant={props.color ? 'soft' : 'plain'}
color={props.color || 'neutral'}
disabled={props.disabled}
fullWidth={props.fullWidth}
startDecorator={<CameraAltOutlinedIcon />}
@@ -1,6 +1,6 @@
import * as React from 'react';
import { Box, Button, IconButton, Tooltip } from '@mui/joy';
import { Box, Button, ColorPaletteProp, IconButton, Tooltip } from '@mui/joy';
import ContentPasteGoIcon from '@mui/icons-material/ContentPasteGo';
import { KeyStroke } from '~/common/components/KeyStroke';
@@ -10,6 +10,7 @@ import { buttonAttachSx } from '~/common/components/ButtonAttachFiles';
export const ButtonAttachClipboardMemo = React.memo(ButtonAttachClipboard);
function ButtonAttachClipboard(props: {
color?: ColorPaletteProp,
isMobile?: boolean,
disabled?: boolean,
fullWidth?: boolean,
@@ -17,7 +18,7 @@ function ButtonAttachClipboard(props: {
onAttachClipboard: () => void,
}) {
return props.isMobile ? (
<IconButton disabled={props.disabled} onClick={props.onAttachClipboard}>
<IconButton color={props.color} disabled={props.disabled} onClick={props.onAttachClipboard}>
<ContentPasteGoIcon />
</IconButton>
) : (
@@ -29,8 +30,8 @@ function ButtonAttachClipboard(props: {
</Box>
)}>
<Button
variant='plain'
color='neutral'
variant={props.color ? 'soft' : 'plain'}
color={props.color || 'neutral'}
disabled={props.disabled}
fullWidth={props.fullWidth}
startDecorator={<ContentPasteGoIcon />}
@@ -1,6 +1,6 @@
import * as React from 'react';
import { Box, Button, IconButton, Tooltip } from '@mui/joy';
import { Box, Button, ColorPaletteProp, IconButton, Tooltip } from '@mui/joy';
import AddRoundedIcon from '@mui/icons-material/AddRounded';
import { buttonAttachSx } from '~/common/components/ButtonAttachFiles';
@@ -9,6 +9,7 @@ import { buttonAttachSx } from '~/common/components/ButtonAttachFiles';
export const ButtonAttachNewMemo = React.memo(ButtonAttachNew);
function ButtonAttachNew(props: {
color?: ColorPaletteProp,
isMobile?: boolean,
disabled?: boolean,
fullWidth?: boolean,
@@ -16,7 +17,7 @@ function ButtonAttachNew(props: {
onAttachNew: () => void,
}) {
return props.isMobile ? (
<IconButton disabled={props.disabled} onClick={props.onAttachNew}>
<IconButton color={props.color} disabled={props.disabled} onClick={props.onAttachNew}>
<AddRoundedIcon />
</IconButton>
) : (
@@ -29,15 +30,15 @@ function ButtonAttachNew(props: {
</Box>
)}>
<Button
variant='plain'
color='neutral'
variant={props.color ? 'soft' : 'plain'}
color={props.color || 'neutral'}
disabled={props.disabled}
fullWidth={props.fullWidth}
startDecorator={<AddRoundedIcon />}
onClick={props.onAttachNew}
sx={buttonAttachSx.desktop}
>
New
Note
</Button>
</Tooltip>
);
@@ -1,6 +1,6 @@
import * as React from 'react';
import { Box, Button, IconButton, Tooltip } from '@mui/joy';
import { Box, Button, ColorPaletteProp, IconButton, Tooltip } from '@mui/joy';
import ScreenshotMonitorIcon from '@mui/icons-material/ScreenshotMonitor';
import { Is } from '~/common/util/pwaUtils';
@@ -11,6 +11,7 @@ import { takeScreenCapture } from '~/common/util/screenCaptureUtils';
export const ButtonAttachScreenCaptureMemo = React.memo(ButtonAttachScreenCapture);
function ButtonAttachScreenCapture(props: {
color?: ColorPaletteProp,
isMobile?: boolean,
disabled?: boolean,
fullWidth?: boolean,
@@ -41,7 +42,7 @@ function ButtonAttachScreenCapture(props: {
return props.isMobile ? (
<IconButton disabled={props.disabled} onClick={handleTakeScreenCapture}>
<IconButton color={props.color} disabled={props.disabled} onClick={handleTakeScreenCapture}>
<ScreenshotMonitorIcon />
</IconButton>
) : (
@@ -55,8 +56,8 @@ function ButtonAttachScreenCapture(props: {
</Box>
)}>
<Button
variant={capturing ? 'solid' : 'plain'}
color={!!error ? 'danger' : 'neutral'}
variant={capturing ? 'solid' : props.color ? 'soft' : 'plain'}
color={!!error ? 'danger' : props.color || 'neutral'}
disabled={props.disabled}
fullWidth={props.fullWidth}
loading={capturing}
@@ -1,14 +1,16 @@
import * as React from 'react';
import { Box, Button, IconButton, Tooltip } from '@mui/joy';
import { Box, Button, ColorPaletteProp, IconButton, Tooltip } from '@mui/joy';
import LanguageRoundedIcon from '@mui/icons-material/LanguageRounded';
import { buttonAttachSx } from '~/common/components/ButtonAttachFiles';
import { KeyStroke } from '~/common/components/KeyStroke';
export const ButtonAttachWebMemo = React.memo(ButtonAttachWeb);
function ButtonAttachWeb(props: {
color?: ColorPaletteProp,
isMobile?: boolean,
disabled?: boolean,
fullWidth?: boolean,
@@ -17,13 +19,13 @@ function ButtonAttachWeb(props: {
}) {
const button = props.isMobile ? (
<IconButton disabled={props.disabled} onClick={props.onOpenWebInput}>
<IconButton color={props.color} disabled={props.disabled} onClick={props.onOpenWebInput}>
<LanguageRoundedIcon />
</IconButton>
) : (
<Button
variant='plain'
color='neutral'
variant={props.color ? 'soft' : 'plain'}
color={props.color || 'neutral'}
disabled={props.disabled}
fullWidth={props.fullWidth}
startDecorator={<LanguageRoundedIcon />}
@@ -35,12 +37,13 @@ function ButtonAttachWeb(props: {
);
return (props.noToolTip || props.isMobile) ? button : (
<Tooltip arrow disableInteractive placement='top-start' title={(
<Tooltip arrow disableInteractive placement='top-start' title={
<Box sx={buttonAttachSx.tooltip}>
<b>Add Web Content 🌐</b><br />
Import from websites and YouTube
<KeyStroke combo='Ctrl + Shift + L' sx={{ mt: 1, mb: 0.5 }} />
</Box>
)}>
}>
{button}
</Tooltip>
);
@@ -43,7 +43,7 @@ function ButtonBeam(props: {
onClick: () => void,
}) {
return props.isMobile ? (
<IconButton variant='soft' color={props.color ?? 'primary'} disabled={props.disabled} onClick={props.onClick} sx={mobileSx}>
<IconButton variant='outlined' color={props.color ?? 'primary'} disabled={props.disabled} onClick={props.onClick} sx={mobileSx}>
<ChatBeamIcon />
</IconButton>
) : (
@@ -0,0 +1,77 @@
import * as React from 'react';
import { Box, FormControl, IconButton } from '@mui/joy';
const _styles = {
control: {
gap: 1,
mt: 1,
} as const,
buttonGroup: {
display: 'flex',
justifyContent: 'space-evenly',
// overflowX: 'hidden',
flexWrap: 'wrap',
minWidth: '131px',
} as const,
buttonActive: {
'--IconButton-size': { xs: '1.75rem', lg: '2rem' },
} as const,
button: {
'--IconButton-size': { xs: '1.75rem', lg: '2rem' },
border: '1px solid',
borderColor: 'warning.outlinedBorder',
backgroundColor: 'background.popup',
// boxShadow: drawRepeat === n ? '0px 2px 8px 0px rgb(var(--joy-palette-warning-mainChannel) / 40%)' : 'none',
// fontWeight: drawRepeat === n ? 'xl' : 400, /* reset, from 600 */
transition: 'transform 0.14s, box-shadow 0.14s',
'&:hover': {
transform: 'translateY(-1px)',
// backgroundColor: drawRepeat === n ? 'background.popup' : 'background.surface',
// boxShadow: '0 0 8px 1px rgb(var(--joy-palette-warning-mainChannel) / 40%)',
} as const,
} as const,
text: {
mx: 'auto',
fontSize: 'xs',
opacity: '0.5',
} as const,
} as const;
export function ButtonGroupDrawRepeat(props: {
drawRepeat: number,
setDrawRepeat: (n: number) => void,
}) {
const { drawRepeat, setDrawRepeat } = props;
return (
<FormControl sx={_styles.control}>
<Box sx={_styles.buttonGroup}>
{[1, 2, 4, 5, 10].map((n) => (
<IconButton
key={n}
size='sm'
color='warning'
variant={drawRepeat === n ? 'solid' : 'soft'}
onClick={() => setDrawRepeat(n)}
sx={drawRepeat === n ? _styles.buttonActive : _styles.button}
>
{n}
</IconButton>
))}
</Box>
<Box sx={_styles.text}>
{drawRepeat > 1
? `Create ${drawRepeat} Images`
: 'Number of Images'}
</Box>
</FormControl>
);
}
@@ -7,6 +7,7 @@ import MicIcon from '@mui/icons-material/Mic';
import { ExternalDocsLink } from '~/common/components/ExternalDocsLink';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { KeyStroke } from '~/common/components/KeyStroke';
import { useDontBlurTextarea } from '~/common/components/useDontBlurTextarea';
const micLegend = (errorMessage: string | null) =>
@@ -35,12 +36,7 @@ function ButtonMic(props: {
}) {
// Mobile: don't blur the textarea when clicking the mic button
const handleDontBlurTextArea = React.useCallback((event: React.MouseEvent) => {
const isTextAreaFocused = document.activeElement?.tagName === 'TEXTAREA';
// If a textarea is focused, prevent the default blur behavior
if (isTextAreaFocused)
event.preventDefault();
}, []);
const handleDontBlurTextArea = useDontBlurTextarea();
return (
<GoodTooltip placement='top' arrow enableInteractive title={micLegend(props.errorMessage)}>
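The inline logic replaced by `useDontBlurTextarea` above suggests the hook's core is a simple mousedown guard; a hedged, framework-free sketch (function name assumed, not from the codebase):

```typescript
// Decide whether a mousedown should be prevented so an already-focused
// textarea keeps focus (e.g. tapping the mic button mid-dictation).
function shouldKeepTextareaFocus(activeTagName?: string): boolean {
  return activeTagName === 'TEXTAREA';
}

// In a React onMouseDown handler this would read roughly:
//   if (shouldKeepTextareaFocus(document.activeElement?.tagName)) event.preventDefault();

console.log(shouldKeepTextareaFocus('TEXTAREA')); // true
console.log(shouldKeepTextareaFocus('INPUT'));    // false
```

Preventing the default on mousedown (rather than click) is what stops the browser from moving focus in the first place, so the caret and the on-screen keyboard stay where they are.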
@@ -16,7 +16,7 @@ export function ButtonMultiChat(props: { isMobile?: boolean, multiChat: boolean,
color={multiChat ? 'warning' : undefined}
onClick={() => props.onSetMultiChat(!multiChat)}
>
{multiChat ? <ChatMulticastOnIcon /> : <ChatMulticastOffIcon />}
{multiChat ? <ChatMulticastOnIcon /> : <ChatMulticastOffIcon />}
</IconButton>
) : (
<FormControl orientation='horizontal' sx={{ minHeight: '2.25rem', justifyContent: 'space-between' }}>
@@ -4,6 +4,8 @@ import { Button, IconButton } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
import FormatPaintTwoToneIcon from '@mui/icons-material/FormatPaintTwoTone';
import { PhSlidersHorizontalIcon } from '~/common/components/icons/phosphor/PhSlidersHorizontalIcon';
export function ButtonOptionsDraw(props: { isMobile?: boolean, onClick: () => void, sx?: SxProps }) {
return props.isMobile ? (
@@ -11,8 +13,8 @@ export function ButtonOptionsDraw(props: { isMobile?: boolean, onClick: () => vo
<FormatPaintTwoToneIcon />
</IconButton>
) : (
<Button variant='soft' color='warning' onClick={props.onClick} sx={props.sx}>
Options
<Button variant='soft' color='warning' onClick={props.onClick} sx={props.sx} endDecorator={<PhSlidersHorizontalIcon />}>
Image Settings
</Button>
);
}
@@ -22,7 +22,7 @@ import { RenderImageRefDBlob } from '~/modules/blocks/image/RenderImageRefDBlob'
import { RenderImageURL } from '~/modules/blocks/image/RenderImageURL';
import type { AttachmentDraft, AttachmentDraftConverterType, AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import { DMessageDataRef, DMessageImageRefPart, isImageRefPart } from '~/common/stores/chat/chat.fragments';
import { DMessageDataRef, DMessageImageRefPart, isImageRefPart, isZyncAssetImageReferencePartWithLegacyDBlob } from '~/common/stores/chat/chat.fragments';
import { LiveFileIcon } from '~/common/livefile/liveFile.icons';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { ellipsizeFront, ellipsizeMiddle } from '~/common/util/textUtils';
@@ -115,8 +115,8 @@ const converterTypeToIconMap: { [key in AttachmentDraftConverterType]: React.Com
};
function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean, onViewImageRefPart: (imageRefPart: DMessageImageRefPart) => void) {
const activeConterters = attachmentDraft.converters.filter(c => c.isActive);
if (activeConterters.length === 0)
const activeConverters = attachmentDraft.converters.filter(c => c.isActive);
if (activeConverters.length === 0)
return null;
// Alternate icon for the Web Page Screenshot
@@ -127,15 +127,21 @@ function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean,
let outputSingleImageRefDBlobs: Extract<DMessageDataRef, { reftype: 'dblob' }>[] = [];
if (!urlImageData && attachmentDraft.outputFragments.length === 1) {
const fragment = attachmentDraft.outputFragments[0];
if (isImageRefPart(fragment.part) && fragment.part.dataRef && fragment.part.dataRef.reftype === 'dblob')
if (isZyncAssetImageReferencePartWithLegacyDBlob(fragment.part))
outputSingleImageRefDBlobs = [fragment.part._legacyImageRefPart!.dataRef];
else if (isImageRefPart(fragment.part) && fragment.part.dataRef && fragment.part.dataRef.reftype === 'dblob')
outputSingleImageRefDBlobs = [fragment.part.dataRef];
}
const handleViewFirstImage = (e: React.MouseEvent) => {
e.preventDefault();
e.stopPropagation();
if (attachmentDraft.outputFragments[0] && isImageRefPart(attachmentDraft.outputFragments[0].part))
onViewImageRefPart(attachmentDraft.outputFragments[0].part);
const fragment = attachmentDraft.outputFragments[0];
if (!fragment) return;
if (isZyncAssetImageReferencePartWithLegacyDBlob(fragment.part))
onViewImageRefPart(fragment.part._legacyImageRefPart!);
else if (isImageRefPart(fragment.part))
onViewImageRefPart(fragment.part);
};
// Whether to render the converters
@@ -162,12 +168,13 @@ function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean,
)}
{/* Render DBlob referred images in place of converter icons */}
{outputSingleImageRefDBlobs.map((dataRef, i) => dataRef && (
{outputSingleImageRefDBlobs.map((dataRef, _i) => dataRef && (
<TooltipOutlined key={`image-${dataRef.dblobAssetId}`} title={noTooltips ? null : <>View converted image{/* <br/>{dataRef?.bytesSize?.toLocaleString()} bytes */}</>} placement='top-start'>
<div>
<RenderImageRefDBlob
dataRefDBlobAssetId={dataRef.dblobAssetId}
dataRefMimeType={dataRef.mimeType}
dataRefBytesSize={dataRef.bytesSize}
variant='attachment-button'
scaledImageSx={attachmentIconSx}
onClick={handleViewFirstImage}
@@ -176,8 +183,8 @@ function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean,
</TooltipOutlined>
))}
{/*{activeConterters.some(c => c.id.startsWith('url-page-')) ? <LanguageIcon sx={{ opacity: 0.2, ml: -2.5 }} /> : null}*/}
{renderConverterIcons && activeConterters.map((_converter, idx) => {
{/*{activeConverters.some(c => c.id.startsWith('url-page-')) ? <LanguageIcon sx={{ opacity: 0.2, ml: -2.5 }} /> : null}*/}
{renderConverterIcons && activeConverters.map((_converter, idx) => {
const Icon = converterTypeToIconMap[_converter.id] ?? null;
return !Icon ? null : (
<TooltipOutlined key={`${_converter.id}-${idx}`} title={noTooltips ? null : `Attached as ${_converter.name}`} placement='top-start'>
@@ -1,7 +1,7 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Checkbox, Chip, CircularProgress, LinearProgress, Link, ListDivider, ListItem, ListItemDecorator, MenuItem, Radio, Typography } from '@mui/joy';
import { Box, Checkbox, Chip, CircularProgress, LinearProgress, ListDivider, ListItem, ListItemDecorator, MenuItem, Radio, Typography } from '@mui/joy';
import AttachmentIcon from '@mui/icons-material/Attachment';
import ClearIcon from '@mui/icons-material/Clear';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
@@ -10,17 +10,16 @@ import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import KeyboardArrowLeftIcon from '@mui/icons-material/KeyboardArrowLeft';
import KeyboardArrowRightIcon from '@mui/icons-material/KeyboardArrowRight';
import LaunchIcon from '@mui/icons-material/Launch';
import ReadMoreIcon from '@mui/icons-material/ReadMore';
import VerticalAlignBottomIcon from '@mui/icons-material/VerticalAlignBottom';
import VisibilityIcon from '@mui/icons-material/Visibility';
import { CloseablePopup } from '~/common/components/CloseablePopup';
import { DMessageAttachmentFragment, DMessageDocPart, DMessageImageRefPart, isDocPart, isImageRefPart } from '~/common/stores/chat/chat.fragments';
import { DMessageAttachmentFragment, DMessageDocPart, DMessageImageRefPart, isDocPart, isImageRefPart, isZyncAssetImageReferencePartWithLegacyDBlob } from '~/common/stores/chat/chat.fragments';
import { LiveFileIcon } from '~/common/livefile/liveFile.icons';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import { showImageDataURLInNewTab } from '~/common/util/imageUtils';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { themeZIndexOverMobileDrawer } from '~/common/app.theme';
import { useUIPreferencesStore } from '~/common/stores/store-ui';
import type { AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import type { AttachmentDraftsStoreApi } from '~/common/attachment-drafts/store-attachment-drafts_slice';
@@ -158,6 +157,7 @@ export function LLMAttachmentMenu(props: {
minWidth={260}
noTopPadding
placement='top'
zIndex={themeZIndexOverMobileDrawer /* was not set, but the Attachment Menu can be used from the Personas Modal */}
>
{/* Move Arrows */}
@@ -311,13 +311,23 @@ export function LLMAttachmentMenu(props: {
<Typography level='body-sm' sx={indicatorGapSx}>
{draftInput.urlImage.mimeType} · {draftInput.urlImage.width} x {draftInput.urlImage.height} · {draftInput.urlImage.imgDataUrl?.length.toLocaleString()}
{' · '}
<Link onClick={(event) => {
event.preventDefault();
event.stopPropagation();
showImageDataURLInNewTab(draftInput?.urlImage?.imgDataUrl || '');
<Chip component='span' size='sm' color='primary' variant='outlined' startDecorator={<VisibilityIcon />} onClick={(event) => {
if (draftInput?.urlImage?.imgDataUrl) {
// Invoke the viewer but with a virtual 'temp' part description to see this preview image
handleViewImageRefPart(event, {
pt: 'image_ref',
dataRef: {
reftype: 'url',
url: draftInput.urlImage.imgDataUrl,
},
altText: draft.label || 'URL Image Preview',
width: draftInput.urlImage.width || undefined,
height: draftInput.urlImage.height || undefined,
});
}
}}>
open <LaunchIcon sx={{ mx: 0.5, fontSize: 16 }} />
</Link>
view
</Chip>
</Typography>
)}
@@ -343,13 +353,17 @@ export function LLMAttachmentMenu(props: {
</Chip>
</Typography>
);
} else if (isImageRefPart(part)) {
const resolution = part.width && part.height ? `${part.width} x ${part.height}` : 'no resolution';
const mime = part.dataRef.reftype === 'dblob' ? part.dataRef.mimeType : 'unknown image';
} else if (isZyncAssetImageReferencePartWithLegacyDBlob(part) || isImageRefPart(part)) {
// Unified Image Reference handling (both Zync Asset References with legacy fallback and legacy image_ref)
const legacyImageRefPart = isZyncAssetImageReferencePartWithLegacyDBlob(part) ? part._legacyImageRefPart! : part;
const { dataRef, width, height } = legacyImageRefPart;
const resolution = width && height ? `${width} x ${height}` : 'no resolution';
const mime = dataRef.reftype === 'dblob' ? dataRef.mimeType : 'unknown image';
return (
<Typography key={index} level='body-sm' sx={{ color: 'text.primary' }} startDecorator={<ReadMoreIcon sx={indicatorSx} />}>
<span>{mime /*.replace('image/', 'img: ')*/} · {resolution} · {part.dataRef.reftype === 'dblob' ? (part.dataRef.bytesSize?.toLocaleString() || 'no size') : '(remote)'} ·&nbsp;</span>
<Chip component='span' size={isOutputMultiple ? 'sm' : 'md'} color='primary' variant='outlined' startDecorator={<VisibilityIcon />} onClick={(event) => handleViewImageRefPart(event, part)}>
<span>{mime /*.replace('image/', 'img: ')*/} · {resolution} · {dataRef.reftype === 'dblob' ? (dataRef.bytesSize?.toLocaleString() || 'no size') : '(remote)'} ·&nbsp;</span>
<Chip component='span' size={isOutputMultiple ? 'sm' : 'md'} color='primary' variant='outlined' startDecorator={<VisibilityIcon />}
onClick={(event) => handleViewImageRefPart(event, legacyImageRefPart)}>
view
</Chip>
{isOutputMultiple && <Chip component='span' size={isOutputMultiple ? 'sm' : 'md'} color='danger' variant='outlined' startDecorator={<DeleteForeverIcon />} onClick={(event) => handleDeleteOutputFragment(event, index)}>
@@ -193,7 +193,7 @@ export function LLMAttachmentsList(props: {
</Box>
{/* Overall Menu button */}
{!_style.barWraps && (
{!props.buttonsCanWrap && (
<IconButton
onClick={handleOverallMenuToggle}
onContextMenu={handleOverallMenuToggle}
@@ -1,12 +1,13 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, CircularProgress, IconButton, Tooltip } from '@mui/joy';
import { Box, CircularProgress, IconButton } from '@mui/joy';
import AutoFixHighIcon from '@mui/icons-material/AutoFixHigh';
import type { AgiAttachmentPromptsData } from '~/modules/aifn/agiattachmentprompts/useAgiAttachmentPrompts';
import { AgiSquircleIcon } from '~/common/components/icons/AgiSquircleIcon';
import { BigAgiSquircleIcon } from '~/common/components/icons/big-agi/BigAgiSquircleIcon';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { AGI_SUGGESTIONS_COLOR } from '../textarea/ComposerTextAreaActions';
@@ -42,7 +43,7 @@ function LLMAttachmentsPromptsButton({ data }: { data: AgiAttachmentPromptsData
const tooltipTitle =
data.error ? (data.error.message || 'Error guessing actions')
: data.isFetching ? null
: data.isPending ? <Box sx={{ display: 'flex', gap: 1 }}><AgiSquircleIcon inverted sx={{ color: 'white', borderRadius: '1rem' }} /> What can I do?</Box>
: data.isPending ? <Box sx={{ display: 'flex', gap: 1 }}><BigAgiSquircleIcon inverted sx={{ color: 'white', borderRadius: '1rem' }} /> What can I do?</Box>
: 'Give me more ideas';
const button = (
@@ -64,8 +65,8 @@ function LLMAttachmentsPromptsButton({ data }: { data: AgiAttachmentPromptsData
);
return !tooltipTitle ? button : (
<Tooltip variant='outlined' disableInteractive placement='left' arrow title={tooltipTitle}>
<GoodTooltip variantOutlined arrow title={tooltipTitle}>
{button}
</Tooltip>
</GoodTooltip>
);
}
@@ -11,6 +11,7 @@ export interface LLMAttachmentDraftsCollection {
canAttachAllFragments: boolean;
canInlineSomeFragments: boolean;
llmTokenCountApprox: number | null;
hasImageFragments: boolean;
}
@@ -19,6 +20,7 @@ export interface LLMAttachmentDraft {
llmSupportsAllFragments: boolean;
llmSupportsTextFragments: boolean;
llmTokenCountApprox: number | null;
hasImageFragments: boolean;
}
@@ -44,7 +46,10 @@ export function useLLMAttachmentDrafts(attachmentDrafts: AttachmentDraft[], chat
const equalChatLLM = chatLLM === prevStateRef.current.chatLLM;
// LLM-dependent multi-modal enablement
const supportedTypes: DMessageAttachmentFragment['part']['pt'][] = chatLLMSupportsImages ? ['image_ref', 'doc'] : ['doc'];
// TODO: consider also Audio inputs, maybe PDF binary inputs
// FIXME: reference fragments could refer to non-image as well
const imageTypes: DMessageAttachmentFragment['part']['pt'][] = ['reference', 'image_ref'];
const supportedTypes: DMessageAttachmentFragment['part']['pt'][] = chatLLMSupportsImages ? [...imageTypes, 'doc'] : ['doc'];
const supportedTextTypes: DMessageAttachmentFragment['part']['pt'][] = supportedTypes.filter(pt => pt === 'doc');
// Add LLM-specific properties to each attachment draft
@@ -66,6 +71,7 @@ export function useLLMAttachmentDrafts(attachmentDrafts: AttachmentDraft[], chat
llmTokenCountApprox: chatLLM
? estimateTokensForFragments(chatLLM, 'user', a.outputFragments, true, 'useLLMAttachmentDrafts')
: null,
hasImageFragments: !a.outputFragments ? false : a.outputFragments.some(op => imageTypes.includes(op.part.pt)),
};
});
@@ -75,6 +81,7 @@ export function useLLMAttachmentDrafts(attachmentDrafts: AttachmentDraft[], chat
const llmTokenCountApprox = chatLLM
? llmAttachmentDrafts.reduce((acc, a) => acc + (a.llmTokenCountApprox || 0), 0)
: null;
const hasImageFragments = llmAttachmentDrafts.some(a => a.hasImageFragments);
// [Optimization] Update the ref with the new state
prevStateRef.current = { llmAttachmentDrafts, chatLLM };
@@ -84,6 +91,7 @@ export function useLLMAttachmentDrafts(attachmentDrafts: AttachmentDraft[], chat
canAttachAllFragments,
canInlineSomeFragments,
llmTokenCountApprox,
hasImageFragments,
};
}, [attachmentDrafts, chatLLM, chatLLMSupportsImages]); // Dependencies for the outer useMemo
@@ -15,7 +15,7 @@ export const AGI_SUGGESTIONS_COLOR: ColorPaletteProp = 'success';
// Styles
const textAreaSx: SxProps = {
export const composerTextAreaSx: SxProps = {
flex: 1,
// layout
@@ -29,8 +29,8 @@ const textAreaSx: SxProps = {
'--Button-gap': '1.2rem',
transition: 'background-color 0.2s, color 0.2s',
// minWidth: 160,
},
};
} as const,
} as const;
const promptButtonSx: SxProps = {
@@ -75,7 +75,7 @@ export function ComposerTextAreaActions(props: {
return null;
return (
<Box sx={textAreaSx}>
<Box sx={composerTextAreaSx}>
{/* In-Reference-To bubbles */}
{props.inReferenceTo?.map((item, index) => (
@@ -0,0 +1,76 @@
import * as React from 'react';
import { Box, Button } from '@mui/joy';
import AutoFixHighIcon from '@mui/icons-material/AutoFixHigh';
import { composerTextAreaSx } from './ComposerTextAreaActions';
import { imaginePromptFromTextOrThrow } from '~/modules/aifn/imagine/imaginePromptFromText';
const _style = {
enhance: {
minWidth: 170,
mx: 0.625,
pr: 2,
border: '1px solid',
borderColor: 'warning.outlinedBorder',
boxShadow: '0px 4px 4px -4px rgb(var(--joy-palette-warning-darkChannel) / 20%)',
transition: 'background-color 0.14s',
justifyContent: 'space-between',
} as const,
gone: {
visibility: 'hidden',
} as const,
} as const;
export function ComposerTextAreaDrawActions(props: {
composerText: string,
onReplaceText: (text: string) => void,
}) {
// state
const [isSimpleEnhancing, setIsSimpleEnhancing] = React.useState(false);
// derived
const trimmedPrompt = props.composerText.trim();
const userHasText = trimmedPrompt.length >= 3;
const { onReplaceText } = props;
const handleSimpleEnhance = React.useCallback(async () => {
if (!trimmedPrompt || isSimpleEnhancing) return;
setIsSimpleEnhancing(true);
const improvedPrompt = await imaginePromptFromTextOrThrow(trimmedPrompt, 'DEV')
.catch(console.error);
if (improvedPrompt)
onReplaceText(improvedPrompt);
setIsSimpleEnhancing(false);
}, [isSimpleEnhancing, onReplaceText, trimmedPrompt]);
return (
<Box sx={composerTextAreaSx}>
{/* Enhance button */}
<Box sx={{ ml: 'auto' }}>
<Button
size='sm'
variant='soft'
color='warning'
disabled={!userHasText}
loading={isSimpleEnhancing}
loadingPosition='end'
// className={promptButtonClass}
endDecorator={<AutoFixHighIcon sx={{ fontSize: '20px' }} />}
onClick={handleSimpleEnhance}
sx={!userHasText ? _style.gone : _style.enhance}
>
{isSimpleEnhancing ? 'Enhancing...' : 'Enhance Prompt'}
</Button>
</Box>
</Box>
);
}
@@ -6,7 +6,7 @@ import { Box, ColorPaletteProp, Tooltip } from '@mui/joy';
import { DPricingChatGenerate, getLlmCostForTokens } from '~/common/stores/llms/llms.pricing';
import { adjustContentScaling, themeScalingMap } from '~/common/app.theme';
import { formatModelsCost } from '~/common/util/costUtils';
import { useUIContentScaling } from '~/common/state/store-ui';
import { useUIContentScaling } from '~/common/stores/store-ui';
export function tokenCountsMathAndMessage(tokenLimit: number | 0, directTokens: number, historyTokens?: number, responseMaxTokens?: number, chatPricing?: DPricingChatGenerate): {
@@ -0,0 +1,109 @@
import * as React from 'react';
import type { DLLM } from '~/common/stores/llms/llms.types';
import { estimateTextTokens } from '~/common/stores/chat/chat.tokens';
/**
* Efficient hook that calculates token count for text with debouncing and deadline,
* and only updates when the token count changes.
*
* @param text The text to count tokens for.
* @param llm The LLM (includes the config) we perform the token count FOR.
 * @param debounceMs The minimum time between updates (the timer restarts on every change)
* @param deadlineMs The maximum time between updates (fires even if the text is still changing)
*/
export function useTextTokenCount(
text: string,
llm: DLLM | null,
debounceMs: number = 300,
deadlineMs: number = 1200,
): number | undefined {
// state: text ref so the calculation callback can read the latest value
const lastTextRef = React.useRef<string>(undefined);
// state
const [tokenCount, setTokenCount] = React.useState<number | undefined>(undefined);
const lastTokenCountRef = React.useRef<number | undefined>(undefined);
const resetTokenCount = React.useCallback((value: number | undefined = 0) => {
if (lastTokenCountRef.current === value) return;
lastTokenCountRef.current = value;
setTokenCount(value);
}, []);
// Timers: Debounced/Deadlined
const debounceTimerRef = React.useRef<ReturnType<typeof setTimeout>>(undefined);
const deadlineTimerRef = React.useRef<ReturnType<typeof setTimeout>>(undefined);
const clearTimers = React.useCallback((clearDebounce: boolean = true, clearDeadline: boolean = true) => {
if (clearDebounce && debounceTimerRef.current) {
clearTimeout(debounceTimerRef.current);
debounceTimerRef.current = undefined;
}
if (clearDeadline && deadlineTimerRef.current) {
clearTimeout(deadlineTimerRef.current);
deadlineTimerRef.current = undefined;
}
}, []);
// tokens calculation, given the input text and LLM (which includes the LLM configuration)
// NOTE: should we extend this to fragments (images, etc.)?
const calculateAndUpdateTextTokens = React.useCallback(() => {
// no llm: can't count
const currentText = lastTextRef.current;
if (!llm || currentText === undefined) {
resetTokenCount(undefined);
return;
}
// [HEAVY] compute tokens
const newTextTokens = !currentText ? 0
: estimateTextTokens(currentText, llm, 'useTextTokenCount');
// only update state if changed
if (newTextTokens !== lastTokenCountRef.current) {
lastTokenCountRef.current = newTextTokens;
setTokenCount(newTextTokens);
}
// clear both timers since we're current now
clearTimers(true, true);
}, [clearTimers, llm, resetTokenCount]);
// debounce mechanics
React.useEffect(() => {
// if there's no LLM, we can't do anything
if (!llm || text === undefined) {
resetTokenCount(undefined);
return;
}
// update text reference for the calculation function
lastTextRef.current = text;
// restart the debounce timer
clearTimers(true, false);
debounceTimerRef.current = setTimeout(calculateAndUpdateTextTokens, debounceMs);
// set a deadline timer if one isn't already running
if (!deadlineTimerRef.current && deadlineMs > debounceMs)
deadlineTimerRef.current = setTimeout(calculateAndUpdateTextTokens, deadlineMs);
}, [calculateAndUpdateTextTokens, clearTimers, deadlineMs, debounceMs, llm, resetTokenCount, text]);
// cleanup at unmount
React.useEffect(() => () => clearTimers(true, true), [clearTimers]);
return tokenCount;
}
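The debounce-with-deadline policy above can be restated as a pure function over change timestamps: every change restarts the debounce timer, the deadline timer is armed only when none is pending, and a fire clears both. The sketch below (a hypothetical helper for illustration, not code from the repo; it models steady-state timer scheduling, not React state) makes the two-timer interplay explicit.

```typescript
// Model of useTextTokenCount's timer policy: given the times at which the text
// changed, return the times at which the token recount would fire.
function fireTimes(changes: number[], debounceMs: number, deadlineMs: number): number[] {
  const fires: number[] = [];
  let debounceAt: number | null = null; // pending debounce fire time
  let deadlineAt: number | null = null; // pending deadline fire time
  for (const t of changes) {
    // a fire scheduled before this change happens first, and clears both timers
    const next = Math.min(debounceAt ?? Infinity, deadlineAt ?? Infinity);
    if (next <= t) {
      fires.push(next);
      debounceAt = deadlineAt = null;
    }
    // every change restarts the debounce timer...
    debounceAt = t + debounceMs;
    // ...but the deadline timer is only armed if one isn't already running
    if (deadlineAt === null && deadlineMs > debounceMs)
      deadlineAt = t + deadlineMs;
  }
  const last = Math.min(debounceAt ?? Infinity, deadlineAt ?? Infinity);
  if (last !== Infinity) fires.push(last);
  return fires;
}
```

With the hook's defaults (300ms debounce, 1200ms deadline), three quick edits at t=0, 100, 200 produce a single recount at t=500: the debounce keeps rolling, and the deadline guarantees a recount no later than 1200ms after the first unprocessed change even while typing continues.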
@@ -0,0 +1,151 @@
import * as React from 'react';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
// configuration
const HIDE_DELAY = 1500; // milliseconds before hiding after mouse leaves
const FORCE_SHOW_DURATION = 3000; // milliseconds to keep shown after user interaction
const compressibleStyle = {
minHeight: 0, // makes the compressor collapse this
overflow: 'hidden', // when collapsing cuts the content
contain: 'paint', // improves performance by limiting the area to paint
// Note: the following in the composer's style would make for a much better animation
// sx={{
// // Add slide animation for both beam and auto-hide
// transition: 'transform 0.3s cubic-bezier(0.4, 0, 0.2, 1)',
// transform: composerAutoHide.isHidden ? 'translateY(100%)' : 'translateY(0)',
// }}
} as const;
const _styles = {
compressorClosed: {
display: 'grid',
gridTemplateRows: '0fr',
transition: 'grid-template-rows 0.3s cubic-bezier(0.4, 0, 0.2, 1)',
} as const,
compressorOpen: {
display: 'grid',
gridTemplateRows: '1fr',
transition: 'grid-template-rows 0.3s cubic-bezier(0.4, 0, 0.2, 1)',
} as const,
detector: {
position: 'fixed',
bottom: 0,
left: 0,
right: 0,
height: '2rem',
backgroundColor: 'rgba(var(--joy-palette-neutral-mainChannel) / 0.1)',
// backgroundColor: { xs: 'rgba(var(--joy-palette-neutral-mainChannel) / 0.1)', md: 'transparent' },
zIndex: 20,
} as const,
} as const;
export function useComposerAutoHide(forceHide: boolean, isContentful: boolean) {
// state
const [isAutoHidden, setAutoHidden] = React.useState(false);
const [isFocused, setIsFocused] = React.useState(false);
const [isHovering, setIsHovering] = React.useState(false);
const [forceShowUntil, setForceShowUntil] = React.useState<number>(0);
// external state
const autoHideEnabled = useUXLabsStore((state) => state.labsAutoHideComposer);
const hideTimeoutRef = React.useRef<NodeJS.Timeout | undefined>(undefined);
// Force show the composer for a duration (e.g., after sending a message)
const forceShow = React.useCallback((durationMs: number = FORCE_SHOW_DURATION) => {
setForceShowUntil(Date.now() + Math.max(1000, durationMs));
setAutoHidden(false);
}, []);
const showComposer = React.useCallback(() => {
if (hideTimeoutRef.current) {
clearTimeout(hideTimeoutRef.current);
hideTimeoutRef.current = undefined;
}
setAutoHidden(false);
}, []);
const hideComposerDelayed = React.useCallback(() => {
if (hideTimeoutRef.current)
clearTimeout(hideTimeoutRef.current);
hideTimeoutRef.current = setTimeout(() => {
setAutoHidden(true);
setIsFocused(false); // reset focus state when hiding
hideTimeoutRef.current = undefined;
}, HIDE_DELAY);
}, []);
// Effect: Handle auto-hide logic based on various conditions
const shouldStayVisible = isContentful || isHovering || isFocused || forceShowUntil > Date.now();
const shouldAutoHide = autoHideEnabled && !shouldStayVisible;
React.useEffect(() => {
if (shouldAutoHide)
hideComposerDelayed();
else
showComposer();
}, [hideComposerDelayed, shouldAutoHide, showComposer]);
// Clear force show timer when it expires
React.useEffect(() => {
if (forceShowUntil > 0) {
const timeout = setTimeout(() => {
setForceShowUntil(0);
}, forceShowUntil - Date.now());
return () => clearTimeout(timeout);
}
}, [forceShowUntil]);
// Cleanup on unmount
React.useEffect(() => {
return () => {
if (hideTimeoutRef.current)
clearTimeout(hideTimeoutRef.current);
};
}, []);
const doHide = forceHide || (autoHideEnabled && isAutoHidden);
const compressorProps = React.useMemo(() => ({
onMouseEnter: !autoHideEnabled ? undefined : () => setIsHovering(true),
onMouseLeave: !autoHideEnabled ? undefined : () => setIsHovering(false),
onFocusCapture: !autoHideEnabled ? undefined : () => setIsFocused(true),
onBlurCapture: !autoHideEnabled ? undefined : () => setIsFocused(false),
sx: doHide ? _styles.compressorClosed : _styles.compressorOpen,
}), [autoHideEnabled, doHide]);
const detectorProps = React.useMemo(() => ({
onMouseEnter: () => {
setIsHovering(true);
showComposer();
},
onMouseLeave: () => {
setIsHovering(false);
},
sx: _styles.detector,
}), [showComposer]);
return {
isHidden: doHide,
compressorProps,
compressibleStyle,
detectorProps,
forceShow,
};
}
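The hide decision in the hook above combines a hard override, the labs flag, and four "stay visible" conditions. Ignoring the 1500ms grace timer (`HIDE_DELAY`), the steady-state logic reduces to a pure predicate, sketched here as a hypothetical helper for illustration:

```typescript
// Inputs mirroring the state read by useComposerAutoHide.
interface AutoHideInputs {
  autoHideEnabled: boolean; // the labsAutoHideComposer UX-labs flag
  forceHide: boolean;       // explicit external hide (takes precedence)
  isContentful: boolean;
  isHovering: boolean;
  isFocused: boolean;
  forceShowUntil: number;   // epoch ms; 0 when no force-show is active
  now: number;              // epoch ms
}

function shouldHideComposer(i: AutoHideInputs): boolean {
  if (i.forceHide) return true;        // explicit hide always wins
  if (!i.autoHideEnabled) return false; // labs flag off: composer stays visible
  const stayVisible = i.isContentful || i.isHovering || i.isFocused || i.forceShowUntil > i.now;
  return !stayVisible;
}
```

When the predicate flips to hidden, the component swaps `_styles.compressorOpen` for `_styles.compressorClosed`, animating `grid-template-rows` from `1fr` to `0fr` so the composer collapses smoothly without measuring its height.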
@@ -3,7 +3,7 @@ import * as React from 'react';
import { SvgIcon } from '@mui/joy';
import AttachFileRoundedIcon from '@mui/icons-material/AttachFileRounded';
import { useDragDropDataTransfer } from '~/common/components/useDragDropDataTransfer';
import { useDragDropDataTransfer } from '~/common/components/dnd-dt/useDragDropDataTransfer';
export function useComposerDragDrop(
@@ -3,19 +3,56 @@ import { useShallow } from 'zustand/react/shallow';
import { Box, IconButton, Typography } from '@mui/joy';
import CloseRoundedIcon from '@mui/icons-material/CloseRounded';
import FullscreenRoundedIcon from '@mui/icons-material/FullscreenRounded';
import { BeamStoreApi, useBeamStore } from '~/modules/beam/store-beam.hooks';
import { AppBreadcrumbs } from '~/common/components/AppBreadcrumbs';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { KeyStroke } from '~/common/components/KeyStroke';
import { Release } from '~/common/app.release';
import { ShortcutKey, useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { animationBackgroundBeamGather, animationColorBeamScatterINV, animationEnterBelow } from '~/common/util/animUtils';
export function ChatBarAltBeam(props: {
const _styles = {
bar: {
// layout
display: 'flex',
alignItems: 'center',
gap: { xs: 1, md: 2 } as const,
minWidth: 0, // ensures the breadcrumbs don't overflow
// Customize breadcrumbs to enable collapse of the first one (chat title)
'& nav': {
overflow: 'hidden',
},
'& nav > ol': {
flexWrap: 'nowrap',
} as const,
'& nav > ol > li:first-of-type': {
overflow: 'hidden',
maxWidth: { xs: '110px', md: '140px' },
} as const,
} as const,
barScatter: {
animation: `${animationColorBeamScatterINV} 5s infinite, ${animationEnterBelow} 0.6s`,
} as const,
barGather: {
animation: `${animationBackgroundBeamGather} 3s infinite, ${animationEnterBelow} 0.6s`,
px: 1.5, py: 0.5,
} as const,
} as const;
export function ChatBarBeam(props: {
beamStore: BeamStoreApi,
conversationTitle: string,
isMobile: boolean,
}) {
@@ -66,41 +103,45 @@ export function ChatBarAltBeam(props: {
return (
<Box sx={{ display: 'flex', gap: { xs: 1, md: 2 }, alignItems: 'center' }}>
<Box sx={_styles.bar}>
{/* Title & Status */}
<Typography level='title-md'>
<Box
component='span'
sx={
isGatheringAny ? { animation: `${animationBackgroundBeamGather} 3s infinite, ${animationEnterBelow} 0.6s`, px: 1.5, py: 0.5 }
: isScattering ? { animation: `${animationColorBeamScatterINV} 5s infinite, ${animationEnterBelow} 0.6s` }
: { fontWeight: 'lg' }
}>
{isGatheringAny ? 'Merging...' : isScattering ? 'Beaming...' : isEditMode ? 'Beam Edit' : 'Beam'}
</Box>
{(!isGatheringAny && !isScattering && !isEditMode) && ' Mode'}
</Typography>
{/* Right Close Icon */}
<Box sx={{ display: 'flex' }}>
{/* [desktop] maximize button, or a disabled spacer */}
{!props.isMobile && (
<GoodTooltip variantOutlined title={<Box sx={{ p: 1 }}>Maximize</Box>}>
<IconButton size='sm' onClick={handleMaximizeBeam}>
<FullscreenRoundedIcon />
</IconButton>
</GoodTooltip>
)}
<GoodTooltip variantOutlined title={<Box sx={{ p: 1, display: 'flex', flexDirection: 'column', gap: 1 }}>Back to Chat <KeyStroke variant='outlined' combo='Esc' /></Box>}>
<IconButton aria-label='Close' size='sm' onClick={handleCloseBeam}>
<CloseRoundedIcon />
{/* [desktop] maximize button, or a disabled spacer */}
{!props.isMobile && (
<GoodTooltip variantOutlined title={<Box sx={{ p: 1 }}>Maximize Beam</Box>}>
<IconButton size='sm' onClick={handleMaximizeBeam}>
{/*<OpenInFullIcon sx={{ fontSize: 'md' }} />*/}
</IconButton>
</GoodTooltip>
)}
</Box>
<AppBreadcrumbs rootTitle={
props.conversationTitle?.length > 3
? <Box className='agi-ellipsize'>{props.conversationTitle || 'Chat'}</Box>
: undefined
}>
{/* Title & Status */}
<Typography level='title-md' noWrap>
<Box
component='span'
sx={Release.Features.LIGHTER_ANIMATIONS ? undefined
: isGatheringAny ? _styles.barGather
: isScattering ? _styles.barScatter
: undefined}
>
{isGatheringAny ? 'Merging...' : isScattering ? 'Beaming...' : isEditMode ? 'Beam Edit' : 'Beam'}
</Box>
{(!isGatheringAny && !isScattering && !isEditMode) && ' Mode'}
</Typography>
</AppBreadcrumbs>
{/* Right Close Icon */}
<GoodTooltip variantOutlined title={<Box sx={{ p: 1, display: 'flex', flexDirection: 'column', gap: 1 }}>Back to Chat <KeyStroke variant='outlined' combo='Esc' /></Box>}>
<IconButton aria-label='Close' size='sm' onClick={handleCloseBeam}>
<CloseRoundedIcon />
</IconButton>
</GoodTooltip>
{/* Confirmation Modal */}
@@ -8,7 +8,7 @@ import { usePersonaIdDropdown } from './usePersonaDropdown';
import { useFolderDropdown } from './useFolderDropdown';
export function ChatBarDropdowns(props: {
export function ChatBarChat(props: {
conversationId: DConversationId | null;
llmDropdownRef: React.Ref<OptimaBarControlMethods>;
personaDropdownRef: React.Ref<OptimaBarControlMethods>;
@@ -1,27 +1,30 @@
import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import { Box, IconButton, ListItemButton, ListItemDecorator } from '@mui/joy';
import ArrowForwardRoundedIcon from '@mui/icons-material/ArrowForwardRounded';
import BuildCircleIcon from '@mui/icons-material/BuildCircle';
import SettingsIcon from '@mui/icons-material/Settings';
import { findModelVendor } from '~/modules/llms/vendors/vendors.registry';
import type { DLLM, DLLMId } from '~/common/stores/llms/llms.types';
import type { DModelsServiceId } from '~/common/stores/llms/modelsservice.types';
import type { DModelsServiceId } from '~/common/stores/llms/llms.service.types';
import { DLLM, DLLMId, isLLMVisible } from '~/common/stores/llms/llms.types';
import { DebouncedInputMemo } from '~/common/components/DebouncedInput';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { KeyStroke } from '~/common/components/KeyStroke';
import { OptimaBarControlMethods, OptimaBarDropdownMemo, OptimaDropdownItems } from '~/common/layout/optima/bar/OptimaBarDropdown';
import { findModelsServiceOrNull, llmsStoreActions, useModelsStore } from '~/common/stores/llms/store-llms';
import { findModelsServiceOrNull } from '~/common/stores/llms/store-llms';
import { isDeepEqual } from '~/common/util/hooks/useDeep';
import { optimaActions, optimaOpenModels } from '~/common/layout/optima/useOptima';
import { useAllLLMs } from '~/common/stores/llms/hooks/useAllLLMs';
import { useModelDomain } from '~/common/stores/llms/hooks/useModelDomain';
import { useUIComplexityMode } from '~/common/stores/store-ui';
function LLMDropdown(props: {
dropdownRef: React.Ref<OptimaBarControlMethods>,
llms: DLLM[],
chatLlmId: DLLMId | null,
llms: ReadonlyArray<DLLM>,
chatLlmId: undefined | DLLMId | null,
setChatLlmId: (llmId: DLLMId | null) => void,
placeholder?: string,
}) {
@@ -29,10 +32,14 @@ function LLMDropdown(props: {
// state
const [filterString, setfilterString] = React.useState<string | null>(null);
// external state
const uiComplexityMode = useUIComplexityMode();
const showSymbols = uiComplexityMode !== 'minimal';
// derived state
const { chatLlmId, llms, setChatLlmId } = props;
const llmsCount = llms.filter(llm => !llm.hidden).length;
const llmsCount = llms.filter(isLLMVisible).length;
const showFilter = llmsCount >= 50;
const handleChatLLMChange = React.useCallback((value: DLLMId | null) => {
@@ -44,8 +51,8 @@ function LLMDropdown(props: {
}, [chatLlmId]);
// dropdown items - chached
const stabilizeLlmOptions = React.useRef<OptimaDropdownItems>();
// dropdown items - cached
const stabilizeLlmOptions = React.useRef<OptimaDropdownItems>(undefined);
const llmDropdownItems: OptimaDropdownItems = React.useMemo(() => {
const llmItems: OptimaDropdownItems = {};
@@ -62,7 +69,7 @@ function LLMDropdown(props: {
return false;
// filter-out hidden models from the dropdown
return lcFilterString ? true : !llm.hidden;
return lcFilterString ? true : isLLMVisible(llm);
});
for (const llm of filteredLLMs) {
@@ -83,6 +90,7 @@ function LLMDropdown(props: {
// add the model item
llmItems[llm.id] = {
title: llm.label,
...(llm.userStarred ? { symbol: '⭐' } : {}),
// icon: llm.id.startsWith('some vendor') ? <VendorIcon /> : undefined,
};
}
@@ -154,6 +162,9 @@ function LLMDropdown(props: {
// }, [chatLlmId]);
// Zero State - no models available
const hasDropdownOptions = Object.keys(llmDropdownItems || {}).length > 0;
// "Models Setup" button
const llmDropdownAppendOptions = React.useMemo(() => <>
@@ -167,15 +178,18 @@ function LLMDropdown(props: {
{/* </ListItemButton>*/}
{/*)}*/}
<ListItemButton key='menu-llms' onClick={optimaOpenModels} sx={{ backgroundColor: 'background.surface' }}>
<ListItemDecorator><BuildCircleIcon color='success' /></ListItemDecorator>
<Box sx={{ flexGrow: 1, display: 'flex', justifyContent: 'space-between', gap: 1 }}>
Models
<KeyStroke variant='outlined' combo='Ctrl + Shift + M' sx={{ ml: 2 }} />
<ListItemButton key='menu-llms' onClick={optimaOpenModels} sx={{ backgroundColor: 'background.surface', py: 'calc(2 * var(--ListDivider-gap))' }}>
<ListItemDecorator>{!hasDropdownOptions ? '⚠️' : <BuildCircleIcon color='success' />}</ListItemDecorator>
<Box sx={{ flexGrow: 1, display: 'flex', justifyContent: 'space-between', gap: 1, alignItems: 'center' }}>
{!hasDropdownOptions ? 'Add Models' : 'Models'}
{/*<Box sx={{ display: 'flex', alignItems: 'center', gap: 1 }}>*/}
{/* <KeyStroke variant='outlined' size='sm' combo='Ctrl + Shift + M' sx={{ ml: 2, bgcolor: 'background.popup' }} />*/}
<ArrowForwardRoundedIcon sx={{ ml: 'auto', fontSize: 'xl' }} />
{/*</Box>*/}
</Box>
</ListItemButton>
</>, []);
</>, [hasDropdownOptions]);
return (
@@ -184,26 +198,25 @@ function LLMDropdown(props: {
items={llmDropdownItems}
value={chatLlmId}
onChange={handleChatLLMChange}
placeholder={props.placeholder || 'Models …'}
placeholder={props.placeholder || '⚠️ Models …'}
prependOption={llmDropdownPrependOptions}
appendOption={llmDropdownAppendOptions}
activeEndDecorator={llmDropdownButton}
showSymbols={showSymbols ? 'compact' : false}
/>
);
}
export function useChatLLMDropdown(dropdownRef: React.Ref<OptimaBarControlMethods>) {
// external state
const { llms, chatLLMId } = useModelsStore(useShallow(state => ({
llms: state.llms, // NOTE: we don't need a deep comparison as we reference the same array
chatLLMId: state.chatLLMId,
})));
const chatLLMDropdown = React.useMemo(
() => <LLMDropdown dropdownRef={dropdownRef} llms={llms} chatLlmId={chatLLMId} setChatLlmId={llmsStoreActions().setChatLLMId} />,
[chatLLMId, dropdownRef, llms],
);
// external state
const llms = useAllLLMs();
const { domainModelId: chatLLMId, assignDomainModelId: setChatLLMId } = useModelDomain('primaryChat');
const chatLLMDropdown = React.useMemo(() => {
return <LLMDropdown dropdownRef={dropdownRef} llms={llms} chatLlmId={chatLLMId} setChatLlmId={setChatLLMId} />;
}, [chatLLMId, dropdownRef, llms, setChatLLMId]);
return { chatLLMId, chatLLMDropdown };
}
@@ -6,7 +6,7 @@ import { SystemPurposeId, SystemPurposes } from '../../../../data';
import { DConversationId } from '~/common/stores/chat/chat.conversation';
import { OptimaBarControlMethods, OptimaBarDropdownMemo } from '~/common/layout/optima/bar/OptimaBarDropdown';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useUIComplexityIsMinimal } from '~/common/state/store-ui';
import { useUIComplexityIsMinimal } from '~/common/stores/store-ui';
import { usePurposeStore } from '../persona-selector/store-purposes';
@@ -1,9 +1,9 @@
import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import { useVirtualizer } from '@tanstack/react-virtual';
import { Box, Button, Dropdown, IconButton, ListDivider, ListItem, ListItemButton, ListItemDecorator, Menu, MenuButton, MenuItem, Tooltip, Typography } from '@mui/joy';
import AddIcon from '@mui/icons-material/Add';
import ArchiveOutlinedIcon from '@mui/icons-material/ArchiveOutlined';
import AttachFileRoundedIcon from '@mui/icons-material/AttachFileRounded';
import CheckRoundedIcon from '@mui/icons-material/CheckRounded';
import ClearIcon from '@mui/icons-material/Clear';
@@ -21,13 +21,14 @@ import { DFolder, useFolderStore } from '~/common/stores/folders/store-chat-fold
import { DebouncedInputMemo } from '~/common/components/DebouncedInput';
import { FoldersToggleOff } from '~/common/components/icons/FoldersToggleOff';
import { FoldersToggleOn } from '~/common/components/icons/FoldersToggleOn';
import { OPTIMA_DRAWER_BACKGROUND } from '~/common/layout/optima/optima.config';
import { OptimaDrawerHeader } from '~/common/layout/optima/drawer/OptimaDrawerHeader';
import { OptimaDrawerList } from '~/common/layout/optima/drawer/OptimaDrawerList';
import { capitalizeFirstLetter } from '~/common/util/textUtils';
import { getIsMobile } from '~/common/components/useMatchMedia';
import { optimaCloseDrawer } from '~/common/layout/optima/useOptima';
import { themeScalingMap, themeZIndexOverMobileDrawer } from '~/common/app.theme';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { useUIPreferencesStore } from '~/common/stores/store-ui';
import { ChatDrawerItemMemo, FolderChangeRequest } from './ChatDrawerItem';
import { ChatFolderList } from './folders/ChatFolderList';
@@ -82,6 +83,7 @@ function ChatDrawer(props: {
const [searchDepth, setSearchDepth] = React.useState<ChatSearchDepth>('attachments'); // default: full search
const [debouncedSearchQuery, setDebouncedSearchQuery] = React.useState('');
const [folderChangeRequest, setFolderChangeRequest] = React.useState<FolderChangeRequest | null>(null);
const [renderLimit, setRenderLimit] = React.useState(200); // progressive loading limit
// external state
const {
@@ -89,17 +91,28 @@ function ChatDrawer(props: {
filterHasDocFragments, toggleFilterHasDocFragments,
filterHasImageAssets, toggleFilterHasImageAssets,
filterHasStars, toggleFilterHasStars,
filterIsArchived, toggleFilterIsArchived,
showPersonaIcons, toggleShowPersonaIcons,
showRelativeSize, toggleShowRelativeSize,
} = useChatDrawerFilters();
const { activeFolder, allFolders, enableFolders, toggleEnableFolders } = useFolders(props.activeFolderId);
const { filteredChatsCount, filteredChatIDs, filteredChatsAreEmpty, filteredChatsBarBasis, filteredChatsIncludeActive, renderNavItems } = useChatDrawerRenderItems(
props.activeConversationId, props.chatPanesConversationIds, debouncedSearchQuery, activeFolder, allFolders, filterHasStars, filterHasImageAssets, filterHasDocFragments, navGrouping, searchSorting, showRelativeSize, searchDepth,
props.activeConversationId, props.chatPanesConversationIds, debouncedSearchQuery, activeFolder, allFolders, filterHasStars, filterHasImageAssets, filterHasDocFragments, filterIsArchived, navGrouping, searchSorting, showRelativeSize, searchDepth,
);
const [uiComplexityMode, contentScaling] = useUIPreferencesStore(useShallow((state) => [state.complexityMode, state.contentScaling]));
const zenMode = uiComplexityMode === 'minimal';
const gifMode = uiComplexityMode === 'extra';
// Calculate chat counts per folder
// TODO: restore this, but also check if conversations are active? or move the computation to the renderNavItems hook?
// const folderChatCounts = React.useMemo(() => {
// const counts: Record<string, number> = {};
// allFolders.forEach(folder => {
// counts[folder.id] = folder.conversationIds.length;
// });
// return counts;
// }, [allFolders]);
// New/Activate/Delete Conversation
@@ -152,6 +165,30 @@ function ChatDrawer(props: {
}, []);
// Render limit - load more items
const handleRenderLimitIncrease = React.useCallback(() => {
setRenderLimit(prevValue => {
// Thresholds: 200 --(+200)--> 400 --(+500)--> 900 --(+1000)--> 1900 --> Infinity --> 200 (cycle)
if (prevValue === 200)
return (filteredChatsCount > 400 ? 400 : Infinity); // if less than 400, show all
else if (prevValue === 400)
return (filteredChatsCount > 900 ? 900 : Infinity); // if less than 900, show all
else if (prevValue === 900)
return (filteredChatsCount > 1900 ? 1900 : Infinity); // if less than 1900, show all
else if (prevValue === 1900)
return Infinity; // no limit
else
return 200; // go back to optimized view
});
}, [filteredChatsCount]);
// Reset render limit when search query changes
React.useEffect(() => {
setRenderLimit(200);
}, [debouncedSearchQuery]);
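The progressive render-limit cycle implemented above (200 → 400 → 900 → 1900 → ∞ → back to 200, jumping straight to ∞ whenever the remaining chats would fit) can be factored into a pure function. The name `nextRenderLimit` is illustrative, not from the source:

```typescript
// Next value of the drawer's render limit, given the current limit and the
// number of filtered chats. Mirrors the threshold comment:
// 200 --(+200)--> 400 --(+500)--> 900 --(+1000)--> 1900 --> Infinity --> 200 (cycle)
function nextRenderLimit(prev: number, filteredChatsCount: number): number {
  if (prev === 200) return filteredChatsCount > 400 ? 400 : Infinity;   // fewer than 400: show all
  if (prev === 400) return filteredChatsCount > 900 ? 900 : Infinity;   // fewer than 900: show all
  if (prev === 900) return filteredChatsCount > 1900 ? 1900 : Infinity; // fewer than 1900: show all
  if (prev === 1900) return Infinity;                                   // no limit
  return 200; // Infinity (or any other value) cycles back to the optimized view
}
```

Extracting the transition this way makes the cycle trivially testable without mounting the component.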
// memoize the group dropdown
const { isSearching } = isDrawerSearching(debouncedSearchQuery);
const groupingComponent = React.useMemo(() => (
@@ -190,6 +227,10 @@ function ChatDrawer(props: {
<ListItemDecorator>{filterHasStars && <CheckRoundedIcon />}</ListItemDecorator>
Starred <StarOutlineRoundedIcon />
</MenuItem>
<MenuItem onClick={toggleFilterIsArchived}>
<ListItemDecorator>{filterIsArchived && <CheckRoundedIcon />}</ListItemDecorator>
Archived <ArchiveOutlinedIcon />
</MenuItem>
<MenuItem onClick={toggleFilterHasImageAssets}>
<ListItemDecorator>{filterHasImageAssets && <CheckRoundedIcon />}</ListItemDecorator>
Has Images <FormatPaintOutlinedIcon />
@@ -246,49 +287,11 @@ function ChatDrawer(props: {
)}
</Dropdown>
), [
filterHasDocFragments, filterHasImageAssets, filterHasStars, isSearching, navGrouping, searchSorting, searchDepth, showPersonaIcons, showRelativeSize,
toggleFilterHasDocFragments, toggleFilterHasImageAssets, toggleFilterHasStars, toggleShowPersonaIcons, toggleShowRelativeSize,
filterHasDocFragments, filterHasImageAssets, filterHasStars, isSearching, navGrouping, searchSorting, searchDepth, filterIsArchived, showPersonaIcons, showRelativeSize,
toggleFilterHasDocFragments, toggleFilterHasImageAssets, toggleFilterHasStars, toggleFilterIsArchived, toggleShowPersonaIcons, toggleShowRelativeSize,
]);
// Virtualize the list
const parentRef = React.useRef<HTMLDivElement>(null);
const virtEstimateSize = React.useCallback((index: number) => {
const item = renderNavItems[index];
switch (item.type) {
case 'nav-item-group':
return 34;
case 'nav-item-chat-data':
return item.isActive ? 80 : 36;
case 'nav-item-info-message':
return 34;
}
}, [renderNavItems]);
const virtUniqueKeys = React.useMemo(() => renderNavItems.map((item, idx) => {
switch (item.type) {
case 'nav-item-group':
return `g-${item.title}`;
case 'nav-item-chat-data':
return `c-${item.conversationId}${item.isActive ? '-active' : ''}`;
case 'nav-item-info-message':
return `i-${idx}`;
}
}), [renderNavItems]);
const virtUniqueKey = React.useCallback((index: number) => virtUniqueKeys[index], [virtUniqueKeys]);
const rowVirtualizer = useVirtualizer({
count: renderNavItems.length,
getScrollElement: () => parentRef.current,
estimateSize: virtEstimateSize,
getItemKey: virtUniqueKey,
overscan: 0,
});
return <>
{/* Drawer Header */}
@@ -314,6 +317,7 @@ function ChatDrawer(props: {
{enableFolders && (
<ChatFolderList
folders={allFolders}
// folderChatCounts={folderChatCounts}
contentScaling={contentScaling}
activeFolderId={props.activeFolderId}
onFolderSelect={props.setActiveFolderId}
@@ -367,94 +371,77 @@ function ChatDrawer(props: {
// transition: 'box-shadow 0.2s',
}}
>
<ListItemDecorator><AddIcon /></ListItemDecorator>
<ListItemDecorator><AddIcon sx={{ fontSize: '' }} /></ListItemDecorator>
New chat
</Button>
</Box>
{/* Chat Titles List (shrinks at half the rate of the Folders List) */}
<Box
ref={parentRef}
sx={{
flex: 1,
// flexGrow: 1,
// flexShrink: 1,
// flexBasis: '20rem',
overflowY: 'auto',
...themeScalingMap[contentScaling].chatDrawerItemSx,
}}
>
<div
style={{
height: `${rowVirtualizer.getTotalSize()}px`,
width: '100%',
position: 'relative',
}}
>
{rowVirtualizer.getVirtualItems().map((virtualRow) => {
const item = renderNavItems[virtualRow.index];
return (
<div
key={virtualRow.key}
data-index={virtualRow.index}
ref={rowVirtualizer.measureElement}
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
transform: `translateY(${virtualRow.start}px)`,
}}
>
{item.type === 'nav-item-group' ? (
<Typography
level='body-xs'
sx={{
textAlign: 'center',
my: 1,
// my: 'calc(var(--ListItem-minHeight) / 4)',
// keeps the group header sticky to the top
position: 'sticky',
top: 0,
backgroundColor: 'background.popup',
zIndex: 1,
}}
>
{item.title}
</Typography>
) : item.type === 'nav-item-chat-data' ? (
<ChatDrawerItemMemo
item={item}
showSymbols={!showPersonaIcons ? false : zenMode ? false : gifMode ? 'gif' : true}
bottomBarBasis={filteredChatsBarBasis}
onConversationActivate={handleConversationActivate}
onConversationBranch={onConversationBranch}
onConversationDeleteNoConfirmation={handleConversationDeleteNoConfirmation}
onConversationExport={onConversationsExportDialog}
onConversationFolderChange={handleConversationFolderChange}
/>
) : item.type === 'nav-item-info-message' ? (
<Box sx={{ display: 'flex', alignItems: 'center', justifyContent: 'center', gap: 1, ml: 2 }}>
<Typography level='body-xs' sx={{ color: 'primary.softColor', my: 'calc(var(--ListItem-minHeight) / 4)' }}>
{filterHasStars && (
<StarOutlineRoundedIcon sx={{ color: 'primary.softColor', fontSize: 'xl', mb: -0.5, mr: 1 }} />
)}
{item.message}
</Typography>
{(filterHasStars || filterHasImageAssets || filterHasDocFragments) && (
<Tooltip title='Clear Filters'>
<IconButton size='sm' color='primary' onClick={clearFilters}>
<ClearIcon />
</IconButton>
</Tooltip>
)}
</Box>
) : 'Unknown item type'}
</div>
);
})}
</div>
<Box sx={{ flexGrow: 1, flexShrink: 1, flexBasis: '20rem', overflowY: 'auto', ...themeScalingMap[contentScaling].chatDrawerItemSx }}>
{renderNavItems.slice(0, renderLimit).map((item, idx) => item.type === 'nav-item-chat-data' ? (
<ChatDrawerItemMemo
key={'nav-chat-' + item.conversationId}
item={item}
showSymbols={!showPersonaIcons ? false : zenMode ? false : gifMode ? 'gif' : true}
bottomBarBasis={filteredChatsBarBasis}
onConversationActivate={handleConversationActivate}
onConversationBranch={onConversationBranch}
onConversationDeleteNoConfirmation={handleConversationDeleteNoConfirmation}
onConversationExport={onConversationsExportDialog}
onConversationFolderChange={handleConversationFolderChange}
/>
) : item.type === 'nav-item-group' ? (
<Typography key={'nav-divider-' + idx} level='body-xs' sx={{
textAlign: 'center',
my: 1,
// my: 'calc(var(--ListItem-minHeight) / 4)',
// keeps the group header sticky to the top
position: 'sticky',
top: 0,
backgroundColor: OPTIMA_DRAWER_BACKGROUND,
zIndex: 1,
}}>
{item.title}
</Typography>
) : item.type === 'nav-item-info-message' ? (
<Box key={'nav-info-' + idx} sx={{ display: 'flex', alignItems: 'center', justifyContent: 'center', gap: 1, ml: 2 }}>
<Typography level='body-xs' sx={{ color: 'primary.softColor', my: 'calc(var(--ListItem-minHeight) / 4)' }}>
{filterHasStars && <StarOutlineRoundedIcon sx={{ color: 'primary.softColor', fontSize: 'xl', mb: -0.5, mr: 1 }} />}
{item.message}
</Typography>
{(filterHasStars || filterHasImageAssets || filterHasDocFragments || filterIsArchived) && (
<Tooltip title='Clear Filters'>
<IconButton size='sm' color='primary' onClick={clearFilters}>
<ClearIcon />
</IconButton>
</Tooltip>
)}
</Box>
) : null,
)}
{/* Load More Button */}
{filteredChatsCount > 200 && (
<ListItem>
<ListItemButton
variant='soft'
onClick={handleRenderLimitIncrease}
sx={{ justifyContent: 'center', py: 3 }}
>
{renderLimit === Infinity
? 'Show less'
: (renderLimit === 200 && filteredChatsCount > 400)
? 'Show 200 more'
: (renderLimit === 400 && filteredChatsCount > 900)
? 'Show 500 more'
: (renderLimit === 900 && filteredChatsCount > 1900)
? 'Show 1000 more'
: 'Show all'
} {renderLimit !== Infinity && `(${filteredChatsCount - renderLimit} hidden)`}
</ListItemButton>
</ListItem>
)}
</Box>
<ListDivider sx={{ my: 0 }} />
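The Load More button's label ternary above can likewise be sketched as a small helper mirroring the same thresholds; `loadMoreLabel` is a hypothetical name for illustration, not an export of the source:

```typescript
// Label for the "Load More" button, given the current render limit and the
// filtered chat count; thresholds match the render-limit cycle (200/400/900/1900/∞).
function loadMoreLabel(renderLimit: number, filteredChatsCount: number): string {
  if (renderLimit === Infinity) return 'Show less';
  let label: string;
  if (renderLimit === 200 && filteredChatsCount > 400) label = 'Show 200 more';
  else if (renderLimit === 400 && filteredChatsCount > 900) label = 'Show 500 more';
  else if (renderLimit === 900 && filteredChatsCount > 1900) label = 'Show 1000 more';
  else label = 'Show all';
  // the hidden-count suffix is only rendered while a finite limit is active
  return `${label} (${filteredChatsCount - renderLimit} hidden)`;
}
```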
@@ -20,6 +20,7 @@ import { autoConversationTitle } from '~/modules/aifn/autotitle/autoTitle';
import type { DConversationId } from '~/common/stores/chat/chat.conversation';
import type { DFolder } from '~/common/stores/folders/store-chat-folders';
import { ANIM_BUSY_TYPING } from '~/common/util/dMessageUtils';
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { InlineTextarea } from '~/common/components/InlineTextarea';
import { isDeepEqual } from '~/common/util/hooks/useDeep';
import { useChatStore } from '~/common/stores/chat/store-chats';
@@ -58,12 +59,14 @@ export interface ChatNavigationItemData {
isEmpty: boolean;
isIncognito: boolean;
title: string;
isArchived: boolean;
userSymbol: string | undefined;
userFlagsSummary: string | undefined;
containsDocAttachments: boolean;
containsImageAssets: boolean;
folder: DFolder | null | undefined; // null: 'All', undefined: do not show folder select
updatedAt: number;
hasBeamOpen: boolean;
messageCount: number;
beingGenerated: boolean;
systemPurposeId: SystemPurposeId;
@@ -106,6 +109,7 @@ function ChatDrawerItem(props: {
containsDocAttachments,
containsImageAssets,
folder,
hasBeamOpen,
messageCount,
beingGenerated,
systemPurposeId,
@@ -210,8 +214,12 @@ function ChatDrawerItem(props: {
{/* Symbol, if globally enabled */}
{(props.showSymbols || isIncognito) && (
<ListItemDecorator>
{isIncognito ? (
<VisibilityOffIcon sx={{ fontSize: 'xl' }} />
{hasBeamOpen ? (
<ChatBeamIcon sx={{ fontSize: 'xl' }} />
) : isIncognito ? (
<Avatar variant='soft' sx={{ backgroundColor: `#9C27B022`, width: '1.5rem', height: '1.5rem' }}>
<VisibilityOffIcon sx={{ fontSize: 'md', color: `#9C27B0` }} />
</Avatar>
) : (beingGenerated && props.showSymbols === 'gif') ? (
<Avatar
alt='chat activity'
@@ -286,7 +294,7 @@ function ChatDrawerItem(props: {
</Box>
) : null}
</>, [beingGenerated, containsDocAttachments, containsImageAssets, handleTitleEditBegin, handleTitleEditCancel, handleTitleEditChange, isActive, isEditingTitle, isIncognito, isNew, personaImageURI, personaSymbol, props.showSymbols, searchFrequency, title, userFlagsSummary]);
</>, [beingGenerated, containsDocAttachments, containsImageAssets, handleTitleEditBegin, handleTitleEditCancel, handleTitleEditChange, hasBeamOpen, isActive, isEditingTitle, isIncognito, isNew, personaImageURI, personaSymbol, props.showSymbols, searchFrequency, title, userFlagsSummary]);
const progressBarFixedComponent = React.useMemo(() =>
progress > 0 && (
@@ -324,8 +332,26 @@ function ChatDrawerItem(props: {
'&:hover > button': {
opacity: 1, // fade in buttons when hovering, but by default wash them out a bit
},
// NOTE: we experimented with fading the action buttons in on hover, but the mobile behavior is unclear
// The buttons row had className='chat-actions'
// '& .chat-actions': {
// opacity: 0,
// transition: 'opacity 0.2s ease-in-out',
// },
// '&:hover .chat-actions': {
// opacity: 1,
// },
...(isIncognito && {
filter: 'brightness(0.5) contrast(0.5)',
backgroundColor: 'background.level2',
backgroundImage: 'repeating-linear-gradient(45deg, rgba(0,0,0,0.03), rgba(0,0,0,0.03) 10px, transparent 10px, transparent 20px)',
// border: 'none',
// border: '1px dashed',
borderColor: 'background.level3',
// purple icon to further indicate incognito mode
'& .MuiListItemDecorator-root': {
color: '#9C27B0',
},
// filter: 'brightness(0.5) contrast(0.5)',
}),
}}
>