Compare commits

...

421 Commits

Author SHA1 Message Date
Enrico Ros 1eb4eeea42 2.0.4: update readme 2026-03-24 19:17:52 -07:00
Enrico Ros 5ca094111c 2.0.4: update news (removing old beam callout) 2026-03-24 19:16:04 -07:00
Enrico Ros 4ce4202750 2.0.4: update package 2026-03-24 19:03:28 -07:00
Enrico Ros 4873c0c390 Json-ld: OS 2026-03-24 15:33:56 -07:00
Enrico Ros 351a28f34f Json-ld: ALTS 2026-03-24 14:50:55 -07:00
Enrico Ros a2e99ed84f Big-AGI: descs 2026-03-24 13:05:51 -07:00
Enrico Ros 7d2a26ab66 Roll AIX 2026-03-24 12:59:59 -07:00
Enrico Ros 94268187f1 Big-AGI: Capitalize 2026-03-24 12:36:08 -07:00
Enrico Ros 5aafa98f1c README: remove expired link 2026-03-24 12:33:31 -07:00
Enrico Ros c42c34acb4 KB: adding LLM vendors 2026-03-24 11:56:28 -07:00
Enrico Ros f052963da3 Md cleanup 2026-03-24 11:53:01 -07:00
Enrico Ros 07fa93609d CC: allow head|tail 2026-03-24 11:38:29 -07:00
Enrico Ros cbef9e5a57 BlockPartPlaceholder: slight render change 2026-03-23 18:59:10 -07:00
Enrico Ros 0b342339d4 AIX/Fragments: preserve placeholder location 2026-03-23 18:59:06 -07:00
Enrico Ros 9de3d5a26f AIX: Anthropic: parser: bits 2026-03-23 18:58:58 -07:00
Enrico Ros 78878076c2 errorUtils: add convenience function for proper signal abort() 2026-03-23 17:55:06 -07:00
Enrico Ros 65cca958a6 AIX: Transmitter: show dialect 2026-03-23 17:51:59 -07:00
Enrico Ros 19263f8494 AIX: CG Executor: Continuation ephemeral notice. #1010 2026-03-23 17:28:27 -07:00
Enrico Ros 5f71cbed47 AIX: CG Executor: Continuation framework for Anthropic. #1010, #1005 2026-03-23 17:28:27 -07:00
Enrico Ros fe93a66d3b AIX: CG Executor: rename to operation retry signal 2026-03-23 17:27:45 -07:00
Enrico Ros aa3b451e00 AIX: CG Executor: slight rename 2026-03-23 17:27:45 -07:00
Enrico Ros ca245bf8b8 AIX: Retriers: cleanup name 2026-03-23 17:27:45 -07:00
Enrico Ros 9868068cd6 AIX: Anthropic: disable the fix for reusing blocks (seems to have been fixed upstream now) 2026-03-23 17:27:37 -07:00
Enrico Ros 5fd27629d0 idUtils: safer fallback for browsers not having the crypto function (shall NEVER happen, but people may deploy on HTTP connections). Fixes #1034 2026-03-23 13:47:29 -07:00
Enrico Ros 4bfc7636c9 Beam: Merge: perform merges discarding the reasoning fragments if the policy says so. Fixes #1042 2026-03-23 13:36:58 -07:00
Enrico Ros 305a7784ee ChatThinkingPolicy: backport. #1042 2026-03-23 13:15:07 -07:00
Enrico Ros 87ecc11661 Allow for 2 Gemini vendors. Fixes #1045 2026-03-23 12:36:12 -07:00
Enrico Ros 0faf5d5957 Roll AIX 2026-03-21 19:51:58 -07:00
Enrico Ros 55d7ebd804 AIX/LLMS: Anthropic: Dynamic Web Filtering 2026-03-21 19:51:30 -07:00
Enrico Ros 842b5b96c2 AIX: Anthropic: parser: cleanup 2026-03-21 18:53:48 -07:00
Enrico Ros b07fc759c2 AIX: Anthropic: wires: update with new API features and tools
- tools allowed callers for client and server
- all tool definitions common options
- new code_execution, web_fetch, web_search tools
- top-level cache_control
- thinking with disabled summaries for speed
- message updates with container variants
- fix tool_search_tool results
2026-03-21 18:53:48 -07:00
Enrico Ros 0afa70aaab System Theme: partially revert c8a33a06 to keep the default to the light mode 2026-03-21 16:14:32 -07:00
Enrico Ros c2cf93bf1a Events: remove dead code 2026-03-21 16:12:13 -07:00
Enrico Ros 88639b8b57 AttachmentSources: raise popups 2026-03-21 16:12:13 -07:00
Enrico Ros bfecc63d0d CC: allow select eslint tsc 2026-03-21 16:12:13 -07:00
Enrico Ros 20bea327e4 AIX: Anthropic: streaming FC parser edge case 2026-03-21 16:12:13 -07:00
Enrico Ros 1e5c26b490 AIX: Anthropic: fix double newline elision post start 2026-03-21 16:12:13 -07:00
Enrico Ros d9183c9658 LLMs: xAI: add Grok 4.20 models, including multi-agent 2026-03-21 16:12:13 -07:00
Enrico Ros 3ecbbc3b70 LLMs: OpenAI: sweep align (add images support on select models) 2026-03-21 16:12:13 -07:00
Enrico Ros 1c1d21eed7 Sweep: update OpenAI params (more image supports) 2026-03-21 16:12:13 -07:00
Enrico Ros 6129971bb2 LLMs: OpenAI: add 5.4 mini/nano 2026-03-21 16:12:13 -07:00
Enrico Ros 8a3d75f077 Merge pull request #1033
feat(ui): add system theme mode for dark mode controls
2026-03-21 16:11:56 -07:00
Enrico Ros 9c249b513f Merge pull request #1041 from dLo999/fix/issue-1037-export-filename-local-time
fix: use local time for flash backup export filename (#1037)
2026-03-21 15:48:20 -07:00
Dustin 04d3fe6e99 fix: use local time for flash backup export filename (#1037)
Replace inline toISOString() with prettyTimestampForFilenames(false)
to match the other two export options that already use local time.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-20 08:04:50 -07:00
Enrico Ros ea7283b96e Merge pull request #1028 from enricoros/dependabot/github_actions/actions/download-artifact-8.0.1
chore(deps): bump actions/download-artifact from 8.0.0 to 8.0.1
2026-03-18 22:24:20 -07:00
Enrico Ros 295fc111c4 Expander: update 2026-03-18 02:33:36 -07:00
Enrico Ros 58d73d5d81 ModelsList: show Code designation as well. Fixes #1039 2026-03-17 22:07:43 -07:00
Enrico Ros fd8ce2e99a model.domains.registry: do not include a model name. Fixes #1038 2026-03-17 22:07:43 -07:00
blacksuan19 c8a33a06fa feat(ui): add system theme mode for dark mode controls
- default Joy color scheme to system
- cycle theme control through light, dark, and system modes
- update labels and icons to reflect the active theme preference

Signed-off-by: blacksuan19 <abubakaryagob@gmail.com>
2026-03-15 20:18:51 -05:00
Enrico Ros 874be92a56 ChatDrawer: include current chat, if missing 2026-03-14 16:00:48 -07:00
Enrico Ros 6bdb01e3c5 BlockOpOptions: allow spaces after the bold 2026-03-14 14:47:41 -07:00
dependabot[bot] ba03ab3aa8 chore(deps): bump actions/download-artifact from 8.0.0 to 8.0.1
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 8.0.0 to 8.0.1.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3...3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-14 11:32:12 +00:00
Enrico Ros 3d554e513d PostHog: more proper way to disable /flags refresh 2026-03-14 00:14:56 -07:00
Enrico Ros e516b9dae9 PostHog: we don't use Feature Flags - stop them 2026-03-14 00:06:32 -07:00
Enrico Ros 281d5a611e BlockOpOptions: support numbered lists 2026-03-13 14:10:52 -07:00
Enrico Ros 03eec23efe BlockOpOptions: supports bold options 2026-03-13 14:02:31 -07:00
Enrico Ros e3d01f6615 Reverting 61a60c5b: "Markdown: bundle in main chunk instead of lazy-loading" because of bundle size (for now) 2026-03-13 13:49:48 -07:00
Enrico Ros 99e15333cb Roll posthog again 2026-03-13 13:47:07 -07:00
Enrico Ros 5efd16c060 LLMs: LocalAI/Ollama/LMStudio: always allow CSF 2026-03-13 12:58:30 -07:00
Enrico Ros b4a6c80d8c Composer: correct browsing flag 2026-03-13 12:37:31 -07:00
Enrico Ros 7991920f08 Attachments: show disabled 2026-03-13 12:37:17 -07:00
Enrico Ros a113b8223b Roll deps 2026-03-13 12:25:24 -07:00
Enrico Ros 7bb720a903 Beam: Fusion: fix stop/stage 2026-03-13 04:00:55 -07:00
Enrico Ros 515de2679e InlineTextarea: size support 2026-03-13 01:57:59 -07:00
Enrico Ros 38caacf816 Expander component, externally controllable 2026-03-13 00:47:30 -07:00
Enrico Ros 676b0537e6 ChatMessage: chat/words count 2026-03-12 23:15:56 -07:00
Enrico Ros a24341cda6 Sel highlighter: export type 2026-03-12 23:15:54 -07:00
Enrico Ros d937bc246a AppChat: filter by open beam (support) 2026-03-12 21:45:40 -07:00
Enrico Ros 5d2543131a selHighlighter: cut also copies 2026-03-12 21:42:54 -07:00
Enrico Ros ca5d6872b5 clipboardUtils: improve dom copy 2026-03-12 21:42:51 -07:00
Enrico Ros a97ce26072 Replace PhTreeStructure for diagrams 2026-03-12 19:55:29 -07:00
Enrico Ros c698f78f92 FormRadioControl: fix hierarchy 2026-03-12 17:50:56 -07:00
Enrico Ros 77782a63eb Radio Controls: support tooltips 2026-03-12 16:35:56 -07:00
Enrico Ros 41e1e44ef0 TooltipOutlined: support size 2026-03-12 16:35:54 -07:00
Enrico Ros 7b1fc56320 LLMs: Deepseek: misc comment 2026-03-12 15:03:06 -07:00
Enrico Ros c0ed41a529 llms.parameters: find Spec and TS fix 2026-03-12 15:03:06 -07:00
Enrico Ros ba47fe1cfe AttachmentSources: strings again 2026-03-12 04:10:05 -07:00
Enrico Ros f1356d8fdc AttachmentSources: optimize RichMenuItem 2026-03-12 04:10:05 -07:00
Enrico Ros 7a899c538f Sources: bits 2026-03-12 01:28:57 -07:00
Enrico Ros 3daac973b1 AttachmentSources: tooltips on live 2026-03-11 15:17:53 -07:00
Enrico Ros b0ec5f7459 Attachments: add live types 2026-03-10 23:12:36 -07:00
Enrico Ros 71d6868512 AttachmentSources: bits 2026-03-10 23:12:36 -07:00
Enrico Ros 605bb83eb3 Components: add MediaStreamPreview 2026-03-10 23:12:36 -07:00
Enrico Ros 3092e02ce9 DBlobs: allow attachment image on destination scope (rather than moving it later) 2026-03-10 23:12:36 -07:00
Enrico Ros 5d82374975 DBlobs: GC: debug option 2026-03-10 23:12:36 -07:00
Enrico Ros ab4d63e596 screenCaptureUtils: export stream 2026-03-10 17:16:16 -07:00
Enrico Ros f800bb8dae CameraCaptureModal: open with options 2026-03-10 17:16:16 -07:00
Enrico Ros 18862c0ff4 Fragments: set origin Id in place 2026-03-10 11:32:10 -07:00
Enrico Ros 3765e8c69e Fragments: set origin Id 2026-03-10 11:28:58 -07:00
Enrico Ros 70d54a9aa3 Labs: option to skip image compression. Fixes #1024 2026-03-10 01:24:24 -07:00
Enrico Ros 50c6ee69af FormSwitchControl: pass through tooltipWarning 2026-03-10 01:05:49 -07:00
Enrico Ros dd2532e269 AttachmentSources: allow external menu button 2026-03-10 00:42:16 -07:00
Enrico Ros 16a54b3452 Audio: catch low-level errors 2026-03-10 00:08:21 -07:00
Enrico Ros 8373c1c785 AudioPlayer: make them cancelable & renames 2026-03-09 23:37:14 -07:00
Enrico Ros 39beda5519 revert AudioPlayer reason changes 2026-03-09 22:45:10 -07:00
Enrico Ros c7d1eae327 Speex: voice url preview with cancelation 2026-03-09 22:33:57 -07:00
Enrico Ros ec81e2ff5b AudioPlayer: pre-open 2026-03-09 22:33:57 -07:00
Enrico Ros 697090b695 AIX: Reassembler: audio player 2026-03-09 22:13:36 -07:00
Enrico Ros 8680fcc3db Image rendering: view on click 2026-03-09 21:30:59 -07:00
Enrico Ros 233037edd2 RenderImageRefDBlob: only regen if prompt is present 2026-03-09 21:29:38 -07:00
Enrico Ros 81c3251c6e AIX: Gemini: small note 2026-03-09 21:29:35 -07:00
Enrico Ros dc0fe7f4ca Beam Briefinx/Speex: use speakText with the rpc audio hint 2026-03-09 17:08:47 -07:00
Enrico Ros 2c9c0f2e0b Merge pull request #1019 from enricoros/dependabot/github_actions/docker/login-action-4.0.0
chore(deps): bump docker/login-action from 3.7.0 to 4.0.0
2026-03-09 01:20:51 -07:00
Enrico Ros 9c3fb9aadb Merge pull request #1018 from enricoros/dependabot/github_actions/docker/build-push-action-7.0.0
chore(deps): bump docker/build-push-action from 6.19.2 to 7.0.0
2026-03-09 01:20:43 -07:00
Enrico Ros de37ac2c51 Merge pull request #1017 from enricoros/dependabot/github_actions/docker/metadata-action-6.0.0
chore(deps): bump docker/metadata-action from 5.10.0 to 6.0.0
2026-03-09 01:20:35 -07:00
Enrico Ros d6b57702bd Merge pull request #1016 from enricoros/dependabot/github_actions/docker/setup-buildx-action-4.0.0
chore(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
2026-03-09 01:20:25 -07:00
dependabot[bot] d94642c29f chore(deps): bump docker/login-action from 3.7.0 to 4.0.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.7.0 to 4.0.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/c94ce9fb468520275223c153574b00df6fe4bcc9...b45d80f862d83dbcd57f89517bcf500b2ab88fb2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-07 11:32:35 +00:00
dependabot[bot] 75378ea88f chore(deps): bump docker/build-push-action from 6.19.2 to 7.0.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.19.2 to 7.0.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/10e90e3645eae34f1e60eeb005ba3a3d33f178e8...d08e5c354a6adb9ed34480a06d141179aa583294)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-07 11:32:31 +00:00
dependabot[bot] d539c1369b chore(deps): bump docker/metadata-action from 5.10.0 to 6.0.0
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 5.10.0 to 6.0.0.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Commits](https://github.com/docker/metadata-action/compare/c299e40c65443455700f0fdfc63efafe5b349051...030e881283bb7a6894de51c315a6bfe6a94e05cf)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-07 11:32:27 +00:00
dependabot[bot] 555ee6f333 chore(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.12.0 to 4.0.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/8d2750c68a42422c14e847fe6c8ac0403b4cbd6f...4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-03-07 11:32:23 +00:00
Enrico Ros ad989d8a0b CameraCaptureModal: improve multi-attach 2026-03-06 19:11:50 -08:00
Enrico Ros aae7af4713 useCameraCapture: vastly improve state, flow, remove race conditions, add detach 2026-03-06 17:53:01 -08:00
Enrico Ros df0a204767 CameraCaptureModal: full promised control 2026-03-06 16:36:34 -08:00
Enrico Ros 5cdefc7b5e AttachmentSources: live streams support 2026-03-06 15:04:18 -08:00
Enrico Ros c1bdb1fc61 Merge pull request #1014 from enricoros/claude/issue-1013-20260306-1801
feat: add Ctrl+( / Ctrl+) shortcuts to toggle left drawer and right panel
2026-03-06 10:13:33 -08:00
claude[bot] dde22a080b feat: add Ctrl+( / Ctrl+) shortcuts to toggle left drawer and right panel
Add keyboard shortcuts for toggling left drawer (Ctrl+() and right panel
(Ctrl+)). Also adds a reusable `skipIfInput` flag on ShortcutObject that
skips shortcuts when a text input, textarea, or contenteditable element
(or child thereof) is focused - not applied to these layout shortcuts but
available for future use.

Co-authored-by: Enrico Ros <enricoros@users.noreply.github.com>
2026-03-06 18:05:06 +00:00
Enrico Ros 7f5ff30f97 Speex: unmarkdown 2026-03-05 19:16:54 -08:00
Enrico Ros 38e1708e91 AIX: Gemini: Parser: improve finish reason reporting 2026-03-05 18:36:12 -08:00
Enrico Ros fe4e755304 AIX: Dispatch: nit 2026-03-05 18:36:09 -08:00
Enrico Ros 67f1c87d3a AIX: OpenAI Responses: infer image type 2026-03-05 18:36:09 -08:00
Enrico Ros eef88ffae2 AIX: OpenAI Responses: Queued 2026-03-05 18:36:08 -08:00
Enrico Ros 319965c55c FormChipGroupControl: must stretch 2026-03-05 18:36:05 -08:00
Enrico Ros 1f309b5c81 Speex: future northbridge nav 2026-03-05 16:55:58 -08:00
Enrico Ros 5273352ae9 Speex: Engine: pass labels 2026-03-05 16:45:59 -08:00
Enrico Ros 5a48256d77 AIX: OpenAI: small fixes 2026-03-05 16:45:46 -08:00
Enrico Ros 1d41294c1d LLMs/Sweep: OpenAI GPT-5.4, -Pro, and non-thinking (with temperature control) 2026-03-05 16:27:55 -08:00
Enrico Ros ff76229706 LLMs: Bedrock: respell 2026-03-04 22:13:07 -08:00
Enrico Ros b0f4b30ebe ChipGroupControl: single chip multiple options 2026-03-04 16:31:31 -08:00
Enrico Ros 7be8f6c6a7 OptimaPanelGroupedList: absorb collapsed pad 2026-03-04 16:28:28 -08:00
Enrico Ros b003993961 No mdashes in comments 2026-03-04 14:29:22 -08:00
Enrico Ros 4878f361b5 CLAUDE.md: no emdashes 2026-03-04 14:27:56 -08:00
Enrico Ros a82a3899c5 Beam: strip reasoning traces per user's thinking policy. Fixes #1003 2026-03-04 13:28:05 -08:00
Enrico Ros ff0685e6e8 Nit 2026-03-04 13:19:24 -08:00
Enrico Ros a597489526 Merge pull request #1011 from Blacksuan19/fix-sherpa-ssr
store-logic-sherpa: guard usage count increment against SSR
2026-03-04 13:03:07 -08:00
Enrico Ros 32e8890f62 LLMs: Sync Sweep params 2026-03-04 12:44:50 -08:00
Enrico Ros 211a43eab4 Parameters sweep: 2026-03-04.2 2026-03-04 12:42:10 -08:00
Enrico Ros 8c28df77cc Parameters sweep: resorting 2026-03-04 12:23:22 -08:00
Enrico Ros 4e82a12899 AIX: Gemini: Disable URL Context for Nano Banana models 2026-03-04 12:20:04 -08:00
Enrico Ros 8d0e0dea89 Parameters sweep: 2026-03-04 2026-03-04 12:09:13 -08:00
Enrico Ros 5703f23b99 Roll AIX 2026-03-04 11:37:46 -08:00
Enrico Ros 196d08b4fd CLAUDE.md: try stopping compound 2026-03-04 11:37:38 -08:00
Enrico Ros 2f9738f6fb LLMs: Gemini: Nano Banana 2 (aka 3.1 flash image) and 3.1 Flash-Lite 2026-03-04 11:34:51 -08:00
Enrico Ros d4db225d1e LLMs: OpenAI: remove shut down 2026-03-04 11:30:10 -08:00
Enrico Ros efff785713 LLMs: OpenAI: 5.3 Instant 2026-03-04 11:29:40 -08:00
Enrico Ros 234accad3f LLMs: ANT: Sync retired 2026-03-04 11:15:57 -08:00
blacksuan19 588b4b2c64 store-logic-sherpa: guard usage count increment against SSR
The useLogicSherpaStore.setState() call at module level ran during
server-side rendering where localStorage is unavailable, causing a
hydration crash. Wrap with isBrowser so it only executes in the
browser context.

Signed-off-by: blacksuan19 <abubakaryagob@gmail.com>
2026-03-04 12:49:46 -06:00
Enrico Ros 7de34d8478 InReferenceToBubble: fix h-compression 2026-03-03 23:46:42 -08:00
Enrico Ros 741980adfc Allow new attachments for previous messages in a chat. Fixes #945 2026-03-03 20:18:07 -08:00
Enrico Ros 2690380bfd ChatMessage: support changing attachments in messages. #945 2026-03-03 18:43:12 -08:00
Enrico Ros b482b07335 Composer: use the standard Attachment handlers 2026-03-03 18:43:06 -08:00
Enrico Ros 03b4c6f941 Attachments: standard handlers 2026-03-03 18:43:06 -08:00
Enrico Ros b7fd1b13de Remove setLabsEnhanceCodeLiveFile 2026-03-03 10:47:02 -08:00
Enrico Ros 10a6f2d3c7 Rename getLabsHighPerformance 2026-03-03 10:03:21 -08:00
Enrico Ros ba149d3b43 Remove labsEnhanceCodeBlocks - always on now 2026-03-03 10:03:08 -08:00
Enrico Ros f175d071c4 Remove labsShowCosts - always on now 2026-03-03 10:00:16 -08:00
Enrico Ros 874d0bca05 Attachments: by default use the Menu on desktop, not the inlines 2026-03-03 09:53:50 -08:00
Enrico Ros 81ad0328b7 Remove labsAttachScreenCapture/labsCameraDesktop - always on now 2026-03-03 09:53:50 -08:00
Enrico Ros 5198fa66cf Attachments: consolidated/unified menu 2026-03-03 09:53:50 -08:00
Enrico Ros a807bdd6b6 InlineTextArea: remove the alt key - only usage 2026-03-02 21:18:05 -08:00
Enrico Ros 2b209bb679 LLMParametersEditor: improve config. Fixes #1004 2026-03-02 20:04:02 -08:00
Enrico Ros 2f018dce9f AIX: do not set a default for max anymore - as the underlying APIs may change and it's a user param now. #1004 2026-03-02 20:03:33 -08:00
Enrico Ros 2eb77f532a FormNumberInput: add number|undefined input 2026-03-02 20:03:30 -08:00
Enrico Ros 69063bb544 ExpanderControlledBox - allow compression (issue introduced by f21fe411 on the ChatPanelModelParameters with log model names) 2026-03-02 20:03:30 -08:00
Enrico Ros 7fad2f8790 LLMs/AIX: Parameters: Anthropic: max Fetch/Search depth. #1004 2026-03-02 14:58:46 -08:00
Enrico Ros 620275a1f5 Attachments: move GDrive/Web sources 2026-03-02 14:36:55 -08:00
Enrico Ros ba583fc448 Attachments: move buttons 2026-03-02 14:28:29 -08:00
Enrico Ros 0b96870644 Camera: share and rationalize use 2026-03-02 13:40:25 -08:00
Enrico Ros eb2b682eb5 Attachments: centralize components, make composable 2026-03-02 11:59:52 -08:00
Enrico Ros 577b52120a Update #984 2026-03-01 20:33:07 -08:00
Enrico Ros b69ae3edae Beam: raise max rays to 24, add 16 to presets. Fixes #1001 2026-03-01 20:30:43 -08:00
Enrico Ros 624b177996 Merge pull request #999 from enricoros/dependabot/github_actions/actions/upload-artifact-7.0.0
chore(deps): bump actions/upload-artifact from 6.0.0 to 7.0.0
2026-03-01 20:30:07 -08:00
Enrico Ros bbf01b49c0 Merge pull request #998 from enricoros/dependabot/github_actions/actions/download-artifact-8.0.0
chore(deps): bump actions/download-artifact from 7.0.0 to 8.0.0
2026-03-01 20:29:42 -08:00
Enrico Ros 86b2d8ae71 LLMs: Anthropic PowerPoint -> PPT 2026-03-01 15:41:07 -08:00
dependabot[bot] d18af42d43 chore(deps): bump actions/upload-artifact from 6.0.0 to 7.0.0
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 6.0.0 to 7.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/b7c566a772e6b6bfb58ed0dc250532a479d7789f...bbbca2ddaa5d8feaa63e36b76fdaad77386f024f)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-28 11:32:12 +00:00
dependabot[bot] 4f6e110bf9 chore(deps): bump actions/download-artifact from 7.0.0 to 8.0.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 7.0.0 to 8.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/37930b1c2abaa49bbe596cd826c3c89aef350131...70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 8.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-28 11:32:08 +00:00
Enrico Ros 62cf334e2f AIX: Z.ai: handle their network errors 2026-02-28 02:12:02 -08:00
Enrico Ros 8bd6fd40fd Focus-mode for mobile 2026-02-28 01:59:16 -08:00
Enrico Ros f21fe41188 ExpanderControlledBox - fix lagging of content vs parent reveal
Instead of clipping on the Collapsee box, we use it as the FR target
with a minHeight of 0, have the parent take the correct height, and clip everything to the parent.
2026-02-28 01:29:08 -08:00
Enrico Ros cfff23164c Claude.md: CSF 2026-02-26 14:12:13 -08:00
Enrico Ros a8d9233dc4 Claude.md: improve in structure 2026-02-26 14:03:54 -08:00
Enrico Ros 9c973efbbf LLMs: Bedrock: support Converse API for Nova models 2026-02-26 03:39:44 -08:00
Enrico Ros e2c4255920 LLMs: Bedrock: hide inputs on prio 2026-02-26 02:52:47 -08:00
Enrico Ros e01b9ff6a9 LLMs: Bedrock: improve sort 2026-02-26 02:52:22 -08:00
Enrico Ros 0084a635f1 AIX: Debugger: fix URL display 2026-02-26 02:18:24 -08:00
Enrico Ros 0cd20b8d48 Update claude.md 2026-02-26 00:13:42 -08:00
Enrico Ros 7c4094b4c2 OpenAI Service config: rename provider when selecting the host 2026-02-25 23:52:44 -08:00
Enrico Ros acd8430d51 Models List: show free only 2026-02-25 23:50:15 -08:00
Enrico Ros 6ae2195d10 LLMs: add LLMAPI via OpenAI-Compatible and custom host. Fixes #993, Fixes #989. 2026-02-25 23:38:43 -08:00
Enrico Ros 6bcc0dd177 LLMs: Bedrock: auto-interfaces from model enumeration 2026-02-25 21:27:55 -08:00
Enrico Ros 2de42c2010 AIX/LLMs: Bedrock: support Mantle (OpenAI-compatible) including model enumeration. Fixes #965 2026-02-25 21:11:27 -08:00
Enrico Ros a231ccb492 LLMs: remove IF_OAI_Complete 2026-02-25 18:27:06 -08:00
Enrico Ros 35875d5837 AIX/LLMs: Bedrock: default to us-east-1 2026-02-25 17:13:59 -08:00
Enrico Ros c36ff1edfa AIX/LLMs: Bedrock: support Bedrock Long-term API Keys 2026-02-25 17:13:59 -08:00
Enrico Ros ed35d5b541 tRPC fetchers: improve local debug output 2026-02-25 17:13:59 -08:00
Enrico Ros 2b2a2d84a9 LLMs: Bedrock: report listModels issues up 2026-02-25 17:13:59 -08:00
Enrico Ros a645a4066c docs: bit 2026-02-25 17:13:58 -08:00
Enrico Ros 508a3beff7 CC: patch cd chaining 2026-02-25 14:26:37 -08:00
Enrico Ros df0c133056 AIX: OpenAI: fix return code 2026-02-24 23:25:06 -08:00
Enrico Ros 2da3942ce2 LLMs: OpenAI: Update models 2026-02-24 23:24:32 -08:00
Enrico Ros 26547dec0d Docs: update 2026-02-24 22:56:00 -08:00
Enrico Ros aa4804bdd5 Docs: update for bedrock 2026-02-24 22:46:00 -08:00
Enrico Ros eafa1f02cb AIX: Bedrock: update msg 2026-02-24 21:53:17 -08:00
Enrico Ros 836533a8c2 AIX: Bedrock: update icon 2026-02-24 21:49:30 -08:00
Enrico Ros cfeb134c20 AIX: Bedrock: disclaimer about unsupported functionality 2026-02-24 21:44:01 -08:00
Enrico Ros 35798b5568 AIX: Bedrock: bolster transformer 2026-02-24 21:43:47 -08:00
Enrico Ros 7a250f0848 AIX: Bedrock: chat generate. #965, #170, #980 2026-02-24 21:05:51 -08:00
Enrico Ros 0a4e6d5142 AIX: Anthropic: reuse model to beta 2026-02-24 20:45:22 -08:00
Enrico Ros f4254a5ffb LLMs: Bedrock: list models. #965 2026-02-24 20:35:45 -08:00
Enrico Ros 7b7718e578 LLMs: Anthropic: review headers 2026-02-24 20:35:39 -08:00
Enrico Ros c261b2b156 Bedrock: signing utility (client and server compatible) 2026-02-24 17:44:24 -08:00
Enrico Ros 237065553e AIX: Anthropic: make beta headers reusable 2026-02-24 17:44:24 -08:00
Enrico Ros 6116af42df AIX: make createChatGenerateDispatch async 2026-02-24 17:44:24 -08:00
Enrico Ros 08b28cfde8 LLMs: IModelVendor: slight csf mention 2026-02-24 17:26:00 -08:00
Enrico Ros b019655518 LLMs: listModels: update dispatch 2026-02-24 17:14:40 -08:00
Enrico Ros 1264a2ebaf Icons: crab svg 2026-02-24 16:32:37 -08:00
Enrico Ros 1960b4f618 Wire: bits 2026-02-24 16:32:14 -08:00
Enrico Ros c75fbd89e6 Shortcuts: new symbols 2026-02-23 22:38:55 -08:00
Enrico Ros 3e67201665 Shortcuts: new modal 2026-02-23 22:34:52 -08:00
Enrico Ros b60e2bae65 LLM Params: bits2 2026-02-23 21:02:31 -08:00
Enrico Ros 19c7fa4285 LLM Params: bits 2026-02-23 20:58:56 -08:00
Enrico Ros f450dd3eac Models List: improve looks, content 2026-02-23 20:58:41 -08:00
Enrico Ros d366cdd542 BlockPartModelAux: render markdown and buttons appear at the end 2026-02-23 20:24:12 -08:00
Enrico Ros c1ba83fddb ViewDocPartModal/RenderCodePanelFrame: fix properties render on mobile (ellipsize) 2026-02-23 20:12:33 -08:00
Enrico Ros 617d6038b1 LLMs: LocalAI: restore n+1 render 2026-02-23 20:08:53 -08:00
Enrico Ros 0abee15c30 LLMs: LocalAI: safer parsing 2026-02-23 19:57:34 -08:00
Enrico Ros 1aa2e68e4a Merge pull request #982 from enricoros/dependabot/github_actions/docker/build-push-action-6.19.2
chore(deps): bump docker/build-push-action from 6.18.0 to 6.19.2
2026-02-23 15:49:53 -08:00
Enrico Ros cd692218ce Bits 2026-02-23 15:00:15 -08:00
Enrico Ros a5b7191185 DEV Mode: fully remove 2026-02-23 15:00:15 -08:00
Enrico Ros 56baba4cae DEV Mode: remove hardcoded leftover 2026-02-23 15:00:15 -08:00
Enrico Ros b696447be4 DEV Mode: graduated streaming 2026-02-23 15:00:15 -08:00
Enrico Ros e1ef2e72d7 ModelsList: Modal Submenus + DC-all config 2026-02-23 15:00:14 -08:00
Enrico Ros e85905e63c AIX Inspector: option to disable streaming for the current session. #980 2026-02-23 15:00:14 -08:00
Enrico Ros c6208a2900 CSF: global DC status 2026-02-23 12:14:04 -08:00
Enrico Ros 01299e4f19 CloseablePopup: workaround to keep the popup 2026-02-23 12:14:04 -08:00
Enrico Ros 1771575641 LLMs: services: type fix 2026-02-23 12:14:03 -08:00
Enrico Ros 88a796fd87 Tools: sweep: sync openai 2026-02-19 19:00:36 -08:00
Enrico Ros e403467d6d LLMs: Gemini 3.1 Pro. Fixes #987 2026-02-19 19:00:06 -08:00
Enrico Ros 1914a2a8a3 Tools: sweep: add sweeps for oai-thinking-depentent-temp 2026-02-18 17:19:37 -08:00
Enrico Ros 683892afef Tools: sweep: disable the no-temperature fix, as by default we don't set it, and it prevents our sweep with it 2026-02-18 17:19:37 -08:00
Enrico Ros 470f8aab70 LLMs: Together updates 2026-02-18 17:19:36 -08:00
Enrico Ros 7a561d6b42 LLMs: OpenPipe updates 2026-02-18 17:19:36 -08:00
Enrico Ros affff0df4a LLMs: Groq updates 2026-02-18 17:19:36 -08:00
Enrico Ros f5a81bdc94 LLMs: Gemini small updates 2026-02-18 17:19:36 -08:00
Enrico Ros 818ed53b53 LLMs: Sweep Alignment 2026-02-18 17:19:36 -08:00
Enrico Ros 12c875f4e3 AIX: OpenAI responses: fix for the older Deep Research models 2026-02-18 17:19:33 -08:00
Enrico Ros 6ff715c0f0 AIX: aixChatGenerateContent_DMessage_FromConversation: classify an errored outcome when the message is interrupted 2026-02-18 17:19:31 -08:00
Enrico Ros c4a89822d8 LLMs: typo 2026-02-18 15:51:18 -08:00
Enrico Ros a8a917f786 Roll AIX 2026-02-18 15:35:44 -08:00
Enrico Ros 3aa9a71a4b LLM Effort: split definition for UI namings with unified backend. #940 2026-02-18 14:55:00 -08:00
Enrico Ros 3758612ed6 LLMs: improve (Registry's) initialValue 2026-02-17 23:49:30 -08:00
Enrico Ros b71a4265f8 LLMs: dissolve requiredFallback 2026-02-17 23:07:55 -08:00
Enrico Ros 870cdb67cf Tools: sweep: update script and results 2026-02-17 22:21:03 -08:00
Enrico Ros 902c9dc3f4 AIX/LLMs: support search disablement client/server correctly 2026-02-17 22:20:59 -08:00
Enrico Ros 0d1db0a360 AIX: OpenAI Responses: remove forcing of no temperature, LLM_IF_HOTFIX_NoTemperature works well 2026-02-17 22:20:44 -08:00
Enrico Ros ddd784f041 LLM Effort: client-side domain check 2026-02-17 20:09:40 -08:00
Enrico Ros 830d45c06d LLM Effort: server-side dev check 2026-02-17 20:09:40 -08:00
Enrico Ros 6e27a31013 LLM Effort: Unified definition. #944, #940 2026-02-17 20:09:40 -08:00
Enrico Ros ed87595e17 LLMs: Anthropic: bit 2026-02-17 19:17:51 -08:00
Enrico Ros da01b59ae3 AIX: Anthropic: Effort is GA - no header needed 2026-02-17 19:17:51 -08:00
Enrico Ros 79046b808b AIX: Gemini: do not use alpha any longer 2026-02-17 19:17:51 -08:00
Enrico Ros 5a71153390 Custom Names: reset with warning. #970 2026-02-17 13:50:17 -08:00
Enrico Ros 94056cdf4b AutoBlocks: #983 option which does not improve things 2026-02-17 13:23:55 -08:00
Enrico Ros 41cb35c6b9 Custom Names: lingering. #970 2026-02-17 12:42:45 -08:00
Enrico Ros e133fc81f6 Custom Names: preserve. #970 2026-02-17 12:16:26 -08:00
Enrico Ros 418c2e496c LLMs: Anthropic: dMessageUtils 2026-02-17 12:01:46 -08:00
Enrico Ros 3690202b38 LLMs: Anthropic: Sonnet 4.6 2026-02-17 11:51:46 -08:00
Enrico Ros f069c2e5ab Fix: safe iteration over navItems.links in mobile nav
Fixes #984
2026-02-17 11:06:44 -08:00
dependabot[bot] 97bf6ca276 chore(deps): bump docker/build-push-action from 6.18.0 to 6.19.2
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.18.0 to 6.19.2.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/263435318d21b8e681c14492fe198d362a7d2c83...10e90e3645eae34f1e60eeb005ba3a3d33f178e8)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 6.19.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-14 11:32:12 +00:00
Enrico Ros a1390b152f CC: .gitignore 2026-02-13 18:55:05 -08:00
Enrico Ros 4e8c7d46f6 Cleanup: remove ems 2026-02-13 18:44:35 -08:00
Enrico Ros 02944d2015 AIX: Add no-op method for setting provider infra label 2026-02-13 18:41:49 -08:00
Enrico Ros 58726f0425 AIX: OpenRouter: provider infra label 2026-02-13 17:30:26 -08:00
Enrico Ros 85f796fb1d AIX: ContentReassembler: note 2026-02-13 16:46:47 -08:00
Enrico Ros 311a9c2bf2 Roll AIX 2026-02-13 15:58:27 -08:00
Enrico Ros 6768917d44 Bits 2026-02-13 15:56:52 -08:00
Enrico Ros 7beb412738 AIX: Report broken messages. #980 2026-02-13 15:56:50 -08:00
Enrico Ros cf724625cc AIX: CSF: emulate tRPC's client-side abort as a response to the abortSignal being fired. #980
This is because the exception is actually trapped locally in the deeper layers during client-side processing, which creates a particle for the abort; that particle is then never used, because the outer layer discards it without notice.
2026-02-13 15:56:49 -08:00
Enrico Ros f60b2410dd AIX: do not fake logical ends. #980 2026-02-13 15:56:07 -08:00
Enrico Ros bbdc16b06a LLMs: Together.AI: fix wire parser 2026-02-13 12:11:38 -08:00
Enrico Ros 0fa2d06725 AIX: logging: bits 2026-02-13 12:08:04 -08:00
Enrico Ros 36cdc4b55f AIX: Parser: capitalized STOP reason 2026-02-13 12:04:25 -08:00
Enrico Ros c2b4a50bfa AIX: Retriers: consolidated denylist 2026-02-13 12:02:05 -08:00
Enrico Ros 73f88d4715 AIX: OpenRouter: don't log on empty reasoning 2026-02-13 12:01:52 -08:00
Enrico Ros af919be2ac AIX: store end reason - for further debug. #980 2026-02-12 16:31:41 -08:00
Enrico Ros facffbc6c8 AIX: require clean connection ends. #980 2026-02-12 16:31:41 -08:00
Enrico Ros dd5b7cb8c2 AIX: dispatch: increase debugging vendor-initiated disconnect. #980 2026-02-12 14:12:25 -08:00
Enrico Ros 3dc61109d7 AIX: Server: debug recovered packets 2026-02-12 01:34:10 -08:00
Enrico Ros 9ef84260b0 Z.ai: no bits 2026-02-11 22:09:54 -08:00
Enrico Ros cf2df7d7f9 Z.ai: dMessageUtils 2026-02-11 22:09:27 -08:00
Enrico Ros 16a883526b Z.ai: readme 2026-02-11 17:44:33 -08:00
Enrico Ros 7b66b1a2eb Z.ai: readme 2026-02-11 17:44:22 -08:00
Enrico Ros a4adce5c79 Z.ai: AIX: fix reasoning effort 2026-02-11 17:43:23 -08:00
Enrico Ros 9e4174df53 Z.ai: AIX: fix dispatch 2026-02-11 17:36:04 -08:00
Enrico Ros b5975713a3 Z.ai: OCR does not support WebP 2026-02-11 17:26:23 -08:00
Enrico Ros 0cd04266b7 Z.ai: improve model spec 2026-02-11 17:26:23 -08:00
Enrico Ros 5cbd162454 Z.ai: Reasoning settings support 2026-02-11 17:26:23 -08:00
Enrico Ros bea1600358 AIX: OpenAI ChatCompletions: empty reasoning_content yields to non-empty content 2026-02-11 17:26:22 -08:00
Enrico Ros 6a2e201cf5 Z.ai: discovered + curated models support 2026-02-11 17:26:22 -08:00
Enrico Ros 960551933e Z.ai LLM vendor support
Note: starting from this commit, we no longer include server-side config, to stress-test the config system.
2026-02-11 17:26:22 -08:00
Enrico Ros 8b38b6416d Z.ai: icon & sprite 2026-02-11 17:26:22 -08:00
Enrico Ros fac4c39f48 Fix copying of message Sources. Fixes #977. Fixes #978. 2026-02-11 13:02:32 -08:00
Enrico Ros 4c930efbf0 Fix GC on Beams with reference collectors. 2026-02-11 12:59:36 -08:00
Enrico Ros 5a2a47cb87 AIX: Anthropic: Fast mode - unsupported message 2026-02-10 13:31:28 -08:00
Enrico Ros 4912a03250 LLMs: Anthropic: Fast mode research preview 2026-02-10 13:22:47 -08:00
Enrico Ros 3b13580613 LLMs: parameter-value-based enum price multipliers 2026-02-10 13:04:05 -08:00
Enrico Ros 95905113ac LLMs: cached isLLMChatFree_cached 2026-02-10 12:17:21 -08:00
Enrico Ros c6b34bb252 LLMs: Parameters: type guard enums 2026-02-10 11:53:31 -08:00
Enrico Ros e5387c2323 AIX: Moonshot: remove empty messages 2026-02-10 11:07:09 -08:00
Enrico Ros d3b4447669 CLAUDE.md: update 2026-02-10 01:51:56 -08:00
Enrico Ros d5c5eac9ec CC: allow git mv 2026-02-10 01:51:56 -08:00
Enrico Ros 49b61495d0 LLMs: Vendor Settings: unbreak hide advanced despite initially in CSF. Fixes #969 2026-02-09 23:30:41 -08:00
Enrico Ros e8298e9d30 workflows: CC: enable auth 2026-02-09 13:41:23 -08:00
Enrico Ros b29681e1f7 workflows: CC: cleanups 2026-02-09 13:30:35 -08:00
Enrico Ros 1e0b9a2f0c workflows: CC: do not trigger triage on assignment 2026-02-09 13:08:45 -08:00
Enrico Ros 442b8e95b1 workflows: CC: lock in the dm 2026-02-09 12:53:10 -08:00
Enrico Ros 27090d9e28 -Spaces 2026-02-09 05:41:55 -08:00
Enrico Ros c37b4fa076 Chat: option to discard all reasoning traces 2026-02-09 04:51:42 -08:00
Enrico Ros 83161bbe98 AIX: Anthropic: Parser: hotfix for 4.6 to elide the double-newline at the beginning when present 2026-02-09 04:50:14 -08:00
Enrico Ros 4b166120e6 AIX: Anthropic: Dispatch: hotFix for 4.6 interleaved reasoning blocks back-to-back 2026-02-09 04:23:28 -08:00
Enrico Ros 04494ac752 AIX: Anthropic: Dispatch: hotFix for empty text blocks produced by 4.6 - incoming from the Anthropic API 2026-02-09 04:23:27 -08:00
Enrico Ros 979809ddb1 AIX: Anthropic: Parser: rename hotFix 2026-02-09 04:23:26 -08:00
Enrico Ros 5d797c3339 AIX: Anthropic: warn if blocks come out of order, now that Anthropic has fixed it 2026-02-09 04:22:35 -08:00
Enrico Ros 2ff74f6b80 Wire: separate debug wire request and response 2026-02-09 04:22:33 -08:00
Enrico Ros 06b1195f9a workflows: CC: triage with workarounds: restore some 2026-02-09 01:48:51 -08:00
Enrico Ros c337b70a42 LLMs: Anthropic: copy 2026-02-09 01:40:11 -08:00
Enrico Ros 5047354892 CC: /code:review-inflight bits 2026-02-09 01:40:11 -08:00
Enrico Ros ce4e405fc6 workflows: CC: r/o triage 2026-02-09 01:40:11 -08:00
Enrico Ros 30c8d66cd1 workflows: CC: update model 2026-02-09 01:38:37 -08:00
Enrico Ros fb5c8aad29 workflows: CC: update dm 2026-02-09 01:38:19 -08:00
Enrico Ros 08d221d00f Attachments: Text: warn if empty 2026-02-08 17:31:40 -08:00
Enrico Ros af918178f6 Attachments: Markdown table conversion issue fallback 2026-02-08 17:31:40 -08:00
Enrico Ros ed19896e3c LLMs: llms.parameters: remove 'as const' 2026-02-08 17:31:39 -08:00
Enrico Ros 47ad135e4b CC: slashcommands: update-models catch-all 2026-02-08 17:27:08 -08:00
Enrico Ros 0eff7825c8 CC: slashcommands: xAI Responses API sync 2026-02-08 17:27:08 -08:00
Enrico Ros 5c8baee390 CC: /code:review-inflight 2026-02-07 13:46:01 -08:00
Enrico Ros 3f71facb49 CLAUDE.md: update 2026-02-07 13:46:01 -08:00
Enrico Ros eba42cc8f2 CLAUDE.md: dev env 2026-02-07 13:46:01 -08:00
Enrico Ros 53092cee51 CC: allow tsc, eslint 2026-02-07 13:46:01 -08:00
Enrico Ros 4bf621f128 LLMs: OpenAI GPT-5.3-Codex speculative support 2026-02-07 13:42:12 -08:00
Enrico Ros 33505dbb8e LLMs: Anthropic/OpenRouter: align behavior, align UI #962 2026-02-06 22:40:55 -08:00
Enrico Ros c81e1f144f AIX: OpenRouter: protocol bits 2026-02-06 20:56:39 -08:00
Enrico Ros ee788b967b Roll AIX 2026-02-06 20:11:10 -08:00
Enrico Ros 38ac8733f6 AIX: OpenRouter: comment on debug: too risky 2026-02-06 20:10:48 -08:00
Enrico Ros 737a20ee06 AIX: OpenRouter: enable the stricter 'require_parameters' mode. #948 2026-02-06 20:05:05 -08:00
Enrico Ros 19f48b8001 AIX: OpenRouter: wires for OR debug parameters 2026-02-06 19:51:50 -08:00
Enrico Ros 3471d6b4f5 Roll AIX 2026-02-06 19:30:49 -08:00
Enrico Ros 2dc7ba72b3 AIX/LLMs: bits 2026-02-06 19:30:18 -08:00
Enrico Ros e12279dab0 AIX: Anthropic: show the US inference setting when on 2026-02-06 19:24:04 -08:00
Enrico Ros 2e0c79cb64 LLMs: OpenRouter: also inherit the initial temperature from upstreams 2026-02-06 19:19:33 -08:00
Enrico Ros aa697edb8c AIX: Anthropic: minor API changes 2026-02-06 19:18:54 -08:00
Enrico Ros c72e3c58dd AIX: Anthropic: allow US servers 2026-02-06 19:17:01 -08:00
Enrico Ros 1de30c8bd5 AIX: Anthropic: accommodate some API changes 2026-02-06 18:52:58 -08:00
Enrico Ros 3a8eea6fb7 Roll AIX 2026-02-06 18:37:05 -08:00
Enrico Ros b7fd0bdba7 LLMs: OpenRouter: auto-inherit configurable parameters from Anthropic, Gemini and OpenAI.
Fixes #948: OpenAI-through-OR verbosity is sync'd with OpenAI models.

Fixes #893: Gemini-through-OR parameters are synchronized with Gemini models

Fixes #940: OpenAI-through-OR reasoning effort is synced with OpenAI models and much improved. We will have to still fix #944 for OpenAI levels to be fully sync'd with upstream (in progress)
2026-02-06 18:27:38 -08:00
Enrico Ros 58457cac50 LLMs: OR/Anthropic: support effort and adaptive.
Fixes #962
2026-02-06 18:27:38 -08:00
Enrico Ros 0fbacee7dc LLMs: Anthropic: editable Max effort. #962 2026-02-06 18:27:38 -08:00
Enrico Ros a498f28d14 LLMs: Anthropic: support for max effort. #962 2026-02-06 18:26:07 -08:00
Enrico Ros 5b9c6a2d0e LLMs: Anthropic: support adaptive thinking correctly. #962 2026-02-06 18:26:07 -08:00
Enrico Ros 4c7f50ab98 LLMs: Anthropic: inline thinking budget 2026-02-06 18:26:07 -08:00
Enrico Ros ef03d33bbf LLMs: Anthropic: GA skills 2026-02-06 18:26:07 -08:00
Enrico Ros 22c9fc56c0 LLMs: Opus 4.6: naming 2026-02-06 18:26:07 -08:00
Enrico Ros c952fd734f LLMs: Opus 4.6: remove forcing 2026-02-06 18:26:07 -08:00
Enrico Ros 310e99af23 LLMs: Opus 4.6: sort order, unhide 4.5 2026-02-06 18:26:07 -08:00
Enrico Ros e78446904a Docker: remove broken command directive. Fixes #964 2026-02-06 18:25:24 -08:00
Enrico Ros 760e9d8279 CC: Anthropic: update sources of info 2026-02-06 18:25:24 -08:00
Enrico Ros 61a60c5b9f Markdown: bundle in main chunk instead of lazy-loading 2026-02-06 12:41:41 -08:00
Enrico Ros 3054e1b88d Node 24: add .nvmrc, drop 26 from engines 2026-02-06 12:41:41 -08:00
Enrico Ros 6f4fabf147 Claude Opus 4.6 baseline support 2026-02-05 12:02:21 -08:00
Enrico Ros b0c791a055 Sweep: bits 2026-02-05 03:35:40 -08:00
Enrico Ros 748991249a LLMs: OpenAI: Update tooling availability across models 2026-02-05 02:36:28 -08:00
Enrico Ros 1aea7122cc Sweep: improve detection of connection issues 2026-02-05 02:35:47 -08:00
Enrico Ros 9a83b428f1 AppBreadcrumbs: auto-ellipsize 2026-02-05 01:21:46 -08:00
Enrico Ros 2cd38bc02b Sweep: update baseline with improved OpenAI chatCompletion values. remove verbosity when the only value is medium (aka, no parameter) 2026-02-05 00:44:48 -08:00
Enrico Ros e586142190 AIX: OpenAI-compatible: ChatCompletions: support verbosity for all (not just openrouter) 2026-02-05 00:07:36 -08:00
Enrico Ros a10d0dcf5d LLMs: auto-inject image output 2026-02-05 00:07:36 -08:00
Enrico Ros 6fdff488a9 Sweep: neutered values 2026-02-05 00:07:36 -08:00
Enrico Ros 8af0d78127 Sweep: adapt to the interfaces like aix.client.ts 2026-02-04 23:07:21 -08:00
Enrico Ros 177686a7fc Sweep: add option to merge models instead of wiping the file 2026-02-04 23:01:40 -08:00
Enrico Ros 09b6e47036 Sweep: fix Responses interface application 2026-02-04 21:14:27 -08:00
Enrico Ros 704187ba3e Models Modal: change visibility 2026-02-04 20:49:39 -08:00
Enrico Ros 4ea8a06503 LLMs: auto-inject web search 2026-02-04 20:49:39 -08:00
Enrico Ros 80fcc7d3e3 Security: client-dominated credential isolation for OpenAI access 2026-02-04 20:09:16 -08:00
Enrico Ros a04c62da6f LLMs: OpenAI: fix verbosity (automated). Fixes #947 2026-02-04 19:57:50 -08:00
Enrico Ros fcb518a050 Security: prevent key exfil 2026-02-04 19:43:09 -08:00
Enrico Ros a222626933 CC: sweep: small note 2026-02-04 19:31:41 -08:00
Enrico Ros a3ceade738 Security: anti-dns-spoofing anthropic 2026-02-04 19:26:57 -08:00
Enrico Ros 51d58223b4 Sweep: more succinct output 2026-02-04 19:12:50 -08:00
Enrico Ros d37a603db2 LLMs: OpenAI: Auto 0-day Responses support. Fixes e458bca1a. #937 2026-02-04 19:04:13 -08:00
Enrico Ros ea984f3ddf Security: anti-dns-spoofing matching 2026-02-04 18:49:31 -08:00
Enrico Ros a9d3e3dead CC: llms: verify-parameters 2026-02-04 18:49:31 -08:00
Enrico Ros 5499e57205 Tools: sweep: json: fold some sweeps into a 'tools' array 2026-02-04 17:45:50 -08:00
Enrico Ros 6f8ee0247f Tools: sweep: baselines 2026-02-04 17:33:23 -08:00
Enrico Ros 05ee5cc3d1 Tools: sweep: merge id-based parameters 2026-02-04 17:12:36 -08:00
Enrico Ros cb6b569330 Tools: sweep: remove unnecessary configs 2026-02-04 17:05:30 -08:00
Enrico Ros 53073ff109 Tools: sweep: remove opanti summary 2026-02-04 17:05:16 -08:00
Enrico Ros 26d362d7a6 Tools: sweep: partition per-dialect 2026-02-04 16:40:35 -08:00
Enrico Ros 91d99e1a63 Tools: sweep: improvements for Gemini and Anthropic, and to save/load of results 2026-02-04 16:17:19 -08:00
Enrico Ros a20917c971 Tools: sweep: incremental output save 2026-02-04 15:23:00 -08:00
Enrico Ros af9bf9e5b3 Tools: sweep: parallel support 2026-02-04 15:13:39 -08:00
Enrico Ros 46b473b8a0 Tools: sweep: Gemini sweeps. #953 2026-02-04 15:03:31 -08:00
Enrico Ros e2b4028223 Tools: sweep: only select from the predefined sweeps inside the config file, #944, #947, #953 2026-02-04 14:52:09 -08:00
Enrico Ros bac2a31782 Tools: sweep: add OpenAI image generation and search tool presence, #944, #947, #953 2026-02-04 14:51:57 -08:00
Enrico Ros 3d20e6bf91 Tools: llm parameter sweep. #944, #947, #953 2026-02-04 14:12:44 -08:00
Enrico Ros 9337216092 tRPC fetchers: console logging on connect/response/parsing can be disabled via env 2026-02-04 14:12:44 -08:00
Enrico Ros cd35d0ca55 Add TSX as a dev dependency 2026-02-04 10:54:44 -08:00
Enrico Ros 6d591b98b8 Roll packages (deep) 2026-02-04 10:53:53 -08:00
Enrico Ros 486381ab9d Sprites: run the gen node native, as module 2026-02-04 10:34:14 -08:00
Enrico Ros c619b4debb ListItemGroupCollapser: sm everywhere 2026-02-04 01:35:55 -08:00
Enrico Ros 383a3085ec Chat Dropdown: adapt Optima Dropdown. #955 2026-02-04 01:03:18 -08:00
Enrico Ros 5a3bb3d817 Chat Dropdown: adapt llmSelect. #955 2026-02-04 01:03:02 -08:00
Enrico Ros d1ba758887 Chat Dropdown: reuse toggleable set and Collapser. #955 2026-02-04 00:55:39 -08:00
Enrico Ros 6fef149997 Sprites: port models-modal 2026-02-03 23:38:50 -08:00
Enrico Ros aad3b16ff2 Sprites: port useLLMSelect, Beam 2026-02-03 23:38:50 -08:00
Enrico Ros 819ba14523 Sprites: Generate and wire 2026-02-03 23:38:50 -08:00
Enrico Ros d3c25ca16a Sprites: update generator with class 2026-02-03 23:38:27 -08:00
Enrico Ros 99a65f72ac Sprites: generator update 2026-02-03 22:35:55 -08:00
Enrico Ros be9080d392 Sprites: generator 2026-02-03 22:35:55 -08:00
Enrico Ros f32d991413 Chat Dropdown: reusable parts. #955 2026-02-03 22:34:12 -08:00
Enrico Ros 94b68ebefa CloseablePopup: memo. #955 2026-02-03 22:33:35 -08:00
Enrico Ros 0450eaaceb CC: rel:release-open 2026-02-03 09:20:11 -08:00
Enrico Ros 408c5ce088 Readme: update counter 2026-02-02 17:13:13 -08:00
319 changed files with 14994 additions and 4039 deletions
+1
@@ -0,0 +1 @@
commands/code/apply-issue-main.md
+56
@@ -0,0 +1,56 @@
---
description: Sync xAI Responses API implementation with latest upstream documentation
argument-hint: specific feature to check
---
Review the xAI Responses API implementation:
- xAI wire types: `src/modules/aix/server/dispatch/wiretypes/xai.wiretypes.ts` (xAI-specific request schema, tools)
- Request adapter: `src/modules/aix/server/dispatch/chatGenerate/adapters/xai.responsesCreate.ts` (AIX → xAI Responses API)
- Response parser: `src/modules/aix/server/dispatch/chatGenerate/parsers/openai.responses.parser.ts` (shared with OpenAI Responses)
- Dispatch routing: `src/modules/aix/server/dispatch/chatGenerate/chatGenerate.dispatch.ts` (dialect='xai' routing)
- OpenAI shared types: `src/modules/aix/server/dispatch/wiretypes/openai.wiretypes.ts` (InputItem/OutputItem schemas reused by xAI)
IMPORTANT context:
- We use ONLY the xAI Responses API (`POST /v1/responses`). We do NOT use the Chat Completions API (`/v1/chat/completions`) for xAI anymore.
- xAI's Responses API is similar to OpenAI's but has key differences - the skill should find what changed since our last sync.
- Response streaming/parsing reuses the OpenAI Responses parser since the format is compatible.
- We do NOT implement: Files API, Collections Search, Remote MCP tools, Voice Agent API, Image/Video generation, Batch API, or Deferred Completions.
Then take a look at the newest API information available. Try these sources, and be creative if some are blocked:
**Primary Sources (guide pages work well with WebFetch despite being JS-rendered):**
- Responses API Guide: https://docs.x.ai/docs/guides/chat
- Stateful Responses: https://docs.x.ai/docs/guides/responses-api
- Tools Overview: https://docs.x.ai/docs/guides/tools/overview
- Search Tools (web_search, x_search): https://docs.x.ai/docs/guides/tools/search-tools
- Code Execution Tool: https://docs.x.ai/docs/guides/tools/code-execution-tool
- Function Calling: https://docs.x.ai/docs/guides/function-calling
- Streaming: https://docs.x.ai/docs/guides/streaming-response
- Reasoning: https://docs.x.ai/docs/guides/reasoning
- Structured Outputs: https://docs.x.ai/docs/guides/structured-outputs
- Models & Pricing: https://docs.x.ai/developers/models
- Release Notes: https://docs.x.ai/developers/release-notes
- API Reference: https://docs.x.ai/developers/api-reference#create-new-response
**Alternative Sources if primary blocked:**
- xAI Python SDK: https://github.com/xai-org/xai-sdk-python
- Web Search for "xai grok api changelog 2026" or "xai responses api new features"
**If all blocked:** Explain what you attempted and ask user to provide documentation manually.
$ARGUMENTS
Check carefully for discrepancies between our implementation and the current API docs:
1. **Request fields**: Compare `XAIWire_API_Responses.Request_schema` against current docs - any new, changed, or deprecated parameters?
2. **Tool definitions**: Compare `XAIWire_Responses_Tools` - any new parameters on web_search/x_search/code_interpreter? Any new hosted tool types?
3. **Input/Output item types**: Any xAI-specific output items not handled by the shared OpenAI parser (e.g., x_search_call, web_search_call, code_interpreter_call)?
4. **Streaming events**: Any xAI-specific SSE event types beyond what the OpenAI Responses parser handles?
5. **Response shape**: Usage reporting differences, new fields in the response object?
6. **Adapter logic**: Message role mapping, content type handling, system message approach - still correct?
7. **Include options**: Any new values for the `include` array?
8. **Reasoning config**: Which models support it and with what values?
Prioritize breaking changes and new capabilities that would improve the user experience.
When making changes, add comments with date: `// [xAI, 2026-MM-DD]: explanation`
**Self-update this skill**: After completing the sync, if your research reveals that assumptions in THIS skill file (`.claude/commands/aix/sync-xai-api.md`) are wrong or outdated - e.g., new APIs we now implement, new tool types added, URLs moved, file paths changed - update this skill file to stay accurate for next time.
+34
@@ -0,0 +1,34 @@
---
description: Review in-flight changes for coherence, completeness, and quality
---
Review the current in-flight changes in the big-agi-private repository (dev branch, continuously rebased ~1800 commits on top of main).
**Step 1: Scope and read**
`git diff --stat` + `git status` for breadth. Then full `git diff` (if empty: `git diff --cached`, then `git diff HEAD~1`).
For every file in the diff, read surrounding context in the actual source file - the diff alone hides bugs in adjacent untouched code.
**Step 2: Reverse-engineer the intent**
From the diff, determine the **what**, **how**, and **why**. Present this concisely so the author can confirm or correct it,
then continue to the full review in the same response.
**Step 3: Validate**
Run `tsc --noEmit --pretty` and `npm run lint` (in parallel). Report any errors with the review.
If the diff removes/renames identifiers, grep the codebase for stale references to the OLD names. This catches broken guards, stale imports, and incomplete migrations.
**Step 4: Deep review**
Evaluate every file in the diff.
Leave no stone unturned: correctness, coherence, completeness, excess, generalization, maintenance burden,
codebase consistency, etc.
**Step 5: Prioritized next steps**
Think about what happens when the next developer touches this code.
Rank findings by severity (bug > correctness > cleanup > cosmetic). Be specific about what to change and where.
Remember the design values for this codebase: orthogonal features, features that generalize well, modularized and reusable code,
type-discriminated data, optimized code, zero maintenance burden. Minimize future pain.
@@ -4,17 +4,46 @@ description: Update Anthropic model definitions with latest pricing and capabili
Update `src/modules/llms/server/anthropic/anthropic.models.ts` with latest model definitions.
Reference `src/modules/llms/server/llm.server.types.ts` and `src/modules/llms/server/models.mappings.ts` for context only. Focus on the model file, do not descend into other code.
Reference files (for context only, do not modify):
- `src/modules/llms/server/llm.server.types.ts`
- `src/modules/llms/server/models.mappings.ts`
- `src/common/stores/llms/llms.parameters.ts`
**Primary Sources:**
- Models: https://docs.claude.com/en/docs/about-claude/models/overview
- Pricing: https://claude.com/pricing#api
- Deprecations: https://docs.claude.com/en/docs/about-claude/model-deprecations
**Workflow: Start with recent changes, then verify the full model list.**
**Fallbacks if blocked:** Check Anthropic TypeScript SDK at https://github.com/anthropics/anthropic-sdk-typescript, search "anthropic models latest pricing", "anthropic latest models", or search GitHub for latest model prices and context windows
**Primary Sources (append `.md` to any path for clean markdown):**
1. Recent changes: https://platform.claude.com/docs/en/release-notes/overview.md
2. Models & IDs: https://platform.claude.com/docs/en/about-claude/models/overview.md
3. Pricing (base, cache, batch, long context): https://platform.claude.com/docs/en/about-claude/pricing.md
4. Deprecations & retirement dates: https://platform.claude.com/docs/en/about-claude/model-deprecations.md
**Discovering feature docs:** The release notes and models overview markdown
contain inline links to feature-specific pages (thinking modes, effort,
context windows, what's-new pages, etc.). When a new capability is
referenced, follow those links - append `.md` to get markdown. Examples of
pages you might discover this way:
- `about-claude/models/whats-new-claude-*` - per-generation changes
- `build-with-claude/extended-thinking` - thinking budget configuration
- `build-with-claude/effort` - effort parameter levels
- `build-with-claude/adaptive-thinking` - adaptive thinking mode
**Fallback web pages** (crawl if `.md` paths break or structure changes):
- https://platform.claude.com/docs/en/about-claude/models/overview
- https://platform.claude.com/docs/en/about-claude/pricing
- https://platform.claude.com/docs/en/release-notes/overview
- https://claude.com/pricing
**Fallbacks if blocked:** Check the Anthropic TypeScript SDK at
https://github.com/anthropics/anthropic-sdk-typescript, or web-search
for "anthropic models latest pricing" / "anthropic latest models".
**Important:**
- Review the full model list for additions, removals, and price changes
- For new models: check which `parameterSpecs` are needed (thinking mode,
effort levels, 1M context, skills, web tools) by reading the linked
feature docs and comparing with existing model entries
- When thinking/effort semantics change between generations
(e.g. adaptive vs manual thinking), document in comments
- Minimize whitespace/comment changes, focus on content
- Preserve comments to make diffs easy to review
- Flag broken links or unexpected content
@@ -0,0 +1,91 @@
---
description: Update/validate dynamic vendor model parsers (OpenRouter, TogetherAI, Alibaba, Azure, Novita, ChutesAI, FireworksAI, TLUS, LM Studio, LocalAI, FastAPI)
---
Validate that the dynamic (API-fetched) vendor model parsers are up to date and not silently broken.
These vendors do NOT have hardcoded model lists - they fetch models from APIs at runtime. But their parsers, filters, heuristic detection, and capability mapping can break if upstream APIs change. This skill covers all dynamic vendors NOT covered by the other `llms:update-models-{vendor}` skills.
## Vendors to Validate
### High Risk
**OpenRouter** - `src/modules/llms/server/openai/models/openrouter.models.ts`
- Most complex parser. Vendor-specific parameter inheritance (Anthropic thinking variants, Gemini thinking/image, OpenAI reasoning effort, xAI/DeepSeek reasoning).
- Hardcoded family ordering list (lines ~24-37) - check if new leading vendors are missing.
- Hardcoded old/deprecated model hiding list (lines ~39-49) - check if stale.
- Cache pricing detection (Anthropic-style vs OpenAI-style) - verify format still valid.
- Variant injection for Anthropic thinking/non-thinking - verify still correct.
- Reference: https://openrouter.ai/docs/models
### Medium Risk
**Novita** - `src/modules/llms/server/openai/models/novita.models.ts`
- Features array mapping (`function-calling`, `reasoning`, `structured-outputs`) and input modalities parsing.
- Pricing unit conversion (hundredths of cent per million → dollars per 1K).
- Hostname heuristic: `novita.ai`.
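The unit conversion described above can be sketched as follows. This is an illustration of the arithmetic only; the function name and field handling are hypothetical, and the real parser in `novita.models.ts` is the authority:

```typescript
// Hypothetical sketch: convert a Novita-style price quoted in hundredths of a
// cent per million tokens into dollars per 1K tokens.
function novitaPriceToDollarsPer1kTokens(hundredthsOfCentPerMillion: number): number {
  // 1 hundredth of a cent = $0.0001; per-million -> per-thousand divides by 1,000
  return hundredthsOfCentPerMillion * 0.0001 / 1000;
}

// e.g. a listed price of 2000 (= $0.20 per 1M tokens) is $0.0002 per 1K tokens
```

Getting either factor wrong scales prices by 1,000x, which is exactly the class of silent breakage this validation pass is meant to catch.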
**ChutesAI** - `src/modules/llms/server/openai/models/chutesai.models.ts`
- Custom `max_model_len` field for context window.
- Assumes all models support Vision + Functions (aggressive).
- Hostname heuristic: `.chutes.ai`.
**FireworksAI** - `src/modules/llms/server/openai/models/fireworksai.models.ts`
- Relies on provider capability flags: `supports_chat`, `supports_image_input`, `supports_tools`.
- Hostname heuristic: `fireworks.ai/`.
**TogetherAI** - `src/modules/llms/server/openai/models/together.models.ts`
- Type allow-list (`type: 'chat'`), vision detection by string match.
- Custom wire schema with pricing conversion.
**TLUS** - `src/modules/llms/server/openai/models/tlusapi.models.ts`
- Detected by response structure (`total_models`, `free_models`, `pro_models` fields).
- Capability enum mapping (`text`, `vision`, `audio`, `tool-calling`, `reasoning`, `websearch`).
- Tier-based pricing (`free` vs paid).
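A minimal sketch of the response-structure detection described above, assuming the three listed fields are top-level keys of the listing JSON (the function name is illustrative; the real heuristic lives in `tlusapi.models.ts`):

```typescript
// Hypothetical sketch: detect a TLUS-style model listing by its response shape
// rather than by hostname.
function looksLikeTlusListing(json: unknown): boolean {
  return typeof json === 'object'
    && json !== null
    && 'total_models' in json
    && 'free_models' in json
    && 'pro_models' in json;
}
```

Structure-based detection like this breaks silently if upstream renames any of the discriminating fields, which is why it sits in the medium-risk tier.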
**Alibaba** - `src/modules/llms/server/openai/models/alibaba.models.ts`
- Model list was cleared (dynamic-only). Exclusion patterns for non-chat models.
- Assumes 128K context and Vision+Functions for all models (overly permissive).
- Check if hardcoded data should be restored now that naming has stabilized.
### Low Risk (local/generic - validate only if issues reported)
**Azure** - `src/modules/llms/server/openai/models/azure.models.ts`
- Custom deployments API, not `/v1/models`. User-specific. Deployment name fallback logic.
**LM Studio** - `src/modules/llms/server/openai/models/lmstudio.models.ts`
- Local service, native API (`/api/v1/models`). GGUF metadata parsing, capability flags.
**LocalAI** - `src/modules/llms/server/openai/models/localai.models.ts`
- Local service. String-based hide list, vision/reasoning detection by name pattern.
**FastAPI** - `src/modules/llms/server/openai/models/fastapi.models.ts`
- Generic passthrough. Detected by `owned_by === 'fastchat'`. Minimal parsing.
## Validation Checklist
For each vendor (prioritize High > Medium > Low):
1. **Read the parser file** and check for:
- Deny/allow lists that may be stale (new model families missing)
- Capability assumptions that may be wrong (e.g. "all models support vision")
- Field names that may have changed upstream
- Pricing conversion math that may use wrong units
2. **Check upstream docs** (where available) for:
- API response schema changes
- New model types or capability fields
- Deprecated fields
3. **Cross-reference with OpenRouter** (aggregator):
- OpenRouter surfaces models from many of these vendors
- If OpenRouter shows capabilities that a vendor's parser misses, the parser is stale
4. **Fix issues found** - update parsers, filters, deny lists as needed.
5. Run `tsc --noEmit` after changes.
**Important:**
- Do NOT convert dynamic vendors to hardcoded lists - the dynamic approach is intentional
- Focus on parser correctness, not model coverage
- Flag any vendor whose API response format seems to have changed substantially
@@ -0,0 +1,57 @@
---
description: Verify model parameterSpecs match API-validated sweep data
argument-hint: openai | anthropic | gemini | xai (or empty for all)
---
# Verify LLM Parameters
Compare model `parameterSpecs` in definition files against API-validated sweep data.
If `$ARGUMENTS` is provided, verify only that dialect, which includes reading the pair of sweep results and model definitions. Otherwise verify all four, reading the pairs in sequence.
## Files
**Sweep results** (source of truth for select parameters):
- `tools/develop/llm-parameter-sweep/llm-{dialect}-parameters-sweep.json`
By the time you see these files, the repo owner has already updated them via `tools/develop/llm-parameter-sweep/sweep.sh` (very long-running, 15 min per vendor).
**Model definitions (source of truth for model definitions for the user and application, including constants, interfaces, supported parameters and sometimes allowed parameter values)**:
- OpenAI: `src/modules/llms/server/openai/models/openai.models.ts`
- Anthropic: `src/modules/llms/server/anthropic/anthropic.models.ts`
- Gemini: `src/modules/llms/server/gemini/gemini.models.ts`
- xAI: `src/modules/llms/server/openai/models/xai.models.ts`
## Task
The sweep data is the source of truth for allowed model parameter values or value ranges.
For each model in the sweep, verify the model definition exposes exactly those capabilities - no more, no less. This includes:
- The parameter is present in parameterSpecs
- The paramId variant covers exactly the values from the sweep, if applicable
- etc.
Report models where the definition doesn't match the sweep.
## Parameter Mapping
Example parameter mapping. Note that new parameters may have been added to both the definition and the sweep.
The objective of the sweep is to hint at model definition values, but the model definitions are what matters for Big-AGI
and need to be updated carefully, otherwise thousands of clients may break.
| Dialect | Sweep Key | Model paramId |
|-----------|--------------------------|------------------------------|
| OpenAI | `oai-reasoning-effort` | `llmVndOaiEffort` |
| OpenAI | `oai-verbosity` | `llmVndOaiVerbosity` |
| OpenAI | `oai-image-generation` | `llmVndOaiImageGeneration` |
| OpenAI | `oai-web-search` | `llmVndOaiWebSearchContext` |
| Anthropic | `ant-effort` | `llmVndAntEffort` |
| Anthropic | `ant-thinking-budget` | `llmVndAntThinkingBudget` |
| Gemini | `gemini-thinking-level` | `llmVndGemEffort` |
| Gemini | `gemini-thinking-budget` | `llmVndGeminiThinkingBudget` |
| xAI | `xai-web-search` | `llmVndXaiWebSearch` |
## Output
First report, for every model, the expected values from the sweep, then the actual values from the definition, then the mismatches.
Finally, make one table per dialect listing all models with mismatches and the specific issues.
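The comparison this skill performs can be sketched as below. The interfaces and the two mapped IDs are simplified illustrations, assuming the sweep stores per-model value lists; the repository's actual types and the full mapping table above are authoritative:

```typescript
// Simplified shapes, for illustration only.
interface SweepModel { id: string; parameters: Record<string, string[]>; }
interface ModelDef { id: string; parameterSpecs: { paramId: string; values?: string[] }[]; }

// Subset of the mapping table, as an example.
const SWEEP_TO_PARAM_ID: Record<string, string> = {
  'oai-reasoning-effort': 'llmVndOaiEffort',
  'oai-verbosity': 'llmVndOaiVerbosity',
};

// Report where a model definition exposes more or less than the sweep validated.
function findMismatches(sweep: SweepModel, def: ModelDef): string[] {
  const issues: string[] = [];
  for (const [sweepKey, sweepValues] of Object.entries(sweep.parameters)) {
    const paramId = SWEEP_TO_PARAM_ID[sweepKey];
    if (!paramId) continue; // unmapped sweep key: report separately
    const spec = def.parameterSpecs.find(s => s.paramId === paramId);
    if (!spec) { issues.push(`missing ${paramId}`); continue; }
    const defValues = spec.values ?? [];
    const extra = defValues.filter(v => !sweepValues.includes(v));
    const missing = sweepValues.filter(v => !defValues.includes(v));
    if (extra.length) issues.push(`${paramId}: extra values ${extra.join(', ')}`);
    if (missing.length) issues.push(`${paramId}: missing values ${missing.join(', ')}`);
  }
  return issues;
}
```

"Extra" values are the dangerous direction: the definition promises a capability the API never validated.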
+113
@@ -0,0 +1,113 @@
---
description: Execute the Big-AGI release process
argument-hint: version like "2.0.4" or empty to auto-increment patch
---
Execute the release process for Big-AGI. Go step-by-step, waiting for user approval between major steps.
## Step 1: Determine Version
If `$ARGUMENTS` provided, use it. Otherwise, read `package.json` and increment patch version.
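The auto-increment rule in this step can be sketched as follows. This is an illustration of the versioning convention only, not a script the release process actually runs:

```typescript
// Bump the patch component of a semver-style version, e.g. for package.json.
function bumpPatch(version: string): string {
  const [major, minor, patch] = version.split('.').map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

// bumpPatch('2.0.3') yields '2.0.4'
```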
## Step 2: Update Files
1. **package.json** - Update `version` field
2. **src/common/app.release.ts** - Increment `Monotonics.NewsVersion` (e.g., 203 → 204)
3. **src/apps/news/news.data.tsx** - Add new entry at top of `NewsItems` array
For the news entry, ask user for release name and key highlights.
**News entry style** - Draft is a starting point, user will refine:
- Models lead when model-heavy, grouped together
- Callout features get own bullet with colon explanation
- UX items grouped, minimal bold
- Fixes last, brief
- Release name stays subtle - don't oversell the theme
Use `<B>`, `<B issue={N}>`, `<B href='url'>`. Re-read file after user edits.
4. User runs `npm i` to update lockfile
## Step 3: README
Update `README.md`:
- Line ~46: Update model examples if new flagship models
- Line ~147: Add release bullet above previous version
**Style:** `- Open X.Y.Z: **Name** feature1, feature2, feature3`
## Step 4: Git Operations
User commits changes, then:
```bash
git tag vX.Y.Z
git push opensource vX.Y.Z
```
## Step 5: GitHub Release
Create release with `gh release create`. Structure:
```
# Big-AGI X.Y.Z - Name
## What's New
### **Headline Feature**
1-2 sentences explaining the main theme. Then bullet points for specifics.
### **Also New**
- Bullet list of other features
- Keep it scannable
**Full Changelog**: https://github.com/enricoros/big-AGI/compare/vPREV...vNEW
## Get Started
Available now at [big-agi.com](https://big-agi.com), via Docker, or self-host from source.
```
## Step 6: Announcements
Draft for user to post:
**Twitter** - Thematic, not feature dumps. Talk about what it means, not what it lists:
```
Big-AGI Open X.Y.Z is out!
[Theme - e.g., "Lots of love to models: native support, latest protocols, total configuration - puts you in control."]
[One more angle, natural prose]
[Optional link]
```
**Discord** - Structured with bold headers:
```
## :partyblob: Big-AGI **Open** X.Y.Z
**Category:** Items
**Category:** Items
**More:** Count of commits/fixes
```
## Tone Guide
**Good:**
- "Lots of love to models: native support, latest protocols, total configuration"
- "UX quality of life improvements, from Google Drive to message reorder"
- "Gemini 3 Flash support with 4-level thinking: high, medium, low, minimal"
**Bad:**
- "Rolling out the red carpet for top models!" (too salesy)
- "Enhanced and streamlined the robust model experience" (corporate speak)
- "Added support for Gemini 3 Flash model with multiple thinking levels" (verb prefix, vague)
## Reference
Find previous copy at:
- **GitHub releases:** https://github.com/enricoros/big-AGI/releases
- **News entries:** `src/apps/news/news.data.tsx`
- **README:** `README.md` release notes section
- **Changelog:** https://big-agi.com/changes
Match the existing tone - professional but human, specific not generic, features not marketing.
+5
View File
@@ -4,6 +4,7 @@
"Bash(cat:*)",
"Bash(cp:*)",
"Bash(curl:*)",
"Bash(eslint:*)",
"Bash(find:*)",
"Bash(gh issue list:*)",
"Bash(gh issue view:*)",
@@ -13,8 +14,10 @@
"Bash(git grep:*)",
"Bash(git log:*)",
"Bash(git ls-tree:*)",
"Bash(git mv:*)",
"Bash(git show:*)",
"Bash(grep:*)",
"Bash(head:*)",
"Bash(ls:*)",
"Bash(mkdir:*)",
"Bash(node:*)",
@@ -26,7 +29,9 @@
"Bash(rg:*)",
"Bash(rm:*)",
"Bash(sed:*)",
"Bash(tail:*)",
"Bash(tree:*)",
"Bash(tsc:*)",
"Read(//tmp/**)",
"Skill(llms:update-models*)",
"WebFetch",
+12 -11
View File
@@ -12,27 +12,30 @@ on:
jobs:
claude-dm:
# Only allow repository owner to trigger DMs with @claude (blocks other users and bots)
if: |
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) ||
github.actor == 'enricoros' &&
github.triggering_actor == 'enricoros' &&
((github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) ||
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude'))
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')))
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
pull-requests: write
contents: write # Required for code creation and commits
issues: write
id-token: write
pull-requests: write
actions: read # Required for Claude to read CI results on PRs
id-token: write # required to use OIDC to authenticate to Claude Code API
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 1
fetch-depth: 0 # 1 -> 0: full history helps with git blame, etc.
- name: Run Claude Code DM Response
id: claude
@@ -41,6 +44,7 @@ jobs:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
# Security: Only users with write access can trigger (DMs allow code execution)
# Note: contents:write permission enables code creation and commits
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
@@ -49,10 +53,7 @@ jobs:
# Optional: Add claude_args to customize behavior and configuration
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
# claude_args: '--allowed-tools Bash(gh pr:*)'
# disabling opus for now claude-opus-4-1-20250805
# former: claude-sonnet-4-5-20250929
claude_args: |
--model claude-opus-4-5-20251101
--model claude-opus-4-6
--max-turns 100
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools,SlashCommand"
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),SlashCommand"
+15 -9
View File
@@ -2,7 +2,7 @@ name: Claude Code Auto-Triage Issues
on:
issues:
types: [ opened, assigned ]
types: [ opened ]
jobs:
claude-issue-triage:
@@ -17,15 +17,15 @@ jobs:
permissions:
contents: read
issues: write
pull-requests: write
id-token: write
pull-requests: read # was write, but we're not altering PRs here
actions: read
id-token: write # required to use OIDC to authenticate to Claude Code API
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 1
fetch-depth: 0 # 1 -> 0: full history helps with git blame, etc.
- name: Analyze issue and provide help
uses: anthropics/claude-code-action@v1
@@ -35,6 +35,7 @@ jobs:
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_non_write_users: '*'
# track_progress: true # Enables tracking comments
show_full_output: ${{ github.event.repository.private }} # security: do not log verbosely in private repo
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
@@ -54,9 +55,11 @@ jobs:
**Use web search**: When potentially outside Big-AGI (e.g. user configuration), search the web for similar errors or related issues
**Provide a solution**:
- Provide multiple solutions if uncertain, and say so
- If you can fix it in code, propose the fix
- If possible also suggest fixes or workarounds for immediate relief
- Analyze the code and suggest specific fixes with code examples
- If possible also suggest fixes or workarounds for immediate relief
- Reference specific files and line numbers
- Suggest workarounds for immediate relief if applicable
- Use web search to find similar issues and solutions
- Test selectively; run `npm install` and the build if needed to verify the solution
2. Always add the 'claude-triage' issue label to indicate this issue was triaged by Claude
3. Comment with:
@@ -65,13 +68,16 @@ jobs:
- Next steps or clarification needed
- Link duplicates if found
Remember the design values for this codebase: orthogonal features, features that generalize well, modularized and reusable code, type-discriminated data, optimized code, and zero maintenance burden. Minimize future pain.
IMPORTANT: You are in READ-ONLY triage mode. Analyze and suggest solutions in your comment, but do NOT attempt to push code changes.
If you're uncertain, say so and suggest next steps.
If you write any code make sure that it compiles and that you push it.
Be welcoming, helpful, professional, solution-focused and no-BS.
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
claude_args: |
--model claude-opus-4-5-20251101
--model claude-opus-4-6
--max-turns 75
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools,SlashCommand"
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(npm run:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),SlashCommand"
-77
View File
@@ -1,77 +0,0 @@
name: Claude Code PR Review
on:
pull_request:
types: [ opened, synchronize, ready_for_review ]
# Limit branches
branches: [ main, dev, v1 ]
# Optional: Only run on specific file changes
# paths:
# - "src/**/*.ts"
# - "src/**/*.tsx"
jobs:
claude-pr-review:
# Skip draft PRs
# Optional: filter authors: github.event.pull_request.user.login != 'enricoros'
if: |
github.event.pull_request.draft == false
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
pull-requests: write
issues: read
id-token: write
actions: read # Required for Claude to read CI results on PRs
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 1
- name: Run PR Review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
# Security: Allow any user to trigger reviews (read-only PR analysis is safe)
github_token: ${{ secrets.GITHUB_TOKEN }}
allowed_non_write_users: '*'
# track_progress: true # Enables tracking comments
# This setting allows Claude to read CI results on PRs
additional_permissions: |
actions: read
prompt: |
REPO: ${{ github.repository }}
PR NUMBER: ${{ github.event.pull_request.number }}
Please review this pull request and provide feedback on:
- Potential bugs or issues
- Adherence to Big-AGI architecture and design patterns
- Code quality and best practices, including TypeScript types, error handling, and edge cases
- Performance considerations: bundle size, React patterns, streaming efficiency
- Security concerns if applicable
Use the repository's CLAUDE.md for guidance on style and conventions.
Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
Use `gh pr review comment` for inline suggestions on specific lines.
IMPORTANT: After completing your review, always add the 'claude-review' label to the PR to indicate it was reviewed by Claude:
gh pr edit ${{ github.event.pull_request.number }} --add-label "claude-review"
Be constructive, helpful, no-BS, and specific with file:line references.
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
claude_args: |
--model claude-opus-4-5-20251101
--max-turns 100
--allowedTools "Edit,Read,Write,WebFetch,WebSearch,Bash(cat:*),Bash(cp:*),Bash(find:*),Bash(git branch:*),Bash(grep:*),Bash(ls:*),Bash(mkdir:*),Bash(gh issue:*),Bash(gh search:*),Bash(gh label:*),Bash(gh pr:*),mcp__chrome-devtools"
+9 -9
View File
@@ -57,10 +57,10 @@ jobs:
fetch-depth: 1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Log in to the Container registry
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -68,7 +68,7 @@ jobs:
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
labels: |
@@ -79,7 +79,7 @@ jobs:
- name: Build and push by digest
id: build
uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
uses: docker/build-push-action@d08e5c354a6adb9ed34480a06d141179aa583294 # v7.0.0
with:
context: .
file: Dockerfile
@@ -102,7 +102,7 @@ jobs:
touch "${{ runner.temp }}/digests/${digest#sha256:}"
- name: Upload digest
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6.0.0
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: ${{ runner.temp }}/digests/*
@@ -125,17 +125,17 @@ jobs:
run: echo "IMAGE_NAME_LC=${IMAGE_NAME,,}" >> $GITHUB_ENV
- name: Download digests
uses: actions/download-artifact@37930b1c2abaa49bbe596cd826c3c89aef350131 # v7.0.0
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
path: ${{ runner.temp }}/digests
pattern: digests-*
merge-multiple: true
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@8d2750c68a42422c14e847fe6c8ac0403b4cbd6f # v3.12.0
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Log in to the Container registry
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3.7.0
uses: docker/login-action@b45d80f862d83dbcd57f89517bcf500b2ab88fb2 # v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -143,7 +143,7 @@ jobs:
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
uses: docker/metadata-action@030e881283bb7a6894de51c315a6bfe6a94e05cf # v6.0.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
+1
View File
@@ -0,0 +1 @@
24
+89 -94
View File
@@ -1,22 +1,42 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Guidance to Claude Code when working with code in this repository.
## Development Commands
```bash
# Targeted Code Quality (safe while dev server runs)
npx tsc --noEmit # Type check without building
npx eslint src/path/to/file.ts # Lint specific file
npm run lint # Lint entire project
```
## Architecture Overview
Big-AGI is a Next.js 15 application with a modular architecture built for advanced AI interactions. The codebase follows a three-layer structure with distinct separation of concerns.
Big-AGI is a Next.js 15 application with a sophisticated modular architecture built for professional AI interactions.
### Development Commands
Dev servers may already be running on ports 3000, 3001, 3002, or 3003 (not necessarily this app - other projects may occupy these ports). Never start or stop dev servers; let the user do it.
```bash
# Validate (~5s, safe while dev server runs, do NOT use `next build` ~45s for same checks)
tsc --noEmit --pretty && npm run lint # Type check (~3.5s) + ESLint (~2s)
eslint src/path/to/file.ts # Lint specific file
# Full build (~60s+, only when suspecting runtime/bundle issues)
npm run build # next build runs compile+lint+types but stops at first type-error file; tsc shows all at once
# Database & External Services
# npm run supabase:local-update-types # Generate TypeScript types
# npm run stripe:listen # Listen for Stripe webhooks
```
### Git/GitHub remotes
The `gh` command is available to interact with GitHub from the terminal, but **NEVER PUSH TO ANY BRANCH**. The user manages all 'write' git operations.
- `opensource` -> `enricoros/big-AGI` (public, default branch: `main`, MIT) - community issues/PRs/releases
- `private` -> `big-agi/big-agi-private` (private, default branch: `dev`) - main dev repo with `dev`->`staging`->`prod` pipeline
### Core Directory Structure
You are started from the root of the repository (where the git folder is and where scripts should be run from).
**ISSUE ALL COMMANDS FROM THE ROOT. NEVER RUN COMPOUND `cd` COMMANDS LIKE `cd some-folder && command` - ALWAYS RUN `command` FROM THE ROOT.**
The directory structure is as follows:
```
/app/api/ # Next.js App Router (API routes only, mostly -> /src/server/)
/pages/ # Next.js Pages Router (file-based, mostly -> /src/apps/)
@@ -31,11 +51,11 @@ Big-AGI is a Next.js 15 application with a modular architecture built for advanc
### Key Technologies
- **Frontend**: Next.js 15, React 18, Material-UI Joy, Emotion (CSS-in-JS)
- **State Management**: Zustand with localStorge/IndexedDB (single cell) persistence
- **API Layer**: tRPC with React Query for type-safe communication
- **State Management**: Zustand with localStorage/IndexedDB (single cell) persistence
- **API Layer**: tRPC with TanStack React Query for type-safe communication
- **Runtime**: Edge Runtime for AI operations, Node.js for data processing
### Apps Architecture Pattern
### "Apps" Architecture Pattern
Each app in `/src/apps/` is a self-contained feature module:
- Main component (`App*.tsx`)
@@ -51,20 +71,20 @@ Modules in `/src/modules/` provide reusable business logic:
- **`aix/`** - AI communication framework for real-time streaming
- **`beam/`** - Multi-model AI reasoning system (scatter/gather pattern)
- **`blocks/`** - Content rendering (markdown, code, images, etc.)
- **`llms/`** - Language model abstraction supporting 16 vendors
- **`llms/`** - Language model abstraction supporting 20+ vendors
### Key Subsystems & Their Patterns
#### 1. AIX - Real-time AI Communication
#### AIX - Real-time AI Communication
**Location**: `/src/modules/aix/`
**Pattern**: Client-server streaming architecture with provider abstraction
- **Client** tRPC **Server** **AI Providers**
- **Client** -> tRPC -> **Server** -> **AI Providers**
- Handles streaming/non-streaming responses with batching and error recovery
- Particle-based streaming: `AixWire_Particles` `ContentReassembler` `DMessage`
- Particle-based streaming: `AixWire_Particles` -> `ContentReassembler` -> `DMessage`
- Provider-agnostic through adapter pattern (OpenAI, Anthropic, Gemini protocols)
#### 3. Beam - Multi-Model Reasoning
#### Beam - Multi-Model Reasoning
**Location**: `/src/modules/beam/`
**Pattern**: Scatter/Gather for parallel AI processing
@@ -73,15 +93,24 @@ Modules in `/src/modules/` provide reusable business logic:
- Real-time UI updates via vanilla Zustand stores
- BeamStore per conversation via ConversationHandler
#### 4. Conversation Management
#### Conversation Management
**Location**: `/src/common/stores/chat/` and `/src/common/chat-overlay/`
**Pattern**: Overlay architecture with handler per conversation
- `ConversationHandler` orchestrates chat, beam, ephemerals
- Per-chat stores: `PerChatOverlayStore` + `BeamStore`
- Message structure: `DMessage` `DMessageFragment[]`
- Message structure: `DMessage` -> `DMessageFragment[]`
- Supports multi-pane with independent conversation states
#### Layout System ("Optima")
The Optima layout system provides:
- **Responsive design** adapting desktop/mobile
- **Drawer(left)/Toolbar/Panel(right)** composition
- **Portal-based rendering** for flexible component placement
Located in `/src/common/layout/optima/`
### Storage System
Big-AGI uses a local-first architecture with Zustand + IndexedDB:
@@ -89,7 +118,6 @@ Big-AGI uses a local-first architecture with Zustand + IndexedDB:
- **localStorage** for persistent settings/all storage (via Zustand persist middleware)
- **IndexedDB** for persistent chat-only storage (via Zustand persist middleware) on a single key-val cell
- **Local-first** architecture with offline capability
- **Migration system** for upgrading data structures across versions
Key storage patterns:
- Stores use `createIDBPersistStorage()` for IndexedDB persistence
@@ -101,16 +129,6 @@ Located in `/src/common/stores/` with stores like:
- `chat/store-chats.ts`: Conversations and messages
- `llms/store-llms.ts`: Model configurations
### Layout System ("Optima")
The Optima layout system provides:
- **Responsive design** adapting desktop/mobile
- **Drawer/Panel/Toolbar** composition
- **Split-pane support** for multi-conversation views
- **Portal-based rendering** for flexible component placement
Located in `/src/common/layout/optima/`
### State Management Patterns
1. **Global Stores** (Zustand with IndexedDB persistence)
@@ -122,6 +140,7 @@ Located in `/src/common/layout/optima/`
2. **Per-Instance Stores** (Vanilla Zustand)
- `store-beam_vanilla`: Beam scatter/gather state
- `store-perchat_vanilla`: Chat overlay state
- `store-attachment-drafts_vanilla`: Attachment drafts
- High-performance, no React integration
3. **Module Stores**
@@ -131,94 +150,60 @@ Located in `/src/common/layout/optima/`
### User Flows & Interdependencies
#### Chat Message Flow
1. User input `Composer` `DMessage` creation
2. `ConversationHandler.messageAppend()` Store update
3. `_handleExecute()` / `ConversationHandler.executeChatMessages()` AIX client request
4. AIX streaming `ContentReassembler` UI updates
5. Zustand auto-persistence IndexedDB
1. User input -> `Composer` -> `DMessage` creation
2. `ConversationHandler.messageAppend()` -> Store update
3. `_handleExecute()` / `ConversationHandler.executeChatMessages()` -> AIX client request
4. AIX streaming -> `ContentReassembler` -> UI updates
5. Zustand auto-persistence -> IndexedDB
#### Beam Multi-Model Flow
1. User triggers Beam `BeamStore.open()` state update
1. User triggers Beam -> `BeamStore.open()` state update
2. Scatter: Parallel `aixChatGenerateContent()` to N models
3. Real-time ray updates UI progress
4. Gather: User selects fusion Combined output
5. Result New message in conversation
3. Real-time ray updates -> UI progress
4. Gather: User selects fusion -> Combined output
5. Result -> New message in conversation
### Development Patterns
#### TypeScript & Code Quality
- Type-safe through strict TypeScript interfaces
- Clear interface-first approach for modules and components
- Use latest TypeScript 5.9+ features
- Use forward-looking patterns to minimize future refactors (e.g., discriminated unions, `satisfies` operator, as const assertions)
- Type guards and exhaustiveChecks for robustness
- Type inference where possible
- Runtime validation with Zod schemas for API inputs/outputs (usually server-side, with the client importing as types the inferred types)
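The discriminated-union + exhaustive-check pattern from the list above can be sketched like this (hypothetical `DemoFragment` type - the real fragment types live in the codebase; the sketch is kept dependency-free, so the Zod layer is omitted):

```typescript
// Hypothetical fragment type, discriminated on the `ft` field
type DemoFragment =
  | { ft: 'text'; text: string }
  | { ft: 'image'; url: string };

// Exhaustive check: adding a new `ft` variant turns the `never`
// assignment below into a compile-time error, forcing an update here
function renderFragment(fragment: DemoFragment): string {
  switch (fragment.ft) {
    case 'text':
      return fragment.text;
    case 'image':
      return `[image: ${fragment.url}]`;
    default: {
      const _exhaustiveCheck: never = fragment;
      return _exhaustiveCheck;
    }
  }
}

console.log(renderFragment({ ft: 'text', text: 'hello' })); // -> hello
```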
#### Module Integration
- Each module exports its functionality through index files
- Modules register with central registries (e.g., `vendors.registry.ts`)
- Configuration objects define module behavior
- Type-safe integration through strict TypeScript interfaces
#### Component Patterns
- **Controlled components** with clear prop interfaces
- **Hook-based logic** extraction for reusability
- **Portal rendering** for overlays and modals
- **Suspense boundaries** for async operations
#### API Patterns
- **tRPC routers** for type-safe API endpoints
- **Zod schemas** for runtime validation
- **Middleware** for request/response processing
- **Edge functions** for performance-critical AI operations
- **tRPC procedures middleware** for authorization and logging (authorization is on a httpOnly cookie)
- **Edge functions** for performance-critical operations
## Security Considerations
- API keys stored client-side in localStorage (user-provided)
- Server-side API keys in environment variables only
#### Security Considerations
- API keys in environment variables only (server-side); on the client they're in localStorage for now, but we want to move away from this
- XSS protection through proper content escaping
- No credential transmission to third parties
## Knowledge Base
#### Writing Style
- **Never use emdashes (—).** Use normal dashes (-) instead, in all generated text, code comments, and documentation.
Architecture and system documentation is available in the `/kb/` knowledge base:
@kb/KB.md
## Common Development Tasks
### Testing & Quality
- Run `npm run lint` before committing
- Type-check with `npx tsc --noEmit`
- Type-check with `tsc --noEmit`
- Test critical user flows manually
### Adding a New LLM Vendor
1. Create vendor in `/src/modules/llms/vendors/[vendor]/`
2. Implement `IModelVendor` interface
3. Register in `vendors.registry.ts`
4. Add environment variables to `env.ts` (if server-side keys needed)
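Under assumptions about the interface shape (the actual `IModelVendor` fields live in the codebase and are richer), steps 2-3 can be sketched as:

```typescript
// Hypothetical, simplified vendor shape - illustrative only
interface DemoVendor {
  id: string;
  name: string;
  hasServerConfig: boolean; // whether server-side env keys are needed (step 4)
}

// Step 3: register in a central registry (mirrors vendors.registry.ts)
const DEMO_VENDOR_REGISTRY: Record<string, DemoVendor> = {
  acme: { id: 'acme', name: 'Acme AI', hasServerConfig: true },
};

console.log(DEMO_VENDOR_REGISTRY['acme'].name); // -> Acme AI
```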
### Debugging Storage Issues
- Check IndexedDB: DevTools Application IndexedDB `app-chats`
- Check IndexedDB: DevTools -> Application -> IndexedDB -> `app-chats`
- Monitor Zustand state: Use Zustand DevTools
- Check migration logs in console during rehydration
## Code Examples
### AIX Streaming Pattern
```typescript
// Efficient streaming with decimation
aixChatGenerateContent_DMessage(
llmId,
request,
{ abortSignal, throttleParallelThreads: 1 },
async (update, isDone) => {
// Real-time UI updates
}
);
```
### Model Registry Pattern
```typescript
// Registry pattern for extensibility
const MODEL_VENDOR_REGISTRY: Record<ModelVendorId, IModelVendor> = {
openai: ModelVendorOpenAI,
anthropic: ModelVendorAnthropic,
// ... 14 more vendors
};
```
## Server Architecture
@@ -226,9 +211,13 @@ The server uses a split architecture with two tRPC routers:
### Edge Network (`trpc.router-edge`)
Distributed edge runtime for low-latency AI operations:
- **AIX** - AI streaming and communication
- **LLM Routers** - Direct vendor integrations (OpenAI, Anthropic, Gemini, Ollama)
- **External Services** - ElevenLabs (TTS), Inworld (TTS), Google Search, YouTube transcripts
- **AIX** [1] - AI streaming and communication
- **LLM Routers** [1] - Vendor-specific operations such as list models (OpenAI, Anthropic, Gemini, Ollama)
- **Speex** [1] - Unified TTS router (ElevenLabs, Inworld, and other TTS vendors)
- **External Services** - Google Search, YouTube transcripts
[1]: also supports client-side fetch (CSF) via client-side inclusion (rebundling with stubs),
for direct browser-to-API communication when possible (CORS), to reduce latency and network barriers
Located at `/src/server/trpc/trpc.router-edge.ts`
@@ -240,3 +229,9 @@ Centralized server for data processing operations:
Located at `/src/server/trpc/trpc.router-cloud.ts`
**Key Pattern**: Edge runtime for AI (fast, distributed), Cloud runtime for data ops (centralized, Node.js)
@kb/KB.md
@kb/vision-inlined.md
As a side note, the product tiers (independent, non-VC-funded) are: **Open** (self-host, MIT) · **Free** (big-agi.com) · **Pro** (paid, includes Sync + backup). All tiers use the user's own API keys.
+22 -17
View File
@@ -10,7 +10,7 @@
[![Discord](https://img.shields.io/discord/1098796266906980422?style=for-the-badge&label=Discord&logo=discord&logoColor=white&labelColor=000000&color=purple)](https://discord.gg/MkH4qj2Jp9)
<br/>
[![GitHub Monthly Commits](https://img.shields.io/github/commit-activity/m/enricoros/big-agi?style=for-the-badge&x=3&logo=github&logoColor=white&label=commits&labelColor=000&color=green)](https://github.com/enricoros/big-agi/commits)
[![GHCR Pulls](https://img.shields.io/badge/ghcr.io-767k_dl-12b76a?style=for-the-badge&logo=Xdocker&logoColor=white&labelColor=000&color=A8E6CF)](https://github.com/enricoros/big-AGI/pkgs/container/big-agi)
[![GHCR Pulls](https://img.shields.io/badge/ghcr.io-800k_dl-12b76a?style=for-the-badge&logo=Xdocker&logoColor=white&labelColor=000&color=A8E6CF)](https://github.com/enricoros/big-AGI/pkgs/container/big-agi)
[![Contributors](https://img.shields.io/github/contributors/enricoros/big-agi?style=for-the-badge&x=2&logo=Xgithub&logoColor=white&label=cooks&labelColor=000&color=A8E6CF)](https://github.com/enricoros/big-AGI/graphs/contributors)
[![License: MIT](https://img.shields.io/badge/License-MIT-A8E6CF?style=for-the-badge&labelColor=000)](https://opensource.org/licenses/MIT)
<br/>
@@ -37,13 +37,13 @@ You need to think broader, decide faster, and build with confidence, then you ne
It comes packed with **world-class features** like Beam, and is praised for its **best-in-class AI chat UX**.
**As an independent, non-VC-funded project, Pro subscriptions at $10.99/mo fund development for everyone, including the free and open-source tiers.**
![LLM Vendors](https://img.shields.io/badge/18+_LLM_Services-500+_Models-black?style=for-the-badge&logo=anthropic&logoColor=white&labelColor=purple)&nbsp;
![LLM Vendors](https://img.shields.io/badge/20+_LLM_Services-500+_Models-black?style=for-the-badge&logo=anthropic&logoColor=white&labelColor=purple)&nbsp;
[![Feature Beam](https://img.shields.io/badge/AI--Validation-BEAM-000?style=for-the-badge&labelColor=purple)](https://big-agi.com/beam)&nbsp;
[![Feature Inspector](https://img.shields.io/badge/Expert_Mode-AI_Inspector-000?style=for-the-badge&labelColor=purple)](https://big-agi.com/inspector)
### What makes Big-AGI different:
**Intelligence**: with [Beam & Merge](https://big-agi.com/beam) for multi-model de-hallucination, native search, and bleeding-edge AI models like Opus 4.5, Nano Banana Pro, Kimi K2.5 or GPT 5.2 -
**Intelligence**: with [Beam & Merge](https://big-agi.com/beam) for multi-model de-hallucination, native search, and bleeding-edge AI models like Opus 4.6, Nano Banana Pro, Kimi K2.5 or GPT 5.4 -
**Control**: with personas, data ownership, requests inspection, unlimited usage with API keys, and *no vendor lock-in* -
and **Speed**: with a local-first, over-powered, zero-latency, madly optimized web app.
@@ -74,7 +74,7 @@ Purest AI outputs
</td>
<td align="center" valign="top">
Flow-state interface<br/>
Higly customizable<br/>
Highly customizable<br/>
Best-in-class UX
</td>
<td align="center" valign="top">
@@ -144,6 +144,7 @@ NOTE: this is a powerful tool - if you need a toy UI or clone, this ain't it.
## Release Notes
👉 **[See the Live Release Notes](https://big-agi.com/changes)**
- Open 2.0.4: **Hyper Params** **Opus 4.6**, **GPT-5.4**, **Gemini 3.1 Pro**, AWS Bedrock, parameter accuracy, Anthropic continuation/Fast mode
- Open 2.0.3: **Red Carpet** **Kimi K2.5**, **Gemini 3 Flash**, **GPT 5.2**, Google Drive, Inworld, Novita.ai, Speech/UX improvements
- Open 2.0.2: **Speex** multi-vendor speech synthesis, **Opus 4.5**, **Gemini 3 Pro**, **Nano Banana Pro**, **Grok 4.1**, **GPT-5.1**, **Kimi K2** + 280 fixes
@@ -182,8 +183,11 @@ The new architecture is solid and the speed improvements are real.
</details>
<details>
<summary>What's New in 1.16.1...1.16.10 · 2024-2025 (patch releases)</summary>
<summary>What's New in 1.16.1...1.16.13 · (patch releases)</summary>
- 1.16.13: Docker fix ([#840](https://github.com/enricoros/big-AGI/issues/840))
- 1.16.12: Dockerfile update ([#840](https://github.com/enricoros/big-AGI/issues/840))
- 1.16.11: v1 final release, documentation updates
- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
@@ -245,7 +249,7 @@ The new architecture is solid and the speed improvements are real.
- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
- Performance optimizations: runs [much faster](https://twitter.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
- Performance optimizations: runs [much faster](https://x.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And with more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements
@@ -313,7 +317,7 @@ For full details and former releases, check out the [archived versions changelog
## 👉 Supported Models & Integrations
Delightful UX with latest models exclusive features like Beam for **multi-model AI validation**.
> ![LLM Vendors](https://img.shields.io/badge/18_LLM_Services-500+_Models-black?style=for-the-badge&logo=openai&logoColor=white&labelColor=purple)&nbsp;
> ![LLM Vendors](https://img.shields.io/badge/20_LLM_Services-500+_Models-black?style=for-the-badge&logo=openai&logoColor=white&labelColor=purple)&nbsp;
> [![Feature Beam](https://img.shields.io/badge/AI--Validation-BEAM-000?style=for-the-badge&logo=anthropic&labelColor=purple)](https://big-agi.com/beam)
| ![Advanced AI](https://img.shields.io/badge/Advanced%20AI-32383e?style=for-the-badge&logo=ai&logoColor=white) | ![500+ AI Models](https://img.shields.io/badge/500%2B%20AI%20Models-32383e?style=for-the-badge&logo=ai&logoColor=white) | ![Flow-state UX](https://img.shields.io/badge/Flow--state%20UX-32383e?style=for-the-badge&logo=flow&logoColor=white) | ![Privacy First](https://img.shields.io/badge/Privacy%20First-32383e?style=for-the-badge&logo=privacy&logoColor=white) | ![Advanced Tools](https://img.shields.io/badge/Fun%20To%20Use-f22a85?style=for-the-badge&logo=tools&logoColor=white) |
@@ -324,16 +328,17 @@ Delightful UX with latest models exclusive features like Beam for **multi-model
### AI Models & Vendors
-Configure 100s of AI models from 18+ providers:
+Configure 100s of AI models from 20+ providers:
-| **AI models** | _supported vendors_ |
-|:--------------------|:------------------|
-| Opensource Servers | [LocalAI](https://localai.io/) · [Ollama](https://ollama.com/) |
-| Local Servers | [LM Studio](https://lmstudio.ai/) (non-open) |
-| Multimodal services | [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) · [Anthropic](https://anthropic.com) · [Google Gemini](https://ai.google.dev/) · [OpenAI](https://platform.openai.com/docs/overview) |
-| LLM services | [Alibaba](https://www.alibabacloud.com/en/product/modelstudio) · [DeepSeek](https://deepseek.com) · [Groq](https://wow.groq.com/) · [Mistral](https://mistral.ai/) · [Moonshot](https://www.moonshot.cn/) · [OpenPipe](https://openpipe.ai/) · [OpenRouter](https://openrouter.ai/) · [Perplexity](https://www.perplexity.ai/) · [Together AI](https://www.together.ai/) · [xAI](https://x.ai/) |
-| Image services | OpenAI · Google Gemini |
-| Speech services | [ElevenLabs](https://elevenlabs.io) · [Inworld](https://inworld.ai) · [OpenAI TTS](https://platform.openai.com/docs/guides/text-to-speech) · LocalAI · Browser (Web Speech API) |
+| **AI models** | _supported vendors_ |
+|:--------------------|:------------------|
+| Opensource Servers | [LocalAI](https://localai.io/) · [Ollama](https://ollama.com/) |
+| Local Servers | [LM Studio](https://lmstudio.ai/) (non-open) |
+| Multimodal services | [Anthropic](https://anthropic.com) · [AWS Bedrock](https://aws.amazon.com/bedrock/) · [Azure](https://azure.microsoft.com/en-us/products/ai-services/openai-service) · [Google Gemini](https://ai.google.dev/) · [OpenAI](https://platform.openai.com/docs/overview) |
+| LLM services | [Alibaba](https://www.alibabacloud.com/en/product/modelstudio) · [DeepSeek](https://deepseek.com) · [Groq](https://wow.groq.com/) · [Mistral](https://mistral.ai/) · [Moonshot](https://www.moonshot.cn/) · [OpenPipe](https://openpipe.ai/) · [OpenRouter](https://openrouter.ai/) · [Perplexity](https://www.perplexity.ai/) · [Together AI](https://www.together.ai/) · [xAI](https://x.ai/) · [Z.ai](https://z.ai/) |
+| OpenAI-compatible | Any OpenAI-compatible endpoint - models, pricing, and capabilities are auto-detected |
+| Image services | OpenAI · Google Gemini (Nano Banana) · LocalAI |
+| Speech services | [ElevenLabs](https://elevenlabs.io) · [Inworld](https://inworld.ai) · [OpenAI TTS](https://platform.openai.com/docs/guides/text-to-speech) · LocalAI · Browser (Web Speech API) |
### Additional Integrations
@@ -389,4 +394,4 @@ When you open an issue, our custom AI triage system (powered by [Claude Code](ht
MIT License · [Third-Party Notices](src/modules/3rdparty/THIRD_PARTY_NOTICES.md)
-**2023-2026** · Enrico Ros × [Big-AGI](https://big-agi.com)
+**2023-2026** · [Enrico Ros](https://www.enricoros.com) × [Token Fabrics](https://www.tokenfabrics.com)
@@ -9,4 +9,3 @@ services:
- "3000:3000"
env_file:
- .env
-command: [ "next", "start", "-p", "3000" ]
@@ -1,3 +1,7 @@
---
unlisted: true
---
# AIX dispatch server - API features comparison
This is updated as of 2024-07-09, and includes the latest features and capabilities of the three major AI APIs: Anthropic, Gemini, and OpenAI.
@@ -10,6 +10,8 @@ Essential guides:
- **[FAQ](help-faq.md)**: Common questions and answers
- **[Enabling Microphone](help-feature-microphone.md)**: Configure speech recognition in your browser
- **[Data Ownership](help-data-ownership.md)**: How your data is stored and managed
- **[Live File](help-feature-livefile.md)**: Live file attachment feature
## AI Services
@@ -21,18 +23,21 @@ How to set up AI models and features in big-AGI.
- Easy API key configuration:
[Alibaba](https://bailian.console.alibabacloud.com/?apiKey=1#/api-key),
[Anthropic](https://console.anthropic.com/settings/keys),
[AWS Bedrock](https://console.aws.amazon.com/bedrock/),
[Deepseek](https://platform.deepseek.com/api_keys),
[Google Gemini](https://aistudio.google.com/app/apikey),
[Groq](https://console.groq.com/keys),
[Mistral](https://console.mistral.ai/api-keys/),
[Moonshot](https://platform.moonshot.cn/console/api-keys),
[OpenAI](https://platform.openai.com/api-keys),
[OpenPipe](https://app.openpipe.ai/settings),
[Perplexity](https://www.perplexity.ai/settings/api),
[TogetherAI](https://api.together.xyz/settings/api-keys),
-[xAI](http://x.ai/api)
+[xAI](https://x.ai/api),
+[Z.ai](https://z.ai/)
- **[Azure OpenAI](config-azure-openai.md)** guide
- **FireworksAI** ([API keys](https://fireworks.ai/account/api-keys), via custom OpenAI endpoint: https://api.fireworks.ai/inference)
- **[OpenRouter](config-openrouter.md)** guide
- **OpenAI-compatible endpoints**: Any provider with an OpenAI-compatible API works out of the box - models, pricing, and capabilities are auto-detected
- **Local AI Integrations**:
@@ -42,8 +47,9 @@ How to set up AI models and features in big-AGI.
- **Enhanced AI Features**:
- **[Web Browsing](config-feature-browse.md)**: Enable web page download through third-party services or your own cloud
- **Web Search**: Google Search API (see '[Environment Variables](environment-variables.md)')
-- **Image Generation**: GPT Image (gpt-image-1), DALL·E 3 and 2
+- **Image Generation**: GPT Image (gpt-image-1), Nano Banana, DALL·E 3 and 2
- **Voice Synthesis**: ElevenLabs, Inworld, OpenAI TTS, LocalAI, or browser Web Speech API
- **[Google Drive](config-feature-google-drive.md)**: Attach files from Google Drive
## Deployment & Customization
@@ -60,8 +66,10 @@ For deploying a custom big-AGI instance:
- **Advanced Setup**:
- **[Source Code Customization](customizations.md)**: Modify the source code
- **[Access Control](deploy-authentication.md)**: Optional, add basic user authentication
- **[Database Setup](deploy-database.md)**: Optional, enables "Chat Link Sharing"
- **[Reverse Proxy](deploy-reverse-proxy.md)**: Optional, enables custom domains and SSL
- **[Docker Deployment](deploy-docker.md)**: Deploy with Docker containers
- **[Kubernetes](deploy-k8s.md)**: Deploy on Kubernetes clusters
- **[Analytics](deploy-analytics.md)**: Set up usage analytics
- **[Environment Variables](environment-variables.md)**: Pre-configures models and services
## Community & Support
@@ -20,8 +20,11 @@ by release.
- And all of the [Big-AGI 2 changes](https://github.com/enricoros/big-AGI/issues/567#issuecomment-2262187617) and more
- Built for the future, madly optimized
-### What's New in 1.16.1...1.16.9 · Jan 21, 2025 (patch releases)
+### What's New in 1.16.1...1.16.13 · (patch releases)
+- 1.16.13: Docker fix (#840)
+- 1.16.12: Dockerfile update (#840)
+- 1.16.11: v1 final release, documentation updates
+- 1.16.10: OpenRouter models support
- 1.16.9: Docker Gemini fix, R1 models support
- 1.16.8: OpenAI ChatGPT-4o Latest, o1 models support
@@ -70,7 +73,7 @@ by release.
- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
- **[LocalAI](https://localai.io/models/)** deep integration, including support for [model galleries](https://github.com/enricoros/big-AGI/issues/411)
- **Mistral** Large and Google **Gemini 1.5** support
-- Performance optimizations: runs [much faster](https://twitter.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
+- Performance optimizations: runs [much faster](https://x.com/enricoros/status/1756553038293303434?utm_source=localhost:3000&utm_medium=big-agi), saves lots of power, reduces memory usage
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And with more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements
- [Release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.14.0), and changes [v1.13.1...v1.14.0](https://github.com/enricoros/big-AGI/compare/v1.13.1...v1.14.0) (233 commits, 8,000+ lines changed)
@@ -41,6 +41,8 @@ In addition to using the UI, configuration can also be done using
### Integration: Models Gallery
> Note: The Gallery Admin feature described below may have been removed or renamed in recent versions of big-AGI.
If the running LocalAI instance is configured with a [Model Gallery](https://localai.io/models/):
- Go to Models > LocalAI
@@ -1,8 +1,7 @@
# OpenRouter Configuration
[OpenRouter](https://openrouter.ai) is a standalone, premium service
-that provides access to <Link href='https://openrouter.ai/docs#models' target='_blank'>exclusive AI models</Link>
-such as GPT-4 32k, Claude, and more. These models are typically not available to the public.
+that provides access to a wide range of AI models from multiple providers through a single API.
This document details the process of integrating OpenRouter with big-AGI.
### 1. OpenRouter Account Setup and API Key Generation
@@ -20,7 +19,7 @@ This document details the process of integrating OpenRouter with big-AGI.
![feature-openrouter-add.png](pixels/feature-openrouter-add.png)
3. Input the API key into the **OpenRouter API Key** field, and load the Models.
![feature-openrouter-configure.png](pixels/feature-openrouter-configure.png)
-4. OpenAI GPT4-32k and other models will now be accessible and selectable in the application.
+4. Models from all supported providers will now be accessible and selectable in the application.
In addition to using the UI, configuration can also be done using
[environment variables](environment-variables.md).
@@ -30,5 +29,5 @@ In addition to using the UI, configuration can also be done using
OpenRouter independently manages its service and pricing and is not affiliated with big-AGI.
For more detailed information, please visit [this page](https://openrouter.ai/docs#models).
-Please note that running large models such as GPT-4 32k can be costly and may rapidly consume
-credits - a single prompt may cost $1 or more, at the time of writing.
+Please note that running large models can be costly and may rapidly consume credits.
+Check model pricing on the OpenRouter website before use.
@@ -49,8 +49,8 @@ Edit the `src/data.ts` file to customize personas. This file houses the default
Adapt the UI to match your project's aesthetic, incorporate new features, or exclude unnecessary ones.
- [ ] Adjust `src/common/app.theme.ts` for theme changes: colors, spacing, button appearance, animations, etc
-- [ ] Modify `src/common/app.config.tsx` to alter the application's name
-- [ ] Update `src/common/app.nav.tsx` to revise the navigation bar
+- [ ] Modify `src/common/app.release.ts` to alter the application's name
+- [ ] Update `src/common/app.nav.ts` to revise the navigation bar
### Add a Message of the Day
@@ -71,7 +71,7 @@ Example: `NEXT_PUBLIC_MOTD=🚀 New features available in {{app_build_pkgver}}!
Test your application thoroughly using local development (refer to README.md for local build instructions). Deploy using your preferred hosting service. big-AGI supports deployment on platforms like Vercel, Docker, or any Node.js-compatible service, especially those supporting NextJS's "Edge Runtime."
-- [deploy-cloudflare.md](deploy-cloudflare.md): for Cloudflare Workers deployment
+- [deploy-cloudflare.md](deploy-cloudflare.md): for Cloudflare Pages deployment (limited support)
- [deploy-docker.md](deploy-docker.md): for Docker deployment instructions and examples
- [deploy-k8s.md](deploy-k8s.md): for Kubernetes deployment instructions and examples
@@ -51,13 +51,13 @@ Vercel Analytics and Speed Insights are local API endpoints deployed to your dom
domain. Furthermore, the Vercel Analytics service is privacy-friendly, and does not track individual users.
This service is available to system administrators and is automatically enabled when deploying to Vercel.
-The code that activates Vercel Analytics is located in the `src/pages/_app.tsx` file:
+The code that activates Vercel Analytics is located in the `pages/_app.tsx` file:
```tsx
const MyApp = ({ Component, emotionCache, pageProps }: MyAppProps) => <>
...
-  {isVercelFromFrontend && <VercelAnalytics debug={false} />}
-  {isVercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
+  {Is.Deployment.VercelFromFrontend && <VercelAnalytics debug={false} />}
+  {Is.Deployment.VercelFromFrontend && <VercelSpeedInsights debug={false} sampleRate={1 / 2} />}
...
</>;
```
@@ -1,18 +1,20 @@
---
unlisted: true
---
# Deploying a Next.js App on Cloudflare Pages
-> WARNING: Cloudflare Pages does not support traditional NodeJS runtimes, but only Edge Runtime functions.
+> WARNING: Cloudflare Pages only supports Edge Runtime functions, not the full Node.js runtime.
>
-> In this project we use Prisma connected to serverless Postgres, which at the moment cannot run on
-> edge functions, so we cannot deploy this project on Cloudflare Pages.
+> The cloud router in this project requires a Node.js runtime for Supabase SDK, authentication,
+> sync, and other server-side features that cannot run on Cloudflare's edge runtime.
>
-> Workaround: Step 3.4. has been added below, to DELETE the NodeJS traditional runtime - which means that some
+> Workaround: Step 3.4. has been added below, to DELETE the Node.js cloud router - which means that some
> parts of this application will not work.
-> - [Side effects](https://github.com/enricoros/big-agi/blob/main/src/apps/chat/trade/server/trade.router.ts#L19):
-> Sharing functionality to DB, and import from ChatGPT share, and post to Paste.GG will not work
+> - [Side effects](https://github.com/enricoros/big-agi/blob/main/src/modules/trade/server/trade.router.ts):
+> Sharing functionality, import from ChatGPT share, and post to Paste.GG will not work
+> - Cloud features (sync, auth, payments) will not be available
> - See [Issue 174](https://github.com/enricoros/big-agi/issues/174).
>
-> Longer term: follow [prisma/prisma: Support Edge Function deployments](https://github.com/prisma/prisma/issues/21394)
-> and convert the Node runtime to Edge runtime once Prisma supports it.
This guide provides steps to deploy your Next.js app on Cloudflare Pages.
It is based on the [official Cloudflare developer documentation](https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/),
@@ -19,7 +19,6 @@ services:
- .env
environment:
- PUPPETEER_WSS_ENDPOINT=ws://browserless:3000
-command: [ "next", "start", "-p", "3000" ]
depends_on:
- browserless
@@ -1,14 +0,0 @@
# Why big-AGI?
Placeholder for a document that demonstrates the productivity and unique features of Big-AGI.
## Exclusive features
- [x] Call AGI
- [x] Continuous Voice mode
- [x] Diagram generation
- [ ] ...
## Productivity Features
- [x] Multi-window to never wait
- [x] Multi-Chat to explore different solutions
- [x] Rendering of graphs, charts, mindmaps
- [ ] ...
@@ -3,7 +3,7 @@
This document provides an explanation of the environment variables used in the big-AGI application.
**All variables are optional**; and _UI options_ take precedence over _backend environment variables_,
-which take place over _defaults_. This file is kept in sync with [`../src/server/env.ts`](../src/server/env.ts).
+which take place over _defaults_. This file is kept in sync with [`../src/server/env.server.ts`](../src/server/env.server.ts).
### Setting Environment Variables
@@ -29,6 +29,11 @@ AZURE_OPENAI_API_ENDPOINT=
AZURE_OPENAI_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_API_HOST=
BEDROCK_BEARER_TOKEN=
BEDROCK_ACCESS_KEY_ID=
BEDROCK_SECRET_ACCESS_KEY=
BEDROCK_SESSION_TOKEN=
BEDROCK_REGION=
DEEPSEEK_API_KEY=
GEMINI_API_KEY=
GROQ_API_KEY=
@@ -100,7 +105,12 @@ requiring the user to enter an API key
| `AZURE_OPENAI_API_VERSION` | API version for traditional deployment-based endpoints | Optional, defaults to '2025-04-01-preview' |
| `AZURE_DEPLOYMENTS_API_VERSION` | API version for the deployments listing endpoint | Optional, defaults to '2023-03-15-preview' |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
-| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, to enable platforms such as AWS Bedrock | Optional |
+| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, for proxies or custom endpoints | Optional |
| `BEDROCK_BEARER_TOKEN` | Bedrock long-term API key (`ABSK...`). Takes priority over IAM credentials. Short-term keys only work for runtime, not model listing | Optional |
| `BEDROCK_ACCESS_KEY_ID` | AWS IAM Access Key ID for Bedrock (Claude models via AWS) | Optional, but if set `BEDROCK_SECRET_ACCESS_KEY` must also be set |
| `BEDROCK_SECRET_ACCESS_KEY` | AWS IAM Secret Access Key for Bedrock | Optional, but if set `BEDROCK_ACCESS_KEY_ID` must also be set |
| `BEDROCK_SESSION_TOKEN` | AWS Session Token for temporary/STS credentials | Optional |
| `BEDROCK_REGION` | AWS region for Bedrock (e.g., `us-east-1`, `us-west-2`, `eu-west-1`) | Optional, defaults to `us-east-1` |
| `DEEPSEEK_API_KEY` | The API key for Deepseek AI | Optional |
| `GEMINI_API_KEY` | The API key for Google AI's Gemini | Optional |
| `GROQ_API_KEY` | The API key for Groq Cloud | Optional |
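The bearer-token priority described in the table can be sketched as follows. This is an illustrative helper, not the actual server code; only the `BEDROCK_*` variable names come from the table above.

```typescript
// Hypothetical resolution of the BEDROCK_* variables: a long-term bearer
// token (ABSK...) takes priority over IAM credentials; with neither set,
// the Bedrock service stays unconfigured.
type BedrockAuth =
  | { kind: 'bearer'; token: string }
  | { kind: 'iam'; accessKeyId: string; secretAccessKey: string; sessionToken?: string }
  | { kind: 'none' };

function resolveBedrockAuth(env: Record<string, string | undefined>): BedrockAuth {
  if (env.BEDROCK_BEARER_TOKEN)
    return { kind: 'bearer', token: env.BEDROCK_BEARER_TOKEN };
  if (env.BEDROCK_ACCESS_KEY_ID && env.BEDROCK_SECRET_ACCESS_KEY)
    return {
      kind: 'iam',
      accessKeyId: env.BEDROCK_ACCESS_KEY_ID,
      secretAccessKey: env.BEDROCK_SECRET_ACCESS_KEY,
      sessionToken: env.BEDROCK_SESSION_TOKEN, // optional STS token
    };
  return { kind: 'none' };
}
```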
@@ -1,3 +1,7 @@
---
unlisted: true
---
# Big-AGI Advanced Tips & Tricks
> 🚨 This file is not meant for publication, and it's just been created as a handbook with tips
@@ -30,6 +30,12 @@ You can see your data in your browser's local storage and IndexedDB - try it you
![Browser local storage showing API keys and chat data](pixels/data_ownership_local_storage.png)
### Sync for Authenticated Users
Users with accounts on big-agi.com who opt into Sync (a Pro feature) have their entity data - such as conversations and personas - replicated to the server for multi-device access.
Server-side data is isolated per-user using Row Level Security (RLS), ensuring that no other user can access your synced data.
Sync is entirely optional; without it, all data remains local to your browser.
### What This Means For You
Storing data in your browser means:
@@ -43,7 +49,7 @@ Storing data in your browser means:
Big-AGI generates a _device identifier_ that combines timestamp and random components, stored only on your device. This identifier:
-- Is used only for the **optional sync functionality** between your devices (not yet ready)
+- Is used only for the **optional sync functionality** between your devices
- Helps maintain data consistency when using Big-AGI across multiple devices
- Remains completely local unless you explicitly enable sync
- Is not used for tracking, analytics, or telemetry
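An identifier of this shape could be generated as sketched below. This is illustrative only, not Big-AGI's actual implementation; the fallback branch reflects the documented case where the Web Crypto API is unavailable (e.g. non-HTTPS deployments).

```typescript
// Illustrative device-identifier generator: a timestamp component plus a
// random component, matching the description above.
function generateDeviceId(): string {
  const ts = Date.now().toString(36); // timestamp component
  const rnd =
    typeof globalThis.crypto?.randomUUID === 'function'
      ? globalThis.crypto.randomUUID().slice(0, 8) // preferred: crypto-quality randomness
      : Math.random().toString(36).slice(2, 10);   // non-cryptographic fallback
  return `${ts}-${rnd}`;
}
```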
@@ -7,7 +7,7 @@ process for your own instance of big-AGI and related products.
**Try big-AGI** - You don't need to install anything if you want to play with big-AGI
and have your API keys to various model services. You can access our free instance on [big-AGI.com](https://big-agi.com).
-The free instance runs the latest `main-stable` branch from this repository.
+The free instance runs the latest `main` branch from this repository.
## 🧩 Build-your-own
@@ -72,9 +72,8 @@ Create your GitHub fork, create a Vercel project over that fork, and deploy it.
### Deploy on Cloudflare
-Deploy on Cloudflare's global network by installing big-AGI on
-Cloudflare Pages. Check out the [Cloudflare Installation Guide](deploy-cloudflare.md)
-for step-by-step instructions.
+> Note: Cloudflare Pages deployment has limitations due to Edge Runtime constraints.
+> See the [Cloudflare guide](deploy-cloudflare.md) for details and known issues.
### Docker Deployments
@@ -146,6 +145,6 @@ Enjoy all the features of big-AGI without the hassle of infrastructure managemen
Join our vibrant community of developers, researchers, and AI enthusiasts. Share your projects, get help, and collaborate with others.
- [Discord Community](https://discord.gg/MkH4qj2Jp9)
-- [Twitter](https://twitter.com/enricoros)
+- [X (Twitter)](https://x.com/enricoros)
For any questions or inquiries, please don't hesitate to [reach out to our team](mailto:hello@big-agi.com).
@@ -1,3 +1,7 @@
---
unlisted: true
---
# ReAct: question answering with Reasoning and Actions
## What is ReAct?
@@ -1,13 +1,13 @@
-# Knowledge Base
+## Knowledge Base
-Internal documentation for Big-AGI architecture and systems, for use by AI agents and developers.
+Architecture and system documentation is available in the `/kb/` knowledge base, for use by AI agents and developers.
**Structure:**
- `/kb/KB.md` - Already in context: this text
- `/kb/vision-inlined.md` - Already in context (next section): long-term vision and north stars
- `/kb/modules/` - Core business logic (e.g. AIX)
- `/kb/systems/` - Infrastructure (routing, startup)
## Index
### Modules Documentation
#### AIX - AI Communication Framework
@@ -22,17 +22,18 @@ Internal documentation for Big-AGI architecture and systems, for use by AI agent
#### Core Platform Systems
- **[app-routing.md](systems/app-routing.md)** - Next.js routing, provider stack, and display state hierarchy
- **[LLM-parameters-system.md](systems/LLM-parameters-system.md)** - Language model parameter flow across the system
- **[LLM-vendor-integration.md](modules/LLM-vendor-integration.md)** - Adding new LLM providers
-## Guidelines
+### KB Guidelines
-### Writing Style
+#### Writing Style
- **Direct and factual** - No marketing language
- **Present tense** - "AIX handles streaming" not "AIX will handle"
- **Active voice** - "The system processes" not "Processing is done by"
- **Concrete examples** - Show actual code/config when helpful, briefly
-### Maintenance
+#### Maintenance
-- Remove outdated information when detected!
+- Remove outdated knowledge base information when detected
- Keep cross-references current when files move
@@ -7,8 +7,8 @@ This document analyzes all AIX function callers and their patterns for message r
### Three-Tier Call Hierarchy
**Core AIX Functions** (Direct tRPC API callers):
-- `aixChatGenerateContent_DMessage_FromConversation` - 8 callers (conversation streaming)
-- `aixChatGenerateContent_DMessage` - 6 callers (direct request/response)
+- `aixChatGenerateContent_DMessage_FromConversation` - 9 callers (conversation streaming)
+- `aixChatGenerateContent_DMessage_orThrow` - 6 callers (direct request/response)
- `aixChatGenerateText_Simple` - 12 callers (text-only utilities)
**Utility Layer** (Hooks & Functions):
@@ -24,6 +24,7 @@ This document analyzes all AIX function callers and their patterns for message r
| **Caller** | **Context** | **Message Removal** | **Placeholder** | **Error Handling** |
|------------|-------------|-------------------|----------------|-------------------|
| **Chat Persona** | `'conversation'` | `messageWasInterruptedAtStart()` → `removeMessage()` | None | Error fragments |
| **XE Chat Generate** | `'conversation'` | `messageWasInterruptedAtStart()` → `removeMessage()` | `'...'` placeholder | Error fragments via messageEditor |
| **Beam Scatter** | `'beam-scatter'` | `messageWasInterruptedAtStart()` → empty message | `SCATTER_PLACEHOLDER` | Ray status update |
| **Beam Gather** | `'beam-gather'` | `messageWasInterruptedAtStart()` → clear fragments | `GATHER_PLACEHOLDER` | Re-throw errors |
| **Beam Follow-up** | `'beam-followup'` | `messageWasInterruptedAtStart()` → remove message | `FOLLOWUP_PLACEHOLDER` | Status updates |
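The shared shape of the removal patterns in the table can be sketched as follows. `finalizeInterrupted` and its callback parameters are hypothetical; only the `messageWasInterruptedAtStart()` → `removeMessage()` flow comes from the table above.

```typescript
// Illustrative shape of the common pattern: if generation was interrupted
// before any content arrived, remove the pending message; otherwise keep
// the partial content and attach an error fragment.
type PendingMessage = { id: string; fragments: string[] };

function finalizeInterrupted(
  msg: PendingMessage,
  removeMessage: (id: string) => void,
  appendErrorFragment: (id: string, text: string) => void,
): void {
  const interruptedAtStart = msg.fragments.length === 0; // stand-in for messageWasInterruptedAtStart()
  if (interruptedAtStart) removeMessage(msg.id);
  else appendErrorFragment(msg.id, 'Generation interrupted');
}
```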
@@ -37,6 +37,7 @@ Built with tRPC, it manages the lifecycle of AI-generated content from request t
| Perplexity | ✅ | ❌ (rejected) | | ✅ | Yes + 📦 | |
| TogetherAI | ✅ | ✅ | | ✅ | Yes + 📦 | |
| xAI | | | | | | |
| Z.ai | ✅ | ✅ | Img: ✅ | ✅ | Yes + 📦 | Thinking mode |
| Ollama (2) | ❌ (broken) | ? | | | | |
Notes:
@@ -91,12 +92,12 @@ AIX is organized into the following files and folders:
- Dispatch (`/server/dispatch/`) - Server to AI Provider communication:
- `/server/dispatch/chatGenerate/`: Content Generation with chat-style inputs:
-- `./adapters/`: Adapters for creating API requests for different AI protocols (Anthropic, Gemini, OpenAI).
-- `./parsers/`: Parsers for parsing streaming/non-streamin responses from different AI protocols (same 3).
+- `./adapters/`: Adapters for creating API requests for different AI protocols (Anthropic, Bedrock, Gemini, OpenAI Chat Completions, OpenAI Responses, xAI Responses).
+- `./parsers/`: Parsers for parsing streaming/non-streaming responses from different AI protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, OpenAI Responses).
- `chatGenerate.dispatch.ts`: Creates a pipeline to execute Chat Generation to a specific provider.
- `ChatGenerateTransmitter.ts`: Used to serialize and transmit AixWire_Particles to the client.
- `/server/dispatch/wiretypes/`: AI provider Wire Types:
-- Type definitions for different AI providers/protocols (Anthropic, Gemini, OpenAI).
+- Type definitions for different AI providers/protocols (Anthropic, Bedrock Converse, Gemini, OpenAI, xAI).
- `stream.demuxers.ts`: Handles demuxing of different stream formats.
## 3. Architecture Diagram
@@ -159,7 +160,7 @@ sequenceDiagram
AIX Client ->> AIX Client: Display error message
else DMessageDocPart
AIX Client ->> AIX Client: Process and display document
-else DMetaPlaceholderPart
+else DVoidPlaceholderPart
AIX Client ->> AIX Client: Handle placeholder (non-submitted)
end
end
@@ -0,0 +1,126 @@
# LLM Vendor Integration Guide
How to add support for new LLM providers in Big-AGI. There are two integration paths, and
the dynamic backend path is strongly preferred for new vendors.
## Integration Paths
### Path 1: Dynamic Backend (preferred)
For any provider with an **OpenAI-compatible API** (which is nearly all new providers).
**Surface area**: 1-2 files, no UI changes, no registry changes.
A dynamic backend provides:
- Hostname-based auto-detection when the user adds the provider's API URL
- Automatic model list parsing with vendor-specific metadata (pricing, context windows, capabilities)
- Zero UI code - uses the existing "Custom OpenAI-compatible" service setup
**Files touched**:
- `src/modules/llms/server/openai/models/{vendor}.models.ts` (required) - model definitions + hostname heuristic
- `src/modules/llms/server/openai/wiretypes/{vendor}.wiretypes.ts` (optional) - Zod schemas for vendor-specific wire format
- `src/modules/llms/server/listModels.dispatch.ts` - add heuristic to the detection chain (2 lines)
**What the model file must export**:
```typescript
// 1. Hostname heuristic - returns true when the user's API URL matches this vendor
export function vendorHeuristic(hostname: string): boolean {
return hostname.includes('.vendor-domain.com');
}
// 2. Model converter - transforms vendor's /v1/models response to ModelDescriptionSchema[]
export function vendorModelsToModelDescriptions(wireModels: unknown): ModelDescriptionSchema[] {
// Parse wire format, map to ModelDescriptionSchema with:
// - id, label, description
// - contextWindow, maxCompletionTokens
// - interfaces (Chat, Vision, Fn, Reasoning, etc.)
// - chatPrice (input/output per token)
// - parameterSpecs (temperature, etc.)
}
```
**Existing examples**: `novita.models.ts`, `chutesai.models.ts`, `fireworksai.models.ts`
A matching vendor icon MUST also be provided, following the existing icons in `src/common/components/icons/vendors/`.
Make sure all the information is available in case these vendors are later promoted to full registered vendors.
### Path 2: Registered Vendor (heavyweight, discouraged for new providers)
Full first-class integration with dedicated UI, own dialect, and registry entry. Reserved for
providers with **non-OpenAI protocols** (Anthropic, Gemini, Ollama) or providers with enough
user demand to warrant a dedicated setup flow.
**Surface area**: 5+ files across 3 directories.
**Files touched**:
- `src/modules/llms/vendors/{vendor}/{vendor}.vendor.ts` - IModelVendor implementation
- `src/modules/llms/vendors/{vendor}/{VendorName}ServiceSetup.tsx` - React UI setup component
- `src/modules/llms/vendors/vendors.registry.ts` - registry entry + ModelVendorId union
- `src/modules/llms/server/openai/models/{vendor}.models.ts` - model definitions
- `src/modules/llms/server/listModels.dispatch.ts` - dispatch case
- Possibly server protocol adapter if not OpenAI-compatible
- Possibly more files, e.g. wires, etc.
- See existing providers and commits that added them for full scope
**When to use this path**: Only when the provider has a meaningfully different API protocol
(not OpenAI-compatible), or when there is significant user demand AND the provider offers
unique capabilities that benefit from dedicated UI (e.g., Ollama's local model management).
When using this path, please add links to upstream documentation. Make sure all constants
are correctly handled everywhere, especially for provider-based switches.
## Decision Criteria
| Question | Dynamic | Registered |
|----------|---------|------------|
| OpenAI-compatible API? | Yes - use dynamic | Only if not OAI-compatible |
| Needs custom auth UI? | No - uses generic fields | Yes - custom setup form |
| Unique protocol? | No | Yes (Anthropic, Gemini, Ollama) |
| User demand level | Any | High + sustained |
| Maintenance burden | Minimal | Significant (5+ files) |
## For External Contributors / Vendor Requests
When vendors or community members request integration via GitHub issues:
1. **Point them to the dynamic backend path** - it's faster to implement, review, and maintain
2. **Requirements for a dynamic backend PR**:
- Model file with heuristic + converter exporting `ModelDescriptionSchema[]`
- Wire types if the vendor's `/v1/models` response has non-standard fields
- Vendor icon (SVG preferred) in `src/common/components/icons/vendors/`
- Two-line addition to the heuristic chain in `listModels.dispatch.ts`
3. **Do not accept**: New registered vendors for OpenAI-compatible providers. The maintenance
cost of a full vendor (UI component, registry entry, dispatch case) is not justified when
dynamic detection achieves the same result with a fraction of the code.
## Architecture Notes
### How Dynamic Detection Works
In `listModels.dispatch.ts`, the `case 'openai':` handler:
1. Fetches `/v1/models` from the user-provided API host
2. Runs the hostname through a chain of heuristics (in order)
3. First matching heuristic's converter is used to parse models
4. Falls back to stock OpenAI parsing if no heuristic matches
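The four steps above can be sketched as follows. The chain entries, parser stubs, and `detectAndParse` itself are hypothetical illustrations, not the actual `listModels.dispatch.ts` code.

```typescript
// Minimal sketch of the dynamic detection chain described above.
type ModelDescription = { id: string; label: string };

// Stand-in parser: maps an OpenAI-style /v1/models payload to descriptions.
const asIds = (w: unknown): ModelDescription[] =>
  (w as { data: { id: string }[] }).data.map((m) => ({ id: m.id, label: m.id }));

// Each entry pairs a hostname heuristic with its model converter.
const heuristicChain = [
  { matches: (h: string) => h.includes('.novita.ai'), convert: asIds },
  { matches: (h: string) => h.includes('.fireworks.ai'), convert: asIds },
];

function detectAndParse(apiUrl: string, wireModels: unknown): ModelDescription[] {
  const hostname = new URL(apiUrl).hostname;                    // parse the URL properly
  const hit = heuristicChain.find((h) => h.matches(hostname));  // first match wins
  return hit ? hit.convert(wireModels) : asIds(wireModels);     // stock OpenAI fallback
}
```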
### Hostname Security
Hostname matching uses `llmsHostnameMatches()` from `openai.access.ts` which parses the
URL properly to prevent DNS spoofing. Always use `.includes()` on the parsed hostname,
never on the raw URL string.
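A hypothetical equivalent of that check, for illustration only (the real implementation lives in `llmsHostnameMatches()` in `openai.access.ts`):

```typescript
// Match on the parsed hostname, never on the raw URL string.
function hostnameMatches(apiUrl: string, needle: string): boolean {
  try {
    return new URL(apiUrl).hostname.includes(needle);
  } catch {
    return false; // not a valid URL
  }
}
```

A naive `.includes()` on the full URL would also match a URL that only carries the vendor domain in its query string, which the parsed check rejects.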
### Key Types
- `ModelDescriptionSchema` (`llm.server.types.ts`) - output type for all model converters
- `DModelInterfaceV1` (`llms.types.ts`) - capability flags (Chat, Vision, Fn, Reasoning, etc.)
- `IModelVendor` (`vendors/IModelVendor.ts`) - interface for registered vendors only
- `ManualMappings` / `KnownModel` (`models.mappings.ts`) - server-side model patches
### File Locations
- Dynamic backends: `src/modules/llms/server/openai/models/`
- Wire types: `src/modules/llms/server/openai/wiretypes/`
- Dispatch: `src/modules/llms/server/listModels.dispatch.ts`
- Registered vendors: `src/modules/llms/vendors/*/`
- Vendor icons: `src/common/components/icons/vendors/`
- Type definitions: `src/modules/llms/server/llm.server.types.ts`
+7 -18
@@ -13,12 +13,9 @@ The LLM parameters system operates across five layers that transform parameters
The `DModelParameterRegistry` defines all available parameters with their constraints and metadata. Each parameter includes type information, validation rules, and default behavior.
**Example**: `llmVndOaiReasoningEffort4` defines a 4-value enum with 'medium' as the required fallback.
**Default Value System**: The registry supports multiple default mechanisms:
- `initialValue` - Parameter's base default (e.g., `llmVndOaiRestoreMarkdown: true`)
- `requiredFallback` - Fallback for required parameters (e.g., `llmTemperature: 0.5`)
- `nullable` - Parameters that can be explicitly null to skip API transmission
- `initialValue` - Parameter's base default (e.g., `llmVndOaiRestoreMarkdown: true`)
### Layer 2: Model Specifications
**File**: `src/modules/llms/server/llm.server.types.ts`
@@ -27,7 +24,6 @@ Models declare which parameters they support through `parameterSpecs` arrays. Ea
```typescript
parameterSpecs: [
{ paramId: 'llmVndOaiReasoningEffort4' },
{ paramId: 'llmVndAntThinkingBudget', initialValue: 1024 }, // Override default
{ paramId: 'llmVndGeminiThinkingBudget', rangeOverride: [0, 8192] }, // Custom range
]
@@ -51,20 +47,14 @@ Shows only parameters that are:
- Not marked as `hidden`
**Value Resolution**: Both UIs use `getAllModelParameterValues()` to merge:
1. **Fallback values** - Required parameters get their `requiredFallback` values
1. **Fallback values** - Implicit parameters get their `LLMImplicitParametersRuntimeFallback` values
2. **Initial values** - Model's `initialParameters` (populated during model creation)
3. **User values** - User's `userParameters` (highest priority)
### Layer 4: AIX Translation
**File**: `src/modules/aix/client/aix.client.ts`
The AIX client transforms DLLM parameters to wire protocol format. This layer handles parameter precedence rules and name transformations:
```
// Parameter precedence: newer 4-value version takes priority over 3-value
...((llmVndOaiReasoningEffort4 || llmVndOaiReasoningEffort) ?
{ vndOaiReasoningEffort: llmVndOaiReasoningEffort4 || llmVndOaiReasoningEffort } : {})
```
The AIX client transforms DLLM parameters to wire protocol format. This layer handles parameter precedence rules and name transformations.
**Client Options**: The system supports parameter overrides through `llmOptionsOverride` and complete replacement via `llmUserParametersReplacement`.
@@ -73,7 +63,7 @@ The AIX client transforms DLLM parameters to wire protocol format. This layer ha
Server-side adapters translate AIX parameters to vendor APIs. Each vendor may interpret parameters differently:
- **OpenAI**: `vndOaiReasoningEffort` → `reasoning_effort`
- **OpenAI**: `vndEffort` -> `reasoning_effort`
- **Perplexity**: Reuses OpenAI parameter format
- **OpenAI Responses API**: Maps to structured reasoning config with additional logic
@@ -81,8 +71,8 @@ Server-side adapters translate AIX parameters to vendor APIs. Each vendor may in
When a model is loaded:
1. **Model Creation**: `modelDescriptionToDLLM()` creates the DLLM with empty `initialParameters`
2. **Initial Value Application**: `applyModelParameterInitialValues()` populates initial values from:
1. **Model Creation**: `_createDLLMFromModelDescription()` creates the DLLM with empty `initialParameters`
2. **Initial Value Application**: `applyModelParameterSpecsInitialValues()` populates initial values from:
- Model spec `initialValue` (highest priority)
- Registry `initialValue` (fallback)
3. **Runtime Resolution**: `getAllModelParameterValues()` creates final parameter set:
@@ -117,7 +107,6 @@ Some vendors use model variants to enable features, for instance:
## Migration and Compatibility
The architecture supports parameter evolution:
- **Version Coexistence**: Both `llmVndOaiReasoningEffort` and `llmVndOaiReasoningEffort4` exist simultaneously
- **Precedence Rules**: Newer parameters take priority during AIX translation
- **Graceful Degradation**: Unknown parameters log warnings but don't break functionality
@@ -128,4 +117,4 @@ The architecture supports parameter evolution:
- **UI Controls**: `src/modules/llms/models-modal/LLMParametersEditor.tsx`
- **AIX Translation**: `src/modules/aix/client/aix.client.ts`
- **Wire Types**: `src/modules/aix/server/api/aix.wiretypes.ts`
- **Vendor Adapters**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`
- **Vendor Adapters**: `src/modules/aix/server/dispatch/chatGenerate/adapters/*.ts`
+1 -1
@@ -6,7 +6,7 @@ Client-Side Fetch (CSF) enables direct browser-to-API communication, bypassing t
CSF is implemented as an opt-in setting stored as `csf: boolean` in each vendor's service settings. The vendor interface exposes `csfAvailable?: (setup) => boolean` to determine if CSF can be enabled (typically checking if an API key or host is configured). The actual execution happens in `aix.client.direct-chatGenerate.ts` which dynamically imports when CSF is active, making direct fetch calls using the same wire protocols as the server.
All 16 supported vendors (OpenAI, Anthropic, Gemini, Ollama, LocalAI, Deepseek, Groq, Mistral, xAI, OpenRouter, Perplexity, Together AI, Alibaba, Moonshot, OpenPipe, LM Studio) support CSF. Cloud vendors require CORS support from the API provider (all tested vendors return `access-control-allow-origin: *`). Local vendors (Ollama, LocalAI, LM Studio) require CORS to be enabled on the local server.
All 20+ supported vendors (OpenAI, Anthropic, Gemini, Ollama, LocalAI, Deepseek, Groq, Mistral, xAI, OpenRouter, Perplexity, Together AI, Alibaba, Moonshot, OpenPipe, LM Studio, Z.ai, Azure, Bedrock) support CSF. Cloud vendors require CORS support from the API provider (all tested vendors return `access-control-allow-origin: *`). Local vendors (Ollama, LocalAI, LM Studio) require CORS to be enabled on the local server.
## UI
+3
@@ -0,0 +1,3 @@
## Strategic Vision
If provided, the following influences the long-term vision, product and architectural goals/north stars for Big-AGI.
+662 -136
File diff suppressed because it is too large
+16 -12
@@ -1,8 +1,9 @@
{
"name": "big-agi",
"version": "2.0.3",
"version": "2.0.4",
"private": true,
"author": "Enrico Ros <enrico.ros@gmail.com>",
"author": "Enrico Ros <enrico@big-agi.com> (https://www.enricoros.com)",
"homepage": "https://big-agi.com",
"repository": "https://github.com/enricoros/big-agi",
"scripts": {
"dev": "next dev --turbopack",
@@ -12,6 +13,7 @@
"start": "next start",
"lint": "next lint",
"postinstall": "prisma generate --no-hints",
"gen:icon-sprites": "node tools/develop/gen-icon-sprites/generate-llm-sprites.ts",
"db:push": "prisma db push",
"db:studio": "prisma studio",
"vercel:env:pull": "npx vercel env pull .env.development.local",
@@ -34,14 +36,15 @@
"@mui/joy": "^5.0.0-beta.52",
"@next/bundle-analyzer": "~15.1.12",
"@prisma/client": "~5.22.0",
"@tanstack/react-query": "5.90.10",
"@tanstack/react-virtual": "^3.13.18",
"@tanstack/react-query": "5.90.21",
"@tanstack/react-virtual": "^3.13.22",
"@trpc/client": "11.5.1",
"@trpc/next": "11.5.1",
"@trpc/react-query": "11.5.1",
"@trpc/server": "11.5.1",
"@vercel/analytics": "^1.6.1",
"@vercel/speed-insights": "^1.3.1",
"aws4fetch": "^1.0.20",
"browser-fs-access": "^0.38.0",
"cheerio": "^1.1.2",
"csv-stringify": "^6.6.0",
@@ -55,13 +58,13 @@
"next": "~15.1.12",
"nprogress": "^0.2.0",
"pdfjs-dist": "5.4.54",
"posthog-js": "^1.336.4",
"posthog-node": "^5.24.7",
"posthog-js": "^1.360.2",
"posthog-node": "^5.28.2",
"prismjs": "^1.30.0",
"puppeteer-core": "^24.36.1",
"puppeteer-core": "^24.39.1",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-hook-form": "^7.71.1",
"react-hook-form": "^7.71.2",
"react-markdown": "^10.1.0",
"react-player": "^3.4.0",
"react-resizable-panels": "^3.0.6",
@@ -80,10 +83,10 @@
},
"devDependencies": {
"@posthog/nextjs-config": "~1.6.4",
"@types/node": "^25.1.0",
"@types/node": "^25.5.0",
"@types/nprogress": "^0.2.3",
"@types/prismjs": "^1.26.5",
"@types/react": "^19.2.10",
"@types/prismjs": "^1.26.6",
"@types/react": "^19.2.14",
"@types/react-csv": "^1.1.10",
"@types/react-dom": "^19.2.3",
"@types/turndown": "^5.0.6",
@@ -92,9 +95,10 @@
"eslint-config-next": "~15.1.12",
"prettier": "^3.8.1",
"prisma": "~5.22.0",
"tsx": "^4.21.0",
"typescript": "^5.9.3"
},
"engines": {
"node": "^26.0.0 || ^24.0.0 || ^22.0.0 || ^20.0.0"
"node": "^24.0.0 || ^22.0.0 || ^20.0.0"
}
}
+19 -3
@@ -37,14 +37,30 @@ export default function MyDocument({ emotionStyleTags }: MyDocumentProps) {
<meta property='og:site_name' content={Brand.Meta.SiteName} />
<meta property='og:type' content='website' />
{/* Twitter */}
<meta property='twitter:card' content='summary_large_image' />
{/* Twitter / X */}
<meta name='twitter:card' content='summary_large_image' />
<meta property='twitter:url' content={Brand.URIs.Home} />
<meta property='twitter:title' content={Brand.Title.Common} />
<meta property='twitter:description' content={Brand.Meta.Description} />
{Brand.URIs.CardImage && <meta property='twitter:image' content={Brand.URIs.CardImage} />}
<meta name='twitter:site' content={Brand.Meta.TwitterSite} />
<meta name='twitter:card' content='summary_large_image' />
<meta name='twitter:creator' content='@enricoros' />
{/* Author & Structured Data */}
<meta name='author' content='Enrico Ros' />
<link rel='author' href='https://www.enricoros.com' />
<script type='application/ld+json' dangerouslySetInnerHTML={{ __html: JSON.stringify({
'@context': 'https://schema.org',
'@type': 'SoftwareApplication',
'name': 'Big-AGI',
'url': 'https://big-agi.com',
'applicationCategory': 'ProductivityApplication',
'operatingSystem': 'All, Web',
'description': Brand.Meta.Description,
'sameAs': ['https://github.com/enricoros/big-agi', 'https://discord.gg/MkH4qj2Jp9',],
'author': { '@type': 'Person', 'name': 'Enrico Ros', 'url': 'https://www.enricoros.com' },
'publisher': { '@type': 'Organization', 'name': 'Token Fabrics LLC', 'url': 'https://www.tokenfabrics.com' },
}) }} />
{/* Style Sheets (injected and server-side) */}
<meta name='emotion-insertion-point' content='' />
+1 -1
@@ -3,7 +3,7 @@
"short_name": "big-AGI",
"theme_color": "#32383E",
"background_color": "#9FA6AD",
"description": "Your Generative AI Suite",
"description": "Open-source AI workspace. Multi-model reasoning and personas for maximum control.",
"categories": [
"productivity",
"AI",
+3 -4
@@ -22,7 +22,6 @@ import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
import { Link } from '~/common/components/Link';
import { OptimaPanelGroupedList } from '~/common/layout/optima/panel/OptimaPanelGroupedList';
import { OptimaPanelIn, OptimaToolbarIn } from '~/common/layout/optima/portals/OptimaPortalsIn';
import { PhVoice } from '~/common/components/icons/phosphor/PhVoice';
import { SpeechResult, useSpeechRecognition } from '~/common/components/speechrecognition/useSpeechRecognition';
import { clipboardInterceptCtrlCForCleanup } from '~/common/util/clipboardUtils';
import { conversationTitle, remapMessagesSysToUsr } from '~/common/stores/chat/chat.conversation';
@@ -31,7 +30,7 @@ import { createErrorContentFragment } from '~/common/stores/chat/chat.fragments'
import { launchAppChat, navigateToIndex } from '~/common/app.routes';
import { useChatStore } from '~/common/stores/chat/store-chats';
import { useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { usePlayUrl } from '~/common/util/audio/usePlayUrl';
import { usePlayUrlInterval } from './state/usePlayUrlInterval';
import type { AppCallIntent } from './AppCall';
import { CallAvatar } from './components/CallAvatar';
@@ -128,11 +127,11 @@ export function Telephone(props: {
// pickup / hangup
React.useEffect(() => {
!isRinging && AudioPlayer.playUrl(isConnected ? '/sounds/chat-begin.mp3' : '/sounds/chat-end.mp3');
!isRinging && void AudioPlayer.playUrl(isConnected ? '/sounds/chat-begin.mp3' : '/sounds/chat-end.mp3').catch(() => {/* autoplay may be blocked */});
}, [isRinging, isConnected]);
// ringtone
usePlayUrl(isRinging ? '/sounds/chat-ringtone.mp3' : null, 300, 2800 * 2);
usePlayUrlInterval(isRinging ? '/sounds/chat-ringtone.mp3' : null, 300, 2800 * 2);
/// Shortcuts
@@ -1,4 +1,5 @@
import * as React from 'react';
import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
@@ -8,15 +9,16 @@ import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
* @param firstDelay The delay before the first play, in milliseconds.
* @param repeatMs The delay between each repeat, in milliseconds. If 0, the sound will only play once.
*/
export function usePlayUrl(url: string | null, firstDelay: number = 0, repeatMs: number = 0) {
export function usePlayUrlInterval(url: string | null, firstDelay: number = 0, repeatMs: number = 0) {
React.useEffect(() => {
if (!url) return;
const abortController = new AbortController();
let timer2: any = null;
const playFirstTime = () => {
const playAudio = () => AudioPlayer.playUrl(url);
void playAudio();
const playAudio = () => void AudioPlayer.playUrl(url, abortController.signal).catch(() => {/* autoplay may be blocked */});
playAudio();
timer2 = repeatMs > 0 ? setInterval(playAudio, repeatMs) : null;
};
@@ -24,8 +26,8 @@ export function usePlayUrl(url: string | null, firstDelay: number = 0, repeatMs:
return () => {
clearTimeout(timer1);
if (timer2)
clearInterval(timer2);
timer2 && clearInterval(timer2);
abortController?.abort();
};
}, [firstDelay, repeatMs, url]);
}
+6 -10
@@ -4,8 +4,6 @@ import { Panel, PanelGroup, PanelResizeHandle } from 'react-resizable-panels';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, useTheme } from '@mui/joy';
import { DEV_MODE_SETTINGS } from '../settings-modal/UxLabsSettings';
import type { DiagramConfig } from '~/modules/aifn/digrams/DiagramsModal';
import type { TradeConfig } from '~/modules/trade/TradeModal';
import { downloadSingleChat, importConversationsFromFilesAtRest, openConversationsAtRestPicker } from '~/modules/trade/trade.client';
@@ -32,7 +30,7 @@ import { createErrorContentFragment, createTextContentFragment, DMessageAttachme
import { gcChatImageAssets } from '~/common/stores/chat/chat.gc';
import { getChatLLMId } from '~/common/stores/llms/store-llms';
import { getConversation, getConversationSystemPurposeId, useConversation } from '~/common/stores/chat/store-chats';
import { optimaActions, optimaOpenModels, optimaOpenPreferences } from '~/common/layout/optima/useOptima';
import { optimaActions, optimaOpenModels, optimaOpenPreferences, useOptimaChromeless } from '~/common/layout/optima/useOptima';
import { useFolderStore } from '~/common/stores/folders/store-chat-folders';
import { useIsMobile, useIsTallScreen } from '~/common/components/useMatchMedia';
import { useLLM } from '~/common/stores/llms/llms.hooks';
@@ -40,8 +38,6 @@ import { useModelDomain } from '~/common/stores/llms/hooks/useModelDomain';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
import { useRouterQuery } from '~/common/app.routes';
import { useUIComplexityIsMinimal } from '~/common/stores/store-ui';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
import { ChatPane } from './components/layout-pane/ChatPane';
import { ChatBarBeam } from './components/layout-bar/ChatBarBeam';
import { ChatBarAltTitle } from './components/layout-bar/ChatBarAltTitle';
@@ -151,8 +147,6 @@ export function AppChat() {
const intent = useRouterQuery<Partial<AppChatIntent>>();
const showAltTitleBar = useUXLabsStore(state => DEV_MODE_SETTINGS && state.labsChatBarAlt === 'title');
const { domainModelId: chatLLMId } = useModelDomain('primaryChat');
const chatLLM = useLLM(chatLLMId) ?? null;
@@ -215,7 +209,8 @@ export function AppChat() {
});
// Composer Auto-hiding
const forceComposerHide = !!beamOpenStoreInFocusedPane /* || !focusedPaneConversationId */; // auto-hide when no chat (the 'please select a conversation...' state) doesn't feel good
const isChromeless = useOptimaChromeless() && isMobile; // auto-hide on Chromeless too
const forceComposerHide = isChromeless || !!beamOpenStoreInFocusedPane /* || !focusedPaneConversationId */; // auto-hide when no chat (the 'please select a conversation...' state) doesn't feel good
const composerAutoHide = useComposerAutoHide(forceComposerHide, composerHasContent);
// Window actions
@@ -463,7 +458,7 @@ export function AppChat() {
// Pluggable Optima components
const barAltTitle = showAltTitleBar ? focusedChatTitle ?? 'No Chat' : null;
const barAltTitle = null;
const focusedBarContent = React.useMemo(() => beamOpenStoreInFocusedPane
? <ChatBarBeam conversationTitle={focusedChatTitle ?? 'No Chat'} beamStore={beamOpenStoreInFocusedPane} isMobile={isMobile} />
@@ -498,6 +493,7 @@ export function AppChat() {
const focusedChatPanelContent = React.useMemo(() => !focusedPaneConversationId ? null :
<ChatPane
isMobile={isMobile}
conversationId={focusedPaneConversationId}
disableItems={!focusedPaneConversationId || isFocusedChatEmpty}
hasConversations={hasConversations}
@@ -774,7 +770,7 @@ export function AppChat() {
</Box>
{/* Hover zone for auto-hide */}
{!forceComposerHide && composerAutoHide.isHidden && <Box {...composerAutoHide.detectorProps} />}
{!isChromeless && !forceComposerHide && composerAutoHide.isHidden && <Box {...composerAutoHide.detectorProps} />}
{/* Diagrams */}
{!!diagramConfig && (
+73 -127
@@ -1,10 +1,8 @@
import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import type { FileWithHandle } from 'browser-fs-access';
import { Box, Button, ButtonGroup, Card, Dropdown, Grid, IconButton, Menu, MenuButton, MenuItem, Textarea, Typography } from '@mui/joy';
import { ColorPaletteProp, SxProps, VariantProp } from '@mui/joy/styles/types';
import AddCircleOutlineIcon from '@mui/icons-material/AddCircleOutline';
import type { ColorPaletteProp, SxProps, VariantProp } from '@mui/joy/styles/types';
import { Box, Button, ButtonGroup, Card, Grid, IconButton, Textarea, Typography } from '@mui/joy';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import PsychologyIcon from '@mui/icons-material/Psychology';
import SendIcon from '@mui/icons-material/Send';
@@ -17,7 +15,8 @@ import { useChatAutoSuggestAttachmentPrompts, useChatMicTimeoutMsValue } from '.
import { useAgiAttachmentPrompts } from '~/modules/aifn/agiattachmentprompts/useAgiAttachmentPrompts';
import { useBrowseCapability } from '~/modules/browse/store-module-browsing';
import { DLLM, getLLMContextTokens, getLLMPricing, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { DLLM, getLLMContextTokens, LLM_IF_OAI_Vision } from '~/common/stores/llms/llms.types';
import { llmChatPricing_adjusted } from '~/common/stores/llms/llms.pricing';
import { AudioGenerator } from '~/common/util/audio/AudioGenerator';
import { AudioPlayer } from '~/common/util/audio/AudioPlayer';
import { ButtonAttachFilesMemo, openFileForAttaching } from '~/common/components/ButtonAttachFiles';
@@ -25,6 +24,7 @@ import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { ConversationsManager } from '~/common/chat-overlay/ConversationsManager';
import { DMessageId, DMessageMetadata, DMetaReferenceItem, messageFragmentsReduceText } from '~/common/stores/chat/chat.message';
import { PhPaintBrush } from '~/common/components/icons/phosphor/PhPaintBrush';
import { ShortcutKey, ShortcutObject, useGlobalShortcuts } from '~/common/components/shortcuts/useGlobalShortcuts';
import { addSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import { animationEnterBelow } from '~/common/util/animUtils';
@@ -34,12 +34,14 @@ import { copyToClipboard, supportsClipboardRead } from '~/common/util/clipboardU
import { createTextContentFragment, DMessageAttachmentFragment, DMessageContentFragment, duplicateDMessageFragments } from '~/common/stores/chat/chat.fragments';
import { glueForMessageTokens, marshallWrapDocFragments } from '~/common/stores/chat/chat.tokens';
import { isValidConversation, useChatStore } from '~/common/stores/chat/store-chats';
import { getModelParameterValueOrThrow } from '~/common/stores/llms/llms.parameters';
import { getModelParameterValueWithFallback } from '~/common/stores/llms/llms.parameters';
import { launchAppCall, removeQueryParam, useRouterQuery } from '~/common/app.routes';
import { lineHeightTextareaMd, themeBgAppChatComposer } from '~/common/app.theme';
import { optimaOpenPreferences } from '~/common/layout/optima/useOptima';
import { platformAwareKeystrokes } from '~/common/components/KeyStroke';
import { supportsCameraCapture } from '~/common/components/camera/useCameraCapture';
import { supportsScreenCapture } from '~/common/util/screenCaptureUtils';
import { useAttachHandler_CameraOpen, useAttachHandler_Files, useAttachHandler_PasteIntercept, useAttachHandler_ScreenCapture, useAttachHandler_UrlWebLinks } from '~/common/attachment-drafts/attachment-sources/useAttachmentSourceHandlers';
import { useChatComposerOverlayStore } from '~/common/chat-overlay/store-perchat_vanilla';
import { useComposerStartupText, useLogicSherpaStore } from '~/common/logic/store-logic-sherpa';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
@@ -52,21 +54,15 @@ import { providerCommands } from './actile/providerCommands';
import { providerStarredMessages, StarredMessageItem } from './actile/providerStarredMessage';
import { useActileManager } from './actile/useActileManager';
import type { AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import { LLMAttachmentDraftsAction, LLMAttachmentsList } from './llmattachments/LLMAttachmentsList';
import { PhPaintBrush } from '~/common/components/icons/phosphor/PhPaintBrush';
import type { AttachmentDraftId, AttachmentDraftsAction } from '~/common/attachment-drafts/attachment.types';
import { AttachmentSourcesMemo } from '~/common/attachment-drafts/attachment-sources/AttachmentSources';
import { useAttachmentDrafts } from '~/common/attachment-drafts/useAttachmentDrafts';
import { useLLMAttachmentDrafts } from './llmattachments/useLLMAttachmentDrafts';
import { useAttachmentDraftsEnrichment } from '~/common/attachment-drafts/llm-enrichment/useAttachmentDraftsEnrichment';
import { useGoogleDrivePicker } from '~/common/attachment-drafts/attachment-sources/useGoogleDrivePicker';
import type { ChatExecuteMode } from '../../execute-mode/execute-mode.types';
import { chatExecuteModeCanAttach, useChatExecuteMode } from '../../execute-mode/useChatExecuteMode';
import { ButtonAttachCameraMemo, useCameraCaptureModalDialog } from './buttons/ButtonAttachCamera';
import { ButtonAttachClipboardMemo } from './buttons/ButtonAttachClipboard';
import { ButtonAttachGoogleDriveMemo } from './buttons/ButtonAttachGoogleDrive';
import { ButtonAttachScreenCaptureMemo } from './buttons/ButtonAttachScreenCapture';
import { ButtonAttachWebMemo } from './buttons/ButtonAttachWeb';
import { hasGoogleDriveCapability, useGoogleDrivePicker } from '~/common/attachment-drafts/useGoogleDrivePicker';
import { ButtonBeamMemo } from './buttons/ButtonBeam';
import { ButtonCallMemo } from './buttons/ButtonCall';
import { ButtonGroupDrawRepeat } from './buttons/ButtonGroupDrawRepeat';
@@ -74,6 +70,7 @@ import { ButtonMicContinuationMemo } from './buttons/ButtonMicContinuation';
import { ButtonMicMemo } from './buttons/ButtonMic';
import { ButtonMultiChatMemo } from './buttons/ButtonMultiChat';
import { ButtonOptionsDraw } from './buttons/ButtonOptionsDraw';
import { ComposerAttachmentDraftsList } from './llmattachments/ComposerAttachmentDraftsList';
import { ComposerTextAreaActions } from './textarea/ComposerTextAreaActions';
import { ComposerTextAreaDrawActions } from './textarea/ComposerTextAreaDrawActions';
import { StatusBarMemo } from '../StatusBar';
@@ -81,7 +78,6 @@ import { TokenBadgeMemo } from './tokens/TokenBadge';
import { TokenProgressbarMemo } from './tokens/TokenProgressbar';
import { useComposerDragDrop } from './useComposerDragDrop';
import { useTextTokenCount } from './tokens/useTextTokenCounter';
import { useWebInputModal } from './WebInputModal';
// configuration
@@ -138,10 +134,8 @@ export function Composer(props: {
// external state
const { showPromisedOverlay } = useOverlayComponents();
const { newChat: appChatNewChatIntent } = useRouterQuery<Partial<AppChatIntent>>();
const { labsAttachScreenCapture, labsCameraDesktop, labsShowCost, labsShowShortcutBar } = useUXLabsStore(useShallow(state => ({
labsAttachScreenCapture: state.labsAttachScreenCapture,
labsCameraDesktop: state.labsCameraDesktop,
labsShowCost: state.labsShowCost,
const { labsComposerAttachmentsInline, labsShowShortcutBar } = useUXLabsStore(useShallow(state => ({
labsComposerAttachmentsInline: state.labsComposerAttachmentsInline,
labsShowShortcutBar: state.labsShowShortcutBar,
})));
const timeToShowTips = useLogicSherpaStore(state => state.usageCount >= SHOW_TIPS_AFTER_RELOADS);
@@ -176,8 +170,8 @@ export function Composer(props: {
const chatLLMSupportsImages = !!props.chatLLM?.interfaces?.includes(LLM_IF_OAI_Vision);
// don't load URLs if the user is typing a command or there's no capability
const hasComposerBrowseCapability = useBrowseCapability().inComposer;
const enableLoadURLsInComposer = hasComposerBrowseCapability && !composeText.startsWith('/');
const browseCapability = useBrowseCapability();
const enableLoadURLsInComposer = browseCapability.inComposer && !composeText.startsWith('/');
// user message for attachments
const { onConversationBeamEdit, onConversationsImportFromFiles } = props;
@@ -204,7 +198,7 @@ export function Composer(props: {
} = useAttachmentDrafts(conversationOverlayStore, enableLoadURLsInComposer, chatLLMSupportsImages, handleFilterAGIFile, showChatAttachments === 'only-images');
// attachments derived state
const llmAttachmentDraftsCollection = useLLMAttachmentDrafts(attachmentDrafts, props.chatLLM, chatLLMSupportsImages);
const { enrichment: attEnrichment, summary: attEnrichSummary } = useAttachmentDraftsEnrichment(attachmentDrafts, props.chatLLM, chatLLMSupportsImages);
// drag/drop
const { dragContainerSx, dropComponent, handleContainerDragEnter, handleContainerDragStart } = useComposerDragDrop(!props.isMobile, attachAppendDataTransfer);
@@ -229,13 +223,13 @@ export function Composer(props: {
// tokens derived state
const tokensComposerTextDebounced = useTextTokenCount(composeText, props.chatLLM, 800, 1600);
let tokensComposer = (tokensComposerTextDebounced ?? 0) + (llmAttachmentDraftsCollection.llmTokenCountApprox || 0);
let tokensComposer = (tokensComposerTextDebounced ?? 0) + (attEnrichSummary.totalTokensApprox || 0);
if (props.chatLLM && tokensComposer > 0)
tokensComposer += glueForMessageTokens(props.chatLLM);
const tokensHistory = _historyTokenCount;
const tokensResponseMax = getModelParameterValueOrThrow('llmResponseTokens', props.chatLLM?.initialParameters, props.chatLLM?.userParameters, 0) ?? 0;
const tokensResponseMax = getModelParameterValueWithFallback('llmResponseTokens', props.chatLLM?.initialParameters, props.chatLLM?.userParameters, 0) ?? 0 /* if null, assume 0*/;
const tokenLimit = getLLMContextTokens(props.chatLLM) ?? 0;
const tokenChatPricing = getLLMPricing(props.chatLLM)?.chat;
const tokenChatPricing = React.useMemo(() => llmChatPricing_adjusted(props.chatLLM), [props.chatLLM]);
// Effect: load initial text if queued up (e.g. by /link/share_targetF)
@@ -273,7 +267,7 @@ export function Composer(props: {
// Confirmation Modals
const confirmProceedIfAttachmentsNotSupported = React.useCallback(async (): Promise<boolean> => {
if (llmAttachmentDraftsCollection.canAttachAllFragments) return true;
if (attEnrichSummary.allCompatible) return true;
return await showPromisedOverlay('composer-unsupported-attachments', { rejectWithValue: false }, ({ onResolve, onUserReject }) => (
<ConfirmationModal
open
@@ -285,7 +279,7 @@ export function Composer(props: {
title='Attachment Compatibility Notice'
/>
));
}, [llmAttachmentDraftsCollection.canAttachAllFragments, showPromisedOverlay]);
}, [attEnrichSummary.allCompatible, showPromisedOverlay]);
// Primary button
@@ -594,43 +588,19 @@ export function Composer(props: {
const handleToggleMinimized = React.useCallback(() => setIsMinimized(hide => !hide), []);
// Attachment Up
const handleAttachCtrlV = React.useCallback(async (event: React.ClipboardEvent) => {
if (await attachAppendDataTransfer(event.clipboardData, 'paste', false) === 'as_files')
event.preventDefault();
}, [attachAppendDataTransfer]);
const handleAttachCameraImage = React.useCallback((file: FileWithHandle) => {
void attachAppendFile('camera', file);
}, [attachAppendFile]);
const { openCamera, cameraCaptureComponent } = useCameraCaptureModalDialog(handleAttachCameraImage);
const handleAttachScreenCapture = React.useCallback((file: File) => {
void attachAppendFile('screencapture', file);
}, [attachAppendFile]);
const handleAttachFiles = React.useCallback(async (files: FileWithHandle[], errorMessage: string | null) => {
if (errorMessage)
addSnackbar({ key: 'attach-files-open-fail', message: `Unable to open files: ${errorMessage}`, type: 'issue' });
for (let file of files)
await attachAppendFile('file-open', file)
.catch((error: any) => addSnackbar({ key: 'attach-file-open-fail', message: `Unable to attach the file "${file.name}" (${error?.message || error?.toString() || 'unknown error'})`, type: 'issue' }));
}, [attachAppendFile]);
const handleAttachWebLinks = React.useCallback(async (links: { url: string }[]) => {
links.forEach(link => void attachAppendUrl('input-link', link.url));
}, [attachAppendUrl]);
const { openWebInputDialog, webInputDialogComponent } = useWebInputModal(handleAttachWebLinks, composeText);
// Attachments Up
const handleAttachCtrlV = useAttachHandler_PasteIntercept(attachAppendDataTransfer);
const handleAttachFiles = useAttachHandler_Files(attachAppendFile);
const handleOpenCamera = useAttachHandler_CameraOpen(attachAppendFile);
const handleAttachScreenCapture = useAttachHandler_ScreenCapture(attachAppendFile);
const { openWebInputDialog, webInputDialogComponent } = useAttachHandler_UrlWebLinks(attachAppendUrl, composeText);
const { openGoogleDrivePicker, googleDrivePickerComponent } = useGoogleDrivePicker(attachAppendCloudFile, isMobile);
// Attachments Down
const handleAttachmentDraftsAction = React.useCallback((attachmentDraftIdOrAll: AttachmentDraftId | null, action: LLMAttachmentDraftsAction) => {
const handleAttachmentDraftsAction = React.useCallback((attachmentDraftIdOrAll: AttachmentDraftId | null, action: AttachmentDraftsAction) => {
switch (action) {
case 'copy-text':
const copyFragments = attachmentsTakeFragmentsByType('doc', attachmentDraftIdOrAll, false);
@@ -659,7 +629,7 @@ export function Composer(props: {
if (supportsClipboardRead())
composerShortcuts.push({ key: 'v', ctrl: true, shift: true, action: attachAppendClipboardItems, description: 'Attach Clipboard' });
// Future: keep reactive state here to support Live Screen Capture and more
// if (labsAttachScreenCapture && supportsScreenCapture)
// if (supportsScreenCapture)
// composerShortcuts.push({ key: 's', ctrl: true, shift: true, action: openScreenCaptureDialog, description: 'Attach Screen Capture' });
}
if (recognitionState.isActive) {
@@ -697,7 +667,7 @@ export function Composer(props: {
const sendButtonColor: ColorPaletteProp =
assistantAbortible ? 'warning'
: !llmAttachmentDraftsCollection.canAttachAllFragments ? 'warning'
: !attEnrichSummary.allCompatible ? 'warning'
: chatExecuteModeSendColor;
const sendButtonLabel = chatExecuteModeSendLabel;
@@ -711,7 +681,7 @@ export function Composer(props: {
: <TelegramIcon />;
const beamButtonColor: ColorPaletteProp | undefined =
!llmAttachmentDraftsCollection.canAttachAllFragments ? 'warning'
!attEnrichSummary.allCompatible ? 'warning'
: undefined;
const showTint: ColorPaletteProp | undefined = isDraw ? 'warning' : isReAct ? 'success' : undefined;
@@ -782,42 +752,24 @@ export function Composer(props: {
{/* [mobile] Mic button */}
{recognitionState.isAvailable && <ButtonMicMemo variant={micVariant} color={micColor === 'danger' ? 'danger' : showTint || micColor} errorMessage={recognitionState.errorMessage} onClick={handleToggleMic} />}
{/* Responsive Camera OCR button */}
{showChatAttachments && <ButtonAttachCameraMemo color={showTint} isMobile onOpenCamera={openCamera} />}
{/* [mobile] Attach file button (in draw with image mode) */}
{showChatAttachments === 'only-images' && <ButtonAttachFilesMemo color={showTint} isMobile onAttachFiles={handleAttachFiles} fullWidth multiple />}
{showChatAttachments === 'only-images' && <ButtonAttachFilesMemo color={showTint} isMobile onAttachFiles={handleAttachFiles} multiple />}
{/* [mobile] [+] button */}
{/* [mobile] [+] attachment sources menu */}
{showChatAttachments === true && (
<Dropdown>
<MenuButton slots={{ root: IconButton }}>
<AddCircleOutlineIcon />
</MenuButton>
<Menu>
{/* Responsive Open Files button */}
<MenuItem>
<ButtonAttachFilesMemo onAttachFiles={handleAttachFiles} fullWidth multiple />
</MenuItem>
{/* Responsive Web button */}
<MenuItem>
<ButtonAttachWebMemo disabled={!hasComposerBrowseCapability} onOpenWebInput={openWebInputDialog} />
</MenuItem>
{/* Responsive Google Drive button */}
{hasGoogleDriveCapability && <MenuItem>
<ButtonAttachGoogleDriveMemo onOpenGoogleDrivePicker={openGoogleDrivePicker} fullWidth />
</MenuItem>}
{/* Responsive Paste button */}
{supportsClipboardRead() && <MenuItem>
<ButtonAttachClipboardMemo onAttachClipboard={attachAppendClipboardItems} />
</MenuItem>}
</Menu>
</Dropdown>
<AttachmentSourcesMemo
mode='menu-compact'
canBrowse={browseCapability.mayWork}
hasScreenCapture={supportsScreenCapture}
hasCamera={supportsCameraCapture()}
              onlyImages={false /* if this were true, only the attach-files button above would be shown instead */}
onAttachClipboard={attachAppendClipboardItems}
onAttachFiles={handleAttachFiles}
onAttachScreenCapture={handleAttachScreenCapture}
onOpenCamera={handleOpenCamera}
onOpenGoogleDrivePicker={openGoogleDrivePicker}
onOpenWebInput={openWebInputDialog}
/>
)}
{/* [Mobile] MultiChat button */}
@@ -828,31 +780,27 @@ export function Composer(props: {
{/* [Desktop, Col1] Insert Multi-modal content buttons */}
{isDesktop && showChatAttachments && (
<Box sx={{ flexGrow: 0, display: 'grid', gap: (labsAttachScreenCapture && labsCameraDesktop) ? 0.5 : 1, alignSelf: 'flex-start' }}>
<Box sx={{ flexGrow: 0, display: 'grid', gap: 0.5, alignSelf: 'flex-start' }}>
{/*<FormHelperText sx={{ mx: 'auto' }}>*/}
{/* Attach*/}
{/*</FormHelperText>*/}
{/* [desktop] Attachment Sources: dropdown menu or inline buttons */}
<AttachmentSourcesMemo
mode={!labsComposerAttachmentsInline ? 'menu-rich' : 'inline-buttons'}
color={!labsComposerAttachmentsInline ? (showTint || 'neutral') : showTint}
richButtonStandOut={!isText && !isAppend}
canBrowse={browseCapability.mayWork}
hasScreenCapture={supportsScreenCapture}
hasCamera={supportsCameraCapture()}
onlyImages={showChatAttachments === 'only-images'}
onAttachClipboard={attachAppendClipboardItems}
onAttachFiles={handleAttachFiles}
onAttachScreenCapture={handleAttachScreenCapture}
onOpenCamera={handleOpenCamera}
onOpenGoogleDrivePicker={openGoogleDrivePicker}
onOpenWebInput={openWebInputDialog}
/>
{/* Responsive Open Files button */}
<ButtonAttachFilesMemo color={showTint} onAttachFiles={handleAttachFiles} fullWidth multiple />
{/* Responsive Web button */}
{showChatAttachments !== 'only-images' && <ButtonAttachWebMemo color={showTint} disabled={!hasComposerBrowseCapability} onOpenWebInput={openWebInputDialog} />}
{/* Responsive Google Drive button */}
{hasGoogleDriveCapability && showChatAttachments !== 'only-images' && <ButtonAttachGoogleDriveMemo color={showTint} onOpenGoogleDrivePicker={openGoogleDrivePicker} />}
{/* Responsive Paste button */}
{supportsClipboardRead() && showChatAttachments !== 'only-images' && <ButtonAttachClipboardMemo color={showTint} onAttachClipboard={attachAppendClipboardItems} />}
{/* Responsive Screen Capture button */}
{labsAttachScreenCapture && supportsScreenCapture && <ButtonAttachScreenCaptureMemo color={showTint} onAttachScreenCapture={handleAttachScreenCapture} />}
{/* Responsive Camera OCR button */}
{labsCameraDesktop && <ButtonAttachCameraMemo color={showTint} onOpenCamera={openCamera} />}
</Box>)}
</Box>
)}
{/* Top: Textarea & Mic & Overlays, Bottom, Attachment Drafts */}
@@ -920,7 +868,7 @@ export function Composer(props: {
)}
{!showChatInReferenceTo && !isDraw && tokenLimit > 0 && (
<TokenBadgeMemo hideBelowDollars={0.01} chatPricing={tokenChatPricing} direct={tokensComposer} history={tokensHistory} responseMax={tokensResponseMax} limit={tokenLimit} showCost={labsShowCost} enableHover={!isMobile} showExcess absoluteBottomRight />
<TokenBadgeMemo showCost hideBelowDollars={0.01} chatPricing={tokenChatPricing} direct={tokensComposer} history={tokensHistory} responseMax={tokensResponseMax} limit={tokenLimit} enableHover={!isMobile} showExcess absoluteBottomRight />
)}
</Box>
@@ -999,11 +947,12 @@ export function Composer(props: {
{/* Render any Attachments & menu items */}
{!!conversationOverlayStore && showChatAttachments && (
<LLMAttachmentsList
agiAttachmentPrompts={agiAttachmentPrompts}
<ComposerAttachmentDraftsList
attachmentDraftsStoreApi={conversationOverlayStore}
canInlineSomeFragments={llmAttachmentDraftsCollection.canInlineSomeFragments}
llmAttachmentDrafts={llmAttachmentDraftsCollection.llmAttachmentDrafts}
attachmentDrafts={attachmentDrafts}
enrichment={attEnrichment}
enrichmentSummary={attEnrichSummary}
agiAttachmentPrompts={agiAttachmentPrompts}
onAttachmentDraftsAction={handleAttachmentDraftsAction}
/>
)}
@@ -1135,9 +1084,6 @@ export function Composer(props: {
{/* Execution Mode Menu */}
{chatExecuteMenuComponent}
{/* Camera (when open) */}
{cameraCaptureComponent}
{/* Google Drive Picker (when open) */}
{googleDrivePickerComponent}
@@ -0,0 +1,76 @@
import * as React from 'react';
import { CircularProgress, ListDivider, ListItemDecorator, MenuItem } from '@mui/joy';
import AutoFixHighIcon from '@mui/icons-material/AutoFixHigh';
import type { AgiAttachmentPromptsData } from '~/modules/aifn/agiattachmentprompts/useAgiAttachmentPrompts';
import type { AttachmentDraft, AttachmentDraftId, AttachmentDraftsAction } from '~/common/attachment-drafts/attachment.types';
import type { AttachmentDraftsStoreApi } from '~/common/attachment-drafts/store-attachment-drafts_slice';
import type { AttachmentEnrichmentSummary, IAttachmentEnrichment } from '~/common/attachment-drafts/llm-enrichment/attachment.enrichment';
import { AttachmentDraftsList } from '~/common/attachment-drafts/attachment-drafts-ui/AttachmentDraftsList';
import { LLMAttachmentsPromptsButtonMemo } from './LLMAttachmentsPromptsButton';
import { ViewDocPartModal } from '../../message/fragments-content/ViewDocPartModal';
import { ViewImageRefPartModal } from '../../message/fragments-content/ViewImageRefPartModal';
/**
* Composer-specific wrapper around the generic AttachmentDraftsList.
* Provides: viewer modals, AI prompts button, "What can I do?" menu item.
*/
export function ComposerAttachmentDraftsList(props: {
attachmentDrafts: AttachmentDraft[],
attachmentDraftsStoreApi: AttachmentDraftsStoreApi,
enrichment: IAttachmentEnrichment,
enrichmentSummary: AttachmentEnrichmentSummary,
agiAttachmentPrompts: AgiAttachmentPromptsData,
onAttachmentDraftsAction: (attachmentDraftId: AttachmentDraftId | null, actionId: AttachmentDraftsAction) => void,
}) {
const { agiAttachmentPrompts, attachmentDrafts } = props;
// memo components
const startDecorator = React.useMemo(() =>
!agiAttachmentPrompts.isVisible && !agiAttachmentPrompts.hasData ? undefined
: <LLMAttachmentsPromptsButtonMemo data={agiAttachmentPrompts} />
, [agiAttachmentPrompts]);
// memo rendering functions
const renderDocViewer = React.useCallback(
(part: React.ComponentProps<typeof ViewDocPartModal>['docPart'], onClose: () => void) =>
<ViewDocPartModal docPart={part} onClose={onClose} />
, []);
const renderImageViewer = React.useCallback(
(part: React.ComponentProps<typeof ViewImageRefPartModal>['imageRefPart'], onClose: () => void) =>
<ViewImageRefPartModal imageRefPart={part} onClose={onClose} />
, []);
const renderOverallMenuExtra = React.useCallback(() => <>
<MenuItem color='primary' variant='soft' onClick={agiAttachmentPrompts.refetch} disabled={!attachmentDrafts.length || agiAttachmentPrompts.isFetching}>
<ListItemDecorator>{agiAttachmentPrompts.isFetching ? <CircularProgress size='sm' /> : <AutoFixHighIcon />}</ListItemDecorator>
What can I do?
</MenuItem>
<ListDivider />
</>, [agiAttachmentPrompts.isFetching, agiAttachmentPrompts.refetch, attachmentDrafts.length]);
return (
<AttachmentDraftsList
attachmentDraftsStoreApi={props.attachmentDraftsStoreApi}
attachmentDrafts={attachmentDrafts}
enrichment={props.enrichment}
enrichmentSummary={props.enrichmentSummary}
onAttachmentDraftsAction={props.onAttachmentDraftsAction}
startDecorator={startDecorator}
renderDocViewer={renderDocViewer}
renderImageViewer={renderImageViewer}
renderOverallMenuExtra={renderOverallMenuExtra}
/>
);
}
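Outside React, the wrapper pattern above (a generic list component receiving composer-specific renderers and decorators as props) reduces to plain callback injection. A minimal hypothetical sketch, with invented names, just to illustrate the split of responsibilities:

```typescript
// Hypothetical sketch: the generic list only knows how to iterate;
// the app-specific wrapper injects the concrete rendering callback.
type RenderItem = (name: string) => string;

function genericDraftsList(drafts: string[], renderItem: RenderItem): string[] {
  return drafts.map(renderItem);
}

function composerDraftsList(drafts: string[]): string[] {
  // the composer-specific wrapper pins the renderer to its own viewer
  return genericDraftsList(drafts, (name) => `<DocViewer:${name}>`);
}
```

This keeps the generic list reusable (e.g. for the edit-mode attachments later in this diff) while the composer retains ownership of its modals and menu extras.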
@@ -1,98 +0,0 @@
import * as React from 'react';
import type { AttachmentDraft } from '~/common/attachment-drafts/attachment.types';
import type { DLLM } from '~/common/stores/llms/llms.types';
import type { DMessageAttachmentFragment } from '~/common/stores/chat/chat.fragments';
import { estimateTokensForFragments } from '~/common/stores/chat/chat.tokens';
export interface LLMAttachmentDraftsCollection {
llmAttachmentDrafts: LLMAttachmentDraft[];
canAttachAllFragments: boolean;
canInlineSomeFragments: boolean;
llmTokenCountApprox: number | null;
hasImageFragments: boolean;
}
export interface LLMAttachmentDraft {
attachmentDraft: AttachmentDraft;
llmSupportsAllFragments: boolean;
llmSupportsTextFragments: boolean;
llmTokenCountApprox: number | null;
hasImageFragments: boolean;
}
export function useLLMAttachmentDrafts(attachmentDrafts: AttachmentDraft[], chatLLM: DLLM | null, chatLLMSupportsImages: boolean): LLMAttachmentDraftsCollection {
/* [Optimization] Use a Ref to store the previous state of llmAttachmentDrafts and chatLLM
*
* Note that this works on 2 levels:
* - 1. avoids recomputation, but more importantly,
* - 2. avoids re-rendering by keeping those llmAttachmentDrafts objects stable.
*
   * Note that the attachmentDraft objects are stable to start with, so we can safely
   * use reference equality to detect when their internal properties (or order) have changed.
*/
const prevStateRef = React.useRef<{
chatLLM: DLLM | null;
llmAttachmentDrafts: LLMAttachmentDraft[];
}>({ llmAttachmentDrafts: [], chatLLM: null });
return React.useMemo(() => {
// [Optimization]
const equalChatLLM = chatLLM === prevStateRef.current.chatLLM;
// LLM-dependent multi-modal enablement
// TODO: consider also Audio inputs, maybe PDF binary inputs
// FIXME: reference fragments could refer to non-image as well
const imageTypes: DMessageAttachmentFragment['part']['pt'][] = ['reference', 'image_ref'];
const supportedTypes: DMessageAttachmentFragment['part']['pt'][] = chatLLMSupportsImages ? [...imageTypes, 'doc'] : ['doc'];
const supportedTextTypes: DMessageAttachmentFragment['part']['pt'][] = supportedTypes.filter(pt => pt === 'doc');
// Add LLM-specific properties to each attachment draft
const llmAttachmentDrafts = attachmentDrafts.map((a, index) => {
      // [Optimization] If the LLM is unchanged and the attachmentDraft is the same object reference, reuse the previous LLMAttachmentDraft
let prevDraft: LLMAttachmentDraft | undefined = prevStateRef.current.llmAttachmentDrafts[index];
// if not found, search by id
if (!prevDraft)
prevDraft = prevStateRef.current.llmAttachmentDrafts.find(_pd => _pd.attachmentDraft.id === a.id);
if (equalChatLLM && prevDraft && prevDraft.attachmentDraft === a)
return prevDraft;
// Otherwise, create a new LLMAttachmentDraft
return {
attachmentDraft: a,
llmSupportsAllFragments: !a.outputFragments ? false : a.outputFragments.every(op => supportedTypes.includes(op.part.pt)),
llmSupportsTextFragments: !a.outputFragments ? false : a.outputFragments.some(op => supportedTextTypes.includes(op.part.pt)),
llmTokenCountApprox: chatLLM
? estimateTokensForFragments(chatLLM, 'user', a.outputFragments, true, 'useLLMAttachmentDrafts')
: null,
hasImageFragments: !a.outputFragments ? false : a.outputFragments.some(op => imageTypes.includes(op.part.pt)),
};
});
// Calculate the overall properties
const canAttachAllFragments = llmAttachmentDrafts.every(a => a.llmSupportsAllFragments);
const canInlineSomeFragments = llmAttachmentDrafts.some(a => a.llmSupportsTextFragments);
const llmTokenCountApprox = chatLLM
? llmAttachmentDrafts.reduce((acc, a) => acc + (a.llmTokenCountApprox || 0), 0)
: null;
const hasImageFragments = llmAttachmentDrafts.some(a => a.hasImageFragments);
// [Optimization] Update the ref with the new state
prevStateRef.current = { llmAttachmentDrafts, chatLLM };
return {
llmAttachmentDrafts,
canAttachAllFragments,
canInlineSomeFragments,
llmTokenCountApprox,
hasImageFragments,
};
}, [attachmentDrafts, chatLLM, chatLLMSupportsImages]); // Dependencies for the outer useMemo
}
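The reference-reuse optimization in the hook above (deleted in this diff, but a useful pattern) can be sketched in isolation. Names below are hypothetical; the point is that returning the previous object when neither the item reference nor the shared dependency changed keeps downstream `React.memo` children from re-rendering:

```typescript
// Hypothetical sketch: recompute per-item derived objects only when the
// item reference (or a shared dependency) changed; otherwise return the
// previous object so reference equality holds for memoized consumers.
interface Item { id: string; }
interface Derived<T extends Item> { item: T; label: string; }

function deriveWithReuse<T extends Item>(
  items: T[],
  prev: Derived<T>[],
  depChanged: boolean,
): Derived<T>[] {
  return items.map((item, index) => {
    // prefer the positional match, then fall back to an id lookup
    const candidate = prev[index]?.item === item
      ? prev[index]
      : prev.find(d => d.item.id === item.id);
    if (!depChanged && candidate && candidate.item === item)
      return candidate; // stable reference: memoized children skip re-render
    return { item, label: `derived:${item.id}` };
  });
}
```

The two-level lookup (index first, then id) mirrors the hook: it handles both the common unchanged-order case cheaply and the reordered case correctly.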
@@ -8,7 +8,7 @@ import SettingsIcon from '@mui/icons-material/Settings';
import { findModelVendor } from '~/modules/llms/vendors/vendors.registry';
import type { DModelsServiceId } from '~/common/stores/llms/llms.service.types';
import { DLLM, DLLMId, isLLMVisible } from '~/common/stores/llms/llms.types';
import { DLLM, DLLMId, getLLMLabel, isLLMVisible } from '~/common/stores/llms/llms.types';
import { DebouncedInputMemo } from '~/common/components/DebouncedInput';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { KeyStroke } from '~/common/components/KeyStroke';
@@ -65,7 +65,7 @@ function LLMDropdown(props: {
return true;
// filter-out models that don't contain the search string
if (lcFilterString && !llm.label.toLowerCase().includes(lcFilterString))
if (lcFilterString && !getLLMLabel(llm).toLowerCase().includes(lcFilterString))
return false;
// filter-out hidden models from the dropdown
@@ -89,7 +89,7 @@ function LLMDropdown(props: {
// add the model item
llmItems[llm.id] = {
title: llm.label,
title: getLLMLabel(llm),
...(llm.userStarred ? { symbol: '⭐' } : {}),
// icon: llm.id.startsWith('some vendor') ? <VendorIcon /> : undefined,
};
@@ -292,6 +292,17 @@ function ChatDrawer(props: {
toggleFilterHasDocFragments, toggleFilterHasImageAssets, toggleFilterHasStars, toggleFilterIsArchived, toggleShowPersonaIcons, toggleShowRelativeSize,
]);
const displayNavItems = React.useMemo(() => {
if (renderLimit === Infinity || renderLimit >= renderNavItems.length) return renderNavItems;
    // return the slice as-is when it already contains the active conversation (or there is none)
const sliced = renderNavItems.slice(0, renderLimit);
if (!props.activeConversationId || sliced.some(i => i.type === 'nav-item-chat-data' && i.conversationId === props.activeConversationId)) return sliced;
// include the active conversation if it's beyond the fold
const activeItem = renderNavItems.find((i, idx) => idx >= renderLimit && i.type === 'nav-item-chat-data' && i.conversationId === props.activeConversationId);
return activeItem ? [...sliced, activeItem] : sliced;
}, [renderNavItems, renderLimit, props.activeConversationId]);
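The `displayNavItems` memo above implements a "keep the active item visible" slice: cap the list at the render limit, but append the active conversation if it fell beyond the fold. Isolated as a pure function (hypothetical names, generic over the item type):

```typescript
// Hypothetical sketch of the slicing logic: limit the list, but make sure
// the active item is still shown even when it sits past the limit.
function sliceKeepingActive<T>(
  items: T[],
  limit: number,
  isActive: (item: T) => boolean,
): T[] {
  if (limit === Infinity || limit >= items.length) return items;
  const sliced = items.slice(0, limit);
  // nothing to do if the active item is already within the fold (or absent)
  if (sliced.some(isActive)) return sliced;
  const active = items.find((item, idx) => idx >= limit && isActive(item));
  return active !== undefined ? [...sliced, active] : sliced;
}
```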
return <>
@@ -380,7 +391,7 @@ function ChatDrawer(props: {
{/* Chat Titles List (shrink as half the rate as the Folders List) */}
<Box sx={{ flexGrow: 1, flexShrink: 1, flexBasis: '20rem', overflowY: 'auto', ...themeScalingMap[contentScaling].chatDrawerItemSx }}>
{renderNavItems.slice(0, renderLimit).map((item, idx) => item.type === 'nav-item-chat-data' ? (
{displayNavItems.map((item, idx) => item.type === 'nav-item-chat-data' ? (
<ChatDrawerItemMemo
key={'nav-chat-' + item.conversationId}
item={item}
@@ -6,7 +6,6 @@ import AddIcon from '@mui/icons-material/Add';
import ArchiveOutlinedIcon from '@mui/icons-material/ArchiveOutlined';
import CleaningServicesOutlinedIcon from '@mui/icons-material/CleaningServicesOutlined';
import CompressIcon from '@mui/icons-material/Compress';
import EngineeringIcon from '@mui/icons-material/Engineering';
import ForkRightIcon from '@mui/icons-material/ForkRight';
import KeyboardArrowDownIcon from '@mui/icons-material/KeyboardArrowDown';
import RestartAltIcon from '@mui/icons-material/RestartAlt';
@@ -14,15 +13,14 @@ import SettingsSuggestOutlinedIcon from '@mui/icons-material/SettingsSuggestOutl
import UnarchiveOutlinedIcon from '@mui/icons-material/UnarchiveOutlined';
import type { DConversationId } from '~/common/stores/chat/chat.conversation';
import { ChromelessItemButton } from '~/common/layout/optima/ChromelessItemButton';
import { CodiconSplitHorizontal } from '~/common/components/icons/CodiconSplitHorizontal';
import { CodiconSplitHorizontalRemove } from '~/common/components/icons/CodiconSplitHorizontalRemove';
import { CodiconSplitVertical } from '~/common/components/icons/CodiconSplitVertical';
import { CodiconSplitVerticalRemove } from '~/common/components/icons/CodiconSplitVerticalRemove';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { OptimaPanelGroupedList, OptimaPanelGroupGutter } from '~/common/layout/optima/panel/OptimaPanelGroupedList';
import { optimaActions } from '~/common/layout/optima/useOptima';
import { useChatStore } from '~/common/stores/chat/store-chats'; // may be replaced with a dedicated hook for the chat pane
import { useLabsDevMode } from '~/common/stores/store-ux-labs';
import { useChatShowSystemMessages } from '../../store-app-chat';
import { panesManagerActions, usePaneDuplicateOrClose } from '../panes/store-panes-manager';
@@ -40,6 +38,7 @@ function VariformPaneFrame() {
export function ChatPane(props: {
isMobile: boolean,
conversationId: DConversationId | null,
disableItems: boolean,
hasConversations: boolean,
@@ -55,7 +54,6 @@ export function ChatPane(props: {
// external state
const { canAddPane, isMultiPane } = usePaneDuplicateOrClose();
const [showSystemMessages, setShowSystemMessages] = useChatShowSystemMessages();
const labsDevMode = useLabsDevMode();
const { isArchived, setArchived } = useChatStore(useShallow((state) => {
const conversation = state.conversations.find(_c => _c.id === props.conversationId);
@@ -147,6 +145,8 @@ export function ChatPane(props: {
</ListItemButton>
</ListItem>
{props.isMobile && <ChromelessItemButton />}
</OptimaPanelGroupedList>
{/* Chat Actions group */}
@@ -213,15 +213,5 @@ export function ChatPane(props: {
</ListItemButton>
</OptimaPanelGroupedList>
{/* [DEV] Development */}
{labsDevMode && (
<OptimaPanelGroupedList title='[Developers]'>
<MenuItem onClick={optimaActions().openAIXDebugger}>
<ListItemDecorator><EngineeringIcon /></ListItemDecorator>
AIX: Show Last Request...
</MenuItem>
</OptimaPanelGroupedList>
)}
</>;
}
@@ -36,7 +36,7 @@ const optionGroupSx: SxProps = {
flexDirection: 'column',
alignItems: 'flex-start',
gap: 0,
};
} as const;
const optionSx: SxProps = {
// style
@@ -52,7 +52,19 @@ const optionSx: SxProps = {
// layout
justifyContent: 'flex-start',
};
} as const;
const optionBoldSx: SxProps = {
...optionSx,
fontWeight: 'lg',
} as const;
// '1. **text**' -> '1. text', or: **text** -> text
function _stripMarkdownBold(text: string): { text: string; isBold: boolean } {
const stripped = text.replace(/(\*{2,})(.+)\1\s*$/, '$2').trimEnd();
return { text: stripped, isBold: stripped !== text };
}
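The `_stripMarkdownBold` helper above is self-contained and easy to exercise standalone. This is a reproduction of the same regex (copied, not a new API) to illustrate its behavior on the two comment cases, `'1. **text**'` and `'**text**'`, plus the no-op path:

```typescript
// Standalone copy of _stripMarkdownBold, for illustration only.
function stripMarkdownBold(text: string): { text: string; isBold: boolean } {
  // strips a trailing **bold** wrapper (2+ asterisks), preserving any prefix
  // such as a '1. ' list marker; \1 backreferences the opening asterisk run
  const stripped = text.replace(/(\*{2,})(.+)\1\s*$/, '$2').trimEnd();
  return { text: stripped, isBold: stripped !== text };
}
```

Note the regex is unanchored at the start, which is what lets a list-numbering prefix survive while only the bold wrapper is removed.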
export function optionsExtractFromFragments_dangerModifyFragment(enabled: boolean, fragments: InterleavedFragment[]): { fragments: InterleavedFragment[], options: string[] } {
@@ -164,21 +176,25 @@ export function BlockOpOptions(props: {
options: string[],
onContinue: (continueText: null | string) => void,
}) {
const buttonSx = React.useMemo(() => ({ ...optionSx, fontSize: props.contentScaling }), [props.contentScaling]);
const normalSx = React.useMemo(() => ({ ...optionSx, fontSize: props.contentScaling }), [props.contentScaling]);
const boldSx = React.useMemo(() => ({ ...optionBoldSx, fontSize: props.contentScaling }), [props.contentScaling]);
return (
<Box sx={optionGroupSx}>
{props.options.map((option, index) => (
<Button
key={index}
color={OPTION_ACTIVE_COLOR}
variant='soft'
size={props.contentScaling === 'md' ? 'md' : 'sm'}
onClick={() => props.onContinue(option.endsWith('?') ? option.slice(0, -1) : option)}
sx={buttonSx}
>
{option}
</Button>
))}
{props.options.map((option, index) => {
const { text, isBold } = _stripMarkdownBold(option);
return (
<Button
key={index}
color={OPTION_ACTIVE_COLOR}
variant='soft'
size={props.contentScaling === 'md' ? 'md' : 'sm'}
onClick={() => props.onContinue(text.endsWith('?') ? text.slice(0, -1) : text)}
sx={isBold ? boldSx : normalSx}
>
{text}
</Button>
);
})}
</Box>
);
}
@@ -5,7 +5,6 @@ import TimeAgo from 'react-timeago';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, ButtonGroup, CircularProgress, Divider, IconButton, ListDivider, ListItem, ListItemDecorator, MenuItem, Switch, Tooltip, Typography } from '@mui/joy';
import { ClickAwayListener, Popper } from '@mui/base';
import AccountTreeOutlinedIcon from '@mui/icons-material/AccountTreeOutlined';
import AlternateEmailIcon from '@mui/icons-material/AlternateEmail';
import CheckRoundedIcon from '@mui/icons-material/CheckRounded';
import CloseRoundedIcon from '@mui/icons-material/CloseRounded';
@@ -39,20 +38,21 @@ import { CloseablePopup } from '~/common/components/CloseablePopup';
import { DMessage, DMessageId, DMessageUserFlag, DMetaReferenceItem, MESSAGE_FLAG_AIX_SKIP, MESSAGE_FLAG_NOTIFY_COMPLETE, MESSAGE_FLAG_STARRED, MESSAGE_FLAG_VND_ANT_CACHE_AUTO, MESSAGE_FLAG_VND_ANT_CACHE_USER, messageFragmentsReduceText, messageHasUserFlag } from '~/common/stores/chat/chat.message';
import { KeyStroke } from '~/common/components/KeyStroke';
import { MarkHighlightIcon } from '~/common/components/icons/MarkHighlightIcon';
import { PhTreeStructure } from '~/common/components/icons/phosphor/PhTreeStructure';
import { PhVoice } from '~/common/components/icons/phosphor/PhVoice';
import { Release } from '~/common/app.release';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { adjustContentScaling, themeScalingMap, themeZIndexChatBubble } from '~/common/app.theme';
import { avatarIconSx, makeMessageAvatarIcon, messageBackground, useMessageAvatarLabel } from '~/common/util/dMessageUtils';
import { clipboardCopyDOMSelectionOrFallback } from '~/common/util/clipboardUtils';
import { clipboardCopyDOMSelectionOrFallback, copyToClipboard } from '~/common/util/clipboardUtils';
import { createTextContentFragment, DMessageFragment, DMessageFragmentId, updateFragmentWithEditedText } from '~/common/stores/chat/chat.fragments';
import { useFragmentBuckets } from '~/common/stores/chat/hooks/useFragmentBuckets';
import { useUIPreferencesStore } from '~/common/stores/store-ui';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
import { BlockOpContinue } from './BlockOpContinue';
import { BlockOpOptions, optionsExtractFromFragments_dangerModifyFragment } from './BlockOpOptions';
import { BlockOpUpstreamResume } from './BlockOpUpstreamResume';
import { ChatMessageEditAttachments, type EditModeAttachmentsHandle } from './ChatMessageEditAttachments';
import { ContentFragments } from './fragments-content/ContentFragments';
import { DocumentAttachmentFragments } from './fragments-attachment-doc/DocumentAttachmentFragments';
import { ImageAttachmentFragments } from './fragments-attachment-image/ImageAttachmentFragments';
@@ -180,6 +180,7 @@ export function ChatMessage(props: {
const [contextMenuAnchor, setContextMenuAnchor] = React.useState<HTMLElement | null>(null);
const [opsMenuAnchor, setOpsMenuAnchor] = React.useState<HTMLElement | null>(null);
const [textContentEditState, setTextContentEditState] = React.useState<ChatMessageTextPartEditState | null>(null);
const attachmentsEditRef = React.useRef<EditModeAttachmentsHandle>(null);
// external state
const { adjContentScaling, disableMarkdown, doubleClickToEdit, uiComplexityMode } = useUIPreferencesStore(useShallow(state => ({
@@ -188,7 +189,6 @@ export function ChatMessage(props: {
doubleClickToEdit: state.doubleClickToEdit,
uiComplexityMode: state.complexityMode,
})));
const labsEnhanceCodeBlocks = useUXLabsStore(state => state.labsEnhanceCodeBlocks);
const [showDiff, setShowDiff] = useChatShowTextDiff();
@@ -280,14 +280,25 @@ export function ChatMessage(props: {
}, [handleFragmentDelete, handleFragmentReplace, messageFragments]);
const handleApplyAllEdits = React.useCallback(async (withControl: boolean) => {
const state = textContentEditState || {};
// 0. take state, including new attachment drafts BEFORE clearing state
const fragmentsEdits = textContentEditState || {};
const newFragments = await attachmentsEditRef.current?.takeAllFragments() ?? [];
// 1. clear edit state (unmounts EditModeAttachments, triggers cleanup)
setTextContentEditState(null);
for (const [fragmentId, editedText] of Object.entries(state))
// 2A. apply text fragment edits
for (const [fragmentId, editedText] of Object.entries(fragmentsEdits))
handleApplyEdit(fragmentId, editedText);
// if the user pressed Ctrl, we begin a regeneration from here
// 2B. append new attachment fragments
for (const fragment of newFragments)
onMessageFragmentAppend?.(messageId, fragment);
// 3. if the user pressed Ctrl, we begin a regeneration from here
if (withControl && onMessageAssistantFrom)
await onMessageAssistantFrom(messageId, 0);
}, [handleApplyEdit, messageId, onMessageAssistantFrom, textContentEditState]);
}, [handleApplyEdit, messageId, onMessageAssistantFrom, onMessageFragmentAppend, textContentEditState]);
const handleEditsApplyClicked = React.useCallback(() => handleApplyAllEdits(false), [handleApplyAllEdits]);
@@ -314,11 +325,17 @@ export function ChatMessage(props: {
const handleCloseOpsMenu = React.useCallback(() => setOpsMenuAnchor(null), []);
const handleOpsCopy = (e: React.MouseEvent) => {
const handleOpsMessageCopySrc = React.useCallback((e: React.MouseEvent) => {
e.preventDefault();
clipboardCopyDOMSelectionOrFallback(blocksRendererRef.current, textSubject, 'Message');
// copy full source text (ops menu) - bypasses DOM, always gets pre-collapsed content
copyToClipboard(fragmentFlattenedText, 'Message');
handleCloseOpsMenu();
closeContextMenu();
}, [fragmentFlattenedText, handleCloseOpsMenu]);
const handleBubbleCopyDOM = (e: React.MouseEvent) => {
e.preventDefault();
// copy cleaned DOM selection (bubble) - rich text for pasting into Google Docs, etc.
clipboardCopyDOMSelectionOrFallback(blocksRendererRef.current, textSubject, 'Selection');
closeBubble();
};
@@ -802,7 +819,6 @@ export function ChatMessage(props: {
optiAllowSubBlocksMemo={!!messagePendingIncomplete}
disableMarkdownText={disableMarkdown || fromUser /* User messages are edited as text. Try to have them in plain text. NOTE: This may bite. */}
showUnsafeHtmlCode={props.showUnsafeHtmlCode}
enhanceCodeBlocks={labsEnhanceCodeBlocks}
textEditsState={textContentEditState}
setEditedText={(!props.onMessageFragmentReplace || messagePendingIncomplete) ? undefined : handleEditSetText}
@@ -833,6 +849,14 @@ export function ChatMessage(props: {
/>
)}
{/* [Edit Mode] Add new attachments (right below the Document Fragments) */}
{isEditingText && !fromAssistant && !!onMessageFragmentAppend && (
<ChatMessageEditAttachments
ref={attachmentsEditRef}
isMobile={props.isMobile}
/>
)}
{/* [SYSTEM, REAL] Image Attachment Fragments - just for a realistic display below the system instruction text/docs */}
{fromSystem && imageAttachments.length >= 1 && (
<ImageAttachmentFragments
@@ -872,6 +896,13 @@ export function ChatMessage(props: {
/>
)}
{/* Char & Word count */}
{/*{!zenMode && !isEditingText && !messagePendingIncomplete && fragmentFlattenedText.length > 0 && (*/}
{/* <Typography level='body-xs' sx={{ mx: 1.5, mt: 0.5, textAlign: fromAssistant ? 'left' : 'right', opacity: 0.5 }}>*/}
{/* {fragmentFlattenedText.length.toLocaleString()} chars · {(fragmentFlattenedText.match(/\S+/g) || []).length.toLocaleString()} words*/}
{/* </Typography>*/}
{/*)}*/}
</Box>
@@ -896,7 +927,7 @@ export function ChatMessage(props: {
{/*{ENABLE_COPY_MESSAGE_OVERLAY && !fromSystem && !isEditingText && (*/}
{/* <Tooltip title={messagePendingIncomplete ? null : (fromAssistant ? 'Copy message' : 'Copy input')} variant='solid'>*/}
{/* <IconButton*/}
{/* variant='outlined' onClick={handleOpsCopy}*/}
{/* variant='outlined' onClick={handleOpsMessageCopySrc}*/}
{/* sx={{*/}
{/* position: 'absolute', ...(fromAssistant ? { right: { xs: 12, md: 28 } } : { left: { xs: 12, md: 28 } }), zIndex: 10,*/}
{/* opacity: 0, transition: 'opacity 0.16s cubic-bezier(.17,.84,.44,1)',*/}
@@ -934,7 +965,7 @@ export function ChatMessage(props: {
</MenuItem>
)}
{/* Copy */}
<MenuItem onClick={handleOpsCopy} sx={{ flex: 1 }}>
<MenuItem onClick={handleOpsMessageCopySrc} sx={{ flex: 1 }}>
<ListItemDecorator><ContentCopyIcon /></ListItemDecorator>
Copy
</MenuItem>
@@ -1015,7 +1046,7 @@ export function ChatMessage(props: {
{!!props.onTextDiagram && <ListDivider />}
{!!props.onTextDiagram && (
<MenuItem onClick={handleOpsDiagram} disabled={!couldDiagram}>
<ListItemDecorator><AccountTreeOutlinedIcon /></ListItemDecorator>
<ListItemDecorator><PhTreeStructure /></ListItemDecorator>
Auto-Diagram ...
</MenuItem>
)}
@@ -1145,7 +1176,7 @@ export function ChatMessage(props: {
{/* Intelligent functions */}
{!!props.onTextDiagram && <Tooltip disableInteractive arrow placement='top' title={couldDiagram ? 'Auto-Diagram...' : 'Too short to Auto-Diagram'}>
<IconButton color='success' onClick={couldDiagram ? handleOpsDiagram : undefined}>
<AccountTreeOutlinedIcon sx={{ color: couldDiagram ? 'primary' : 'neutral.plainDisabledColor' }} />
<PhTreeStructure sx={{ color: couldDiagram ? 'primary' : 'neutral.plainDisabledColor' }} />
</IconButton>
</Tooltip>}
{!!props.onTextImagine && <Tooltip disableInteractive arrow placement='top' title='Auto-Draw'>
@@ -1162,11 +1193,19 @@ export function ChatMessage(props: {
{/* Bubble Copy */}
<Tooltip disableInteractive arrow placement='top' title='Copy Selection'>
<IconButton onClick={handleOpsCopy}>
<IconButton onClick={handleBubbleCopyDOM}>
<ContentCopyIcon />
</IconButton>
</Tooltip>
{/* Selection char & word count */}
{!!selText && <Divider />}
{!!selText && (
<Typography level='body-xs' sx={{ px: 1, whiteSpace: 'nowrap' }}>
{selText.length.toLocaleString()}c · {(selText.match(/\S+/g) || []).length.toLocaleString()}w
</Typography>
)}
</ButtonGroup>
</ClickAwayListener>
</Popper>
@@ -1181,13 +1220,13 @@ export function ChatMessage(props: {
minWidth={220}
placement='bottom-start'
>
<MenuItem onClick={handleOpsCopy} sx={{ flex: 1, alignItems: 'center' }}>
<MenuItem onClick={(e) => { handleOpsMessageCopySrc(e); closeContextMenu(); }} sx={{ flex: 1, alignItems: 'center' }}>
<ListItemDecorator><ContentCopyIcon /></ListItemDecorator>
Copy
</MenuItem>
{!!props.onTextDiagram && <ListDivider />}
{!!props.onTextDiagram && <MenuItem onClick={handleOpsDiagram} disabled={!couldDiagram || props.isImagining}>
<ListItemDecorator><AccountTreeOutlinedIcon /></ListItemDecorator>
<ListItemDecorator><PhTreeStructure /></ListItemDecorator>
Auto-Diagram ...
</MenuItem>}
{!!props.onTextImagine && <MenuItem onClick={handleOpsImagine} disabled={!couldImagine || props.isImagining}>
@@ -0,0 +1,155 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Sheet } from '@mui/joy';
import { useBrowseCapability } from '~/modules/browse/store-module-browsing';
import type { AttachmentDraftsStoreApi } from '~/common/attachment-drafts/store-attachment-drafts_slice';
import type { DMessageAttachmentFragment } from '~/common/stores/chat/chat.fragments';
import { AttachmentDraftsList } from '~/common/attachment-drafts/attachment-drafts-ui/AttachmentDraftsList';
import { AttachmentSourcesMemo } from '~/common/attachment-drafts/attachment-sources/AttachmentSources';
import { useAttachHandler_CameraOpen, useAttachHandler_Files, useAttachHandler_ScreenCapture, useAttachHandler_UrlWebLinks } from '~/common/attachment-drafts/attachment-sources/useAttachmentSourceHandlers';
import { createAttachmentDraftsVanillaStore } from '~/common/attachment-drafts/store-attachment-drafts_vanilla';
import { supportsCameraCapture } from '~/common/components/camera/useCameraCapture';
import { supportsScreenCapture } from '~/common/util/screenCaptureUtils';
import { useAttachmentDrafts } from '~/common/attachment-drafts/useAttachmentDrafts';
import { useGoogleDrivePicker } from '~/common/attachment-drafts/attachment-sources/useGoogleDrivePicker';
import { ViewDocPartModal } from './fragments-content/ViewDocPartModal';
import { ViewImageRefPartModal } from './fragments-content/ViewImageRefPartModal';
/**
* Imperative interface exposed to the parent component
*/
export interface EditModeAttachmentsHandle {
takeAllFragments: () => Promise<DMessageAttachmentFragment[]>;
}
const _styles = {
box: {
overflow: 'hidden',
p: 0.5,
// looks: copied exactly from BoxTextArea (the text editor)
boxShadow: 'inset 1px 0px 3px -2px var(--joy-palette-warning-softColor)',
outline: '1px solid',
outlineColor: 'var(--joy-palette-warning-solidBg)',
borderRadius: 'sm',
// layout
display: 'flex',
flexWrap: 'wrap',
alignItems: 'center',
gap: 1,
// shade the buttons inside this > div > div > button
'& > div > div > button': {
// backgroundColor: 'warning.softActiveBg',
borderColor: 'warning.outlinedBorder',
borderRadius: 'sm',
boxShadow: 'sm',
},
},
} as const satisfies Record<string, SxProps>;
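The `as const satisfies Record<string, SxProps>` idiom closing the styles object above keeps the literal keys and values while still type-checking each entry. A minimal sketch of the same pattern, with a toy `Sx` type standing in for Joy UI's `SxProps` so it is self-contained:

```typescript
// Toy stand-in for SxProps, for illustration only.
type Sx = Record<string, string | number>;

// `as const` preserves the exact keys and literal values, while `satisfies`
// verifies every entry is a valid Sx without widening the object's type.
const styles = {
  box: { overflow: 'hidden', gap: 1 },
} as const satisfies Record<string, Sx>;
```

Compared to annotating `const styles: Record<string, Sx>`, this keeps `styles.box` narrowly typed, so typos in style keys are caught at the use site.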
/**
* Encapsulates all attachment wiring for ChatMessage edit mode.
* Owns a standalone attachment drafts store (one per edit session).
* Exposes an imperative handle for the parent to "take" fragments on save.
*/
export const ChatMessageEditAttachments = React.forwardRef<EditModeAttachmentsHandle, { isMobile: boolean }>(
function EditModeAttachments(props, ref) {
// state
const storeApiRef = React.useRef<AttachmentDraftsStoreApi | null>(null);
if (!storeApiRef.current) storeApiRef.current = createAttachmentDraftsVanillaStore(); // created only on mount
// external state
const {
attachmentDrafts,
attachAppendClipboardItems, attachAppendCloudFile, attachAppendFile, attachAppendUrl, // attachAppendDataTransfer
attachmentsTakeAllFragments,
} = useAttachmentDrafts(storeApiRef.current, false, false, undefined, false);
const browseCapability = useBrowseCapability();
// imperative handle for parent to take fragments on save
React.useImperativeHandle(ref, () => ({
takeAllFragments: () => attachmentsTakeAllFragments('global', 'app-chat'),
}), [attachmentsTakeAllFragments]);
// [effect] cleanup on unmount - remove all drafts (deletes their DBlob assets, except for 'taken' ones)
React.useEffect(() => {
const store = storeApiRef.current;
return () => {
store?.getState().removeAllAttachmentDrafts();
};
}, []);
// handlers - composed from shared attachment source hooks
const handleAttachFiles = useAttachHandler_Files(attachAppendFile);
const handleOpenCamera = useAttachHandler_CameraOpen(attachAppendFile);
const handleAttachScreenCapture = useAttachHandler_ScreenCapture(attachAppendFile);
const { openWebInputDialog, webInputDialogComponent } = useAttachHandler_UrlWebLinks(attachAppendUrl);
const { openGoogleDrivePicker, googleDrivePickerComponent } = useGoogleDrivePicker(attachAppendCloudFile, props.isMobile);
// viewer render props - same pattern as ComposerAttachmentDraftsList.tsx:44-52
const renderDocViewer = React.useCallback(
(part: React.ComponentProps<typeof ViewDocPartModal>['docPart'], onClose: () => void) =>
<ViewDocPartModal docPart={part} onClose={onClose} />,
[],
);
const renderImageViewer = React.useCallback(
(part: React.ComponentProps<typeof ViewImageRefPartModal>['imageRefPart'], onClose: () => void) =>
<ViewImageRefPartModal imageRefPart={part} onClose={onClose} />,
[],
);
return <>
<Sheet color='warning' variant='soft' sx={_styles.box}>
{/* [+] Attachment Sources menu */}
<AttachmentSourcesMemo
mode='menu-message'
canBrowse={browseCapability.mayWork}
hasScreenCapture={supportsScreenCapture}
hasCamera={supportsCameraCapture()}
// onlyImages={showAttachOnlyImages}
onAttachClipboard={attachAppendClipboardItems}
onAttachFiles={handleAttachFiles}
onAttachScreenCapture={handleAttachScreenCapture}
onOpenCamera={handleOpenCamera}
onOpenGoogleDrivePicker={openGoogleDrivePicker}
onOpenWebInput={openWebInputDialog}
/>
{/* Attachment Drafts list */}
{attachmentDrafts.length > 0 ? (
<AttachmentDraftsList
attachmentDraftsStoreApi={storeApiRef.current!}
attachmentDrafts={attachmentDrafts}
buttonsCanWrap
renderDocViewer={renderDocViewer}
renderImageViewer={renderImageViewer}
/>
) : null}
</Sheet>
{/* Modal portals */}
{webInputDialogComponent}
{googleDrivePickerComponent}
</>;
},
);
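The component above owns a per-edit-session drafts store, lets the parent "take" all fragments on save via the imperative handle, and deletes whatever was not taken in the unmount effect. A framework-free sketch of that lifecycle; the store API here is hypothetical, not the real vanilla store:

```typescript
// Hypothetical draft store (illustrative only): drafts are either "taken"
// (handed over to the parent on save) or discarded wholesale on cleanup.
function createDraftStore<T>() {
  let drafts: T[] = [];
  return {
    add(draft: T) { drafts.push(draft); },
    // Parent calls this on save: hands over all drafts, emptying the store.
    takeAll(): T[] { const taken = drafts; drafts = []; return taken; },
    // Unmount cleanup: anything still here was never taken, so drop it.
    removeAll() { drafts = []; },
    count() { return drafts.length; },
  };
}
```

In the real component, `takeAll` corresponds to `attachmentsTakeAllFragments` (exposed through `useImperativeHandle`) and `removeAll` to `removeAllAttachmentDrafts` in the unmount effect.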
@@ -1,7 +1,6 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box } from '@mui/joy';
import { BlocksContainer } from '~/modules/blocks/BlocksContainers';
import { RenderImageRefDBlob } from '~/modules/blocks/image/RenderImageRefDBlob';
@@ -78,17 +77,15 @@ export function BlockPartImageRef(props: {
scaledImageSx={scaledImageSx}
variant='content-part'
/>
) : (
<Box>
ContentPartImageRef: unknown reftype
</Box>
)}
) : 'BlockPartImageRef: unknown reftype'}
{/* Image viewer modal */}
{!props.disableViewer && viewingImageRefPart && (
<ViewImageRefPartModal
imageRefPart={viewingImageRefPart}
onClose={() => setViewingImageRefPart(null)}
onDeleteFragment={onFragmentDelete ? handleDeleteFragment : undefined}
onReplaceFragment={onFragmentReplace ? handleReplaceFragment : undefined}
/>
)}
@@ -27,7 +27,6 @@ export function BlockPartText_AutoBlocks(props: {
isMobile: boolean,
fitScreen: boolean,
disableMarkdownText: boolean,
enhanceCodeBlocks: boolean,
renderAsWordsDiff?: WordsDiff,
showUnsafeHtmlCode?: boolean,
@@ -75,7 +74,7 @@ export function BlockPartText_AutoBlocks(props: {
isMobile={props.isMobile}
showUnsafeHtmlCode={props.showUnsafeHtmlCode}
renderAsWordsDiff={props.renderAsWordsDiff}
codeRenderVariant={props.enhanceCodeBlocks ? 'enhanced' : 'outlined'}
codeRenderVariant='enhanced' // was: { props.enhanceCodeBlocks ? 'enhanced' : 'outlined' }
textRenderVariant={props.disableMarkdownText ? 'text' : 'markdown'}
optiAllowSubBlocksMemo={props.optiAllowSubBlocksMemo}
onContextMenu={props.onContextMenu}
@@ -59,7 +59,6 @@ export function ContentFragments(props: {
messageGeneratorLlmId?: string | null,
optiAllowSubBlocksMemo?: boolean,
disableMarkdownText: boolean,
enhanceCodeBlocks: boolean,
showUnsafeHtmlCode?: boolean,
textEditsState: ChatMessageTextPartEditState | null,
@@ -333,7 +332,6 @@ export function ContentFragments(props: {
fitScreen={props.fitScreen}
isMobile={props.isMobile}
disableMarkdownText={props.disableMarkdownText}
enhanceCodeBlocks={props.enhanceCodeBlocks}
// renderWordsDiff={wordsDiff || undefined}
showUnsafeHtmlCode={props.showUnsafeHtmlCode}
optiAllowSubBlocksMemo={!!props.optiAllowSubBlocksMemo}
@@ -23,10 +23,20 @@ const propGridSx: SxProps = {
alignItems: 'center',
columnGap: 2,
rowGap: 1,
// labels
'& > :nth-of-type(odd)': {
color: 'text.secondary',
fontSize: 'xs',
},
// values
'& > :nth-of-type(even)': {
// fontWeight: 'bold',
color: 'text.primary',
// agi-ellipsize
whiteSpace: 'nowrap',
overflow: 'hidden',
textOverflow: 'ellipsis',
},
};
const textPageSx: SxProps = {
@@ -4,17 +4,18 @@ import type { SxProps } from '@mui/joy/styles/types';
import { Box, Button } from '@mui/joy';
import FileDownloadOutlinedIcon from '@mui/icons-material/FileDownloadOutlined';
import { RenderImageRefDBlob } from '~/modules/blocks/image/RenderImageRefDBlob';
import { RenderImageURL } from '~/modules/blocks/image/RenderImageURL';
import { getImageAsset } from '~/common/stores/blob/dblobs-portability';
import type { DMessageImageRefPart } from '~/common/stores/chat/chat.fragments';
import type { DMessageContentFragment, DMessageImageRefPart } from '~/common/stores/chat/chat.fragments';
import { AppBreadcrumbs } from '~/common/components/AppBreadcrumbs';
import { GoodModal } from '~/common/components/modals/GoodModal';
import { convert_Base64WithMimeType_To_Blob } from '~/common/util/blobUtils';
import { downloadBlob } from '~/common/util/downloadUtils';
import { useIsMobile } from '~/common/components/useMatchMedia';
import { BlockPartImageRef } from './BlockPartImageRef';
import { AppBreadcrumbs } from '~/common/components/AppBreadcrumbs';
const imageViewerModalSx: SxProps = {
maxWidth: '90vw',
@@ -28,10 +29,11 @@ const imageViewerContainerSx: SxProps = {
maxHeight: '80vh',
overflow: 'auto',
// pre-compensate the Block > Render Items 1.5 margin
m: -1.5,
// pre-compensate the RenderImageRefDBlob > Sheet's 1.5 (BlocksContainer-alike) margin
mx: -1.5,
// add some margin to unclip the Sheet's shadow
'& > div': {
pt: 1.5,
mb: 0.5,
},
};
@@ -39,6 +41,8 @@ const imageViewerContainerSx: SxProps = {
export function ViewImageRefPartModal(props: {
imageRefPart: DMessageImageRefPart,
onClose: () => void,
onDeleteFragment?: () => void,
onReplaceFragment?: (newFragment: DMessageContentFragment) => void,
}) {
// state
@@ -49,7 +53,7 @@ export function ViewImageRefPartModal(props: {
const isMobile = useIsMobile();
// derived state
const { dataRef, altText } = props.imageRefPart;
const { dataRef, altText, width, height } = props.imageRefPart;
const isDBlob = dataRef.reftype === 'dblob';
// handlers
@@ -133,11 +137,27 @@ export function ViewImageRefPartModal(props: {
sx={imageViewerModalSx}
>
<Box sx={imageViewerContainerSx}>
<BlockPartImageRef
disableViewer={true /* we're in the Modal, we won't pop this up anymore */}
imageRefPart={props.imageRefPart}
contentScaling='sm'
/>
{dataRef.reftype === 'dblob' ? (
<RenderImageRefDBlob
dataRefDBlobAssetId={dataRef.dblobAssetId}
dataRefMimeType={dataRef.mimeType}
dataRefBytesSize={dataRef.bytesSize}
imageAltText={altText}
imageWidth={width}
imageHeight={height}
onDeleteFragment={props.onDeleteFragment}
onReplaceFragment={props.onReplaceFragment}
// onViewImage={} we're already viewing the image in the dialog
// scaledImageSx={} we reset scale in this dialog
variant='content-part'
/>
) : dataRef.reftype === 'url' ? (
<RenderImageURL
imageURL={dataRef.url}
expandableText={altText}
variant='content-part'
/>
) : 'ViewImageRefPartModal: unknown reftype'}
</Box>
</GoodModal>
);
@@ -18,7 +18,7 @@ import { useOverlayComponents } from '~/common/layout/overlays/useOverlayCompone
// configuration
const ENABLE_MARKDOWN_DETECTION = false;
const ENABLE_MARKDOWN_DETECTION = true;
// const REASONING_COLOR = '#ca74b8'; // '#f22a85' (folder-aligned), '#ca74b8' (emoji-aligned)
const REASONING_COLOR: ColorPaletteProp = 'success';
const ANTHROPIC_REDACTED_EXPLAINER = // https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#example-streaming-with-redacted-thinking
@@ -29,7 +29,7 @@ const _styles = {
block: {
mx: 1.5,
} as const,
},
chip: {
px: 1.5,
@@ -38,24 +38,24 @@ const _styles = {
outline: '1px solid',
outlineColor: `${REASONING_COLOR}.solidBg`, // .outlinedBorder
boxShadow: `1px 2px 4px -3px var(--joy-palette-${REASONING_COLOR}-solidBg)`,
} as const,
},
chipDisabled: {
px: 1.5,
py: 0.375,
my: '1px', // to not crop the outline on mobile, or on beam
} as const,
},
chipIcon: {
fontSize: '1rem',
mr: 0.5,
} as const,
},
chipIconPending: {
fontSize: '1rem',
mr: 0.5,
animation: `${animationSpinHalfPause} 2s ease-in-out infinite`,
} as const,
},
chipExpanded: {
mt: '1px', // need to copy the `chip` mt
@@ -63,14 +63,14 @@ const _styles = {
py: 0.375,
// borderRadius: 'sm',
// transition: 'border-radius 0.2s ease-in-out',
} as const,
},
text: {
borderRadius: '12px',
borderRadius: 'sm', // was: 12px
border: '1px solid',
borderColor: `${REASONING_COLOR}.outlinedColor`,
backgroundColor: `rgb(var(--joy-palette-${REASONING_COLOR}-lightChannel) / 15%)`, // similar to success.50
boxShadow: 'inset 1px 1px 3px -3px var(--joy-palette-neutral-solidBg)',
// boxShadow: 'inset 1px 1px 3px -3px var(--joy-palette-neutral-solidBg)',
mt: 1,
p: 1,
@@ -81,13 +81,19 @@ const _styles = {
// layout
display: 'flex',
flexDirection: 'column',
} as const,
},
textUndoWhitespace: {
// for markdown content, we want to allow it to control the whitespace and line breaks, so we undo the plain text styles that break on whitespace
overflowWrap: 'normal',
whiteSpace: 'normal',
},
buttonInline: {
outline: 'none',
// borderRadius: 'sm',
// fontSize: 'xs',
} as const,
},
} as const;
@@ -97,6 +103,8 @@ function _maybeMarkdownReasoning(trimmed: string): boolean {
// const trimmed = text.trimStart();
return trimmed.startsWith('**')
|| trimmed.startsWith('# ')
// || trimmed.startsWith('* ')
// || trimmed.startsWith('- ')
|| /^#{2,6}\s/.test(trimmed);
}
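The markdown-detection heuristic above is a pure function and can be exercised standalone. Copied as-is from the diff for a quick check of its behavior:

```typescript
// Copied from the diff above: detect likely-markdown reasoning text by its
// leading token (bold marker, H1 heading, or H2-H6 heading).
function maybeMarkdownReasoning(trimmed: string): boolean {
  return trimmed.startsWith('**')
    || trimmed.startsWith('# ')
    || /^#{2,6}\s/.test(trimmed);
}
```

Note that a `#` without a following space (e.g. `#hashtag`) does not match, and the commented-out `* ` / `- ` list markers stay disabled.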
@@ -124,8 +132,12 @@ export function BlockPartModelAux(props: {
// memo
const scaledTypographySx = useScaledTypographySx(adjustContentScaling(props.contentScaling, -1), false, false);
const textSx = React.useMemo(() => ({ ..._styles.text, ...scaledTypographySx }), [scaledTypographySx]);
const maybeMarkdown = React.useMemo(() => !ENABLE_MARKDOWN_DETECTION || neverExpanded ? false : _maybeMarkdownReasoning(props.auxText), [neverExpanded, props.auxText]);
const textSx = React.useMemo(() => ({
..._styles.text,
...scaledTypographySx,
...(maybeMarkdown ? _styles.textUndoWhitespace : {}),
}), [maybeMarkdown, scaledTypographySx]);
let typeText = props.auxType === 'reasoning' ? 'Reasoning' : 'Auxiliary';
@@ -200,7 +212,7 @@ export function BlockPartModelAux(props: {
Show {typeText}
</Chip>
{expanded && (showInline || showDelete) && !!props.auxText && (
{expanded && !props.messagePendingIncomplete && (showInline || showDelete) && !!props.auxText && (
<Box sx={{ display: 'flex', gap: 1 }}>
{/* Make inline */}
@@ -208,10 +220,10 @@ export function BlockPartModelAux(props: {
color={REASONING_COLOR}
variant='soft'
size='sm'
disabled={!onFragmentReplace || props.messagePendingIncomplete}
disabled={!onFragmentReplace /* || props.messagePendingIncomplete */}
onClick={!onFragmentReplace ? undefined : handleInline}
endDecorator={<TextFieldsIcon />}
sx={(!onFragmentReplace || props.messagePendingIncomplete) ? _styles.chipDisabled : _styles.chip}
sx={(!onFragmentReplace /* || props.messagePendingIncomplete */) ? _styles.chipDisabled : _styles.chip}
>
Make Regular Text
</Chip>}
@@ -221,10 +233,10 @@ export function BlockPartModelAux(props: {
color={REASONING_COLOR}
variant='soft'
size='sm'
disabled={!onFragmentDelete || props.messagePendingIncomplete}
disabled={!onFragmentDelete /* || props.messagePendingIncomplete */}
onClick={!onFragmentDelete ? undefined : handleDelete}
endDecorator={<DeleteOutlineIcon />}
sx={(!onFragmentDelete || props.messagePendingIncomplete) ? _styles.chipDisabled : _styles.chip}
sx={(!onFragmentDelete /* || props.messagePendingIncomplete */) ? _styles.chipDisabled : _styles.chip}
>
Delete
</Chip>}
@@ -1,7 +1,7 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Chip } from '@mui/joy';
import { Box, Chip, ColorPaletteProp } from '@mui/joy';
import BrushRoundedIcon from '@mui/icons-material/BrushRounded';
import CodeIcon from '@mui/icons-material/Code';
import HourglassEmptyIcon from '@mui/icons-material/HourglassEmpty';
@@ -47,7 +47,7 @@ const _styles = {
opChip: {
maxWidth: '100%', // fundamental for the ellipsize to work
// width: '100%', // would have way less 'jumpy-ness'
// minWidth: 200, // would work on mobile, but no clear advantage
minWidth: 200, // steadier width, especially on mobile
// fontWeight: 500,
minHeight: '2rem',
// replaced by Box with px: 2
@@ -64,15 +64,16 @@ const _styles = {
} as const satisfies Record<string, SxProps>;
const modelOperationConfig = {
const modelOperationConfig: Record<DVoidPlaceholderModelOp['mot'], { Icon: React.ElementType, color: ColorPaletteProp }> = {
'search-web': { Icon: SearchRoundedIcon, color: 'neutral' },
'gen-image': { Icon: BrushRoundedIcon, color: 'success' },
'code-exec': { Icon: CodeIcon, color: 'primary' },
'flow-cont': { Icon: SearchRoundedIcon, color: 'warning' },
} as const;
function ModelOperationChip(props: {
mot: 'search-web' | 'gen-image' | 'code-exec',
mot: DVoidPlaceholderModelOp['mot'],
cts: number,
text: string,
contentScaling: ContentScaling,
@@ -14,6 +14,7 @@ const INLINE_COLOR = 'primary';
const bubbleComposerSx: SxProps = {
// contained
minWidth: 0,
width: '100%',
zIndex: 2, // stays on top of the 'tokens' bubble in the composer
@@ -1,7 +1,9 @@
import * as React from 'react';
import type { DMessageId } from '~/common/stores/chat/chat.message';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import { createTextContentFragment, DMessageContentFragment, DMessageFragment, DMessageFragmentId, isTextContentFragment } from '~/common/stores/chat/chat.fragments';
import { wrapWithMarkdownSyntax } from '~/modules/blocks/markdown/markdown.wrapper';
import { BUBBLE_MIN_TEXT_LENGTH } from './ChatMessage';
@@ -33,7 +35,7 @@ const APPLY_HTML_STRIKE = (text: string) => `<del>${text}</del>`;
const APPLY_MD_STRONG = (text: string) => wrapWithMarkdownSyntax(text, '**');
const APPLY_CUT = (_text: string) => ''; // Cut removes the text entirely
type HighlightTool = 'highlight' | 'strike' | 'strong' | 'cut';
export type HighlightTool = 'highlight' | 'strike' | 'strong' | 'cut';
// -- Matcher algorithms --
@@ -171,6 +173,10 @@ export function useSelHighlighterMemo(
// Tool application function
acc = (tool: HighlightTool) => {
// Copy to clipboard before cutting
if (tool === 'cut')
copyToClipboard(selText, 'Cut text');
// Apply the tool to the inner text
const selProcessed =
tool === 'highlight' ? APPLY_HTML_HIGHLIGHT(selText)
@@ -1,4 +1,4 @@
import { AixChatGenerateContent_DMessageGuts, aixChatGenerateContent_DMessage_FromConversation } from '~/modules/aix/client/aix.client';
import { aixChatGenerateContent_DMessage_FromConversation, AixChatGenerateContent_DMessageGuts } from '~/modules/aix/client/aix.client';
import { autoChatFollowUps } from '~/modules/aifn/auto-chat-follow-ups/autoChatFollowUps';
import { autoConversationTitle } from '~/modules/aifn/autotitle/autoTitle';
@@ -7,10 +7,10 @@ import type { DLLMId } from '~/common/stores/llms/llms.types';
import { AudioGenerator } from '~/common/util/audio/AudioGenerator';
import { ConversationsManager } from '~/common/chat-overlay/ConversationsManager';
import { DMessage, MESSAGE_FLAG_NOTIFY_COMPLETE, messageWasInterruptedAtStart } from '~/common/stores/chat/chat.message';
import { getUXLabsHighPerformance } from '~/common/stores/store-ux-labs';
import { getLabsHighPerformance } from '~/common/stores/store-ux-labs';
import { PersonaChatMessageSpeak } from './persona/PersonaChatMessageSpeak';
import { getChatAutoAI, getIsNotificationEnabledForModel } from '../store-app-chat';
import { getChatAutoAI, getChatThinkingPolicy, getIsNotificationEnabledForModel } from '../store-app-chat';
import { getInstantAppChatPanesCount } from '../components/panes/store-panes-manager';
@@ -52,10 +52,10 @@ export async function runPersonaOnConversationHead(
},
);
const parallelViewCount = getUXLabsHighPerformance() ? 0 : getInstantAppChatPanesCount();
const parallelViewCount = getLabsHighPerformance() ? 0 : getInstantAppChatPanesCount();
// ai follow-up operations (fire/forget)
const { autoSpeak, autoSuggestDiagrams, autoSuggestHTMLUI, autoSuggestQuestions, autoTitleChat, chatKeepLastThinkingOnly } = getChatAutoAI();
const { autoSpeak, autoSuggestDiagrams, autoSuggestHTMLUI, autoSuggestQuestions, autoTitleChat } = getChatAutoAI();
// AutoSpeak
const autoSpeaker: PersonaProcessorInterface | null = autoSpeak !== 'off' ? new PersonaChatMessageSpeak(autoSpeak) : null;
@@ -129,8 +129,11 @@ export async function runPersonaOnConversationHead(
if (!hasBeenAborted && (autoSuggestDiagrams || autoSuggestHTMLUI || autoSuggestQuestions))
void autoChatFollowUps(conversationId, assistantMessageId, autoSuggestDiagrams, autoSuggestHTMLUI, autoSuggestQuestions);
if (chatKeepLastThinkingOnly)
cHandler.historyKeepLastThinkingOnly();
const chatThinkingPolicy = getChatThinkingPolicy();
if (chatThinkingPolicy === 'last-only')
cHandler.historyStripThinking(1);
else if (chatThinkingPolicy === 'discard-all')
cHandler.historyStripThinking(0);
// return true if this succeeded
return messageStatus.outcome === 'success';
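The `chatThinkingPolicy` dispatch above replaces the old `chatKeepLastThinkingOnly` boolean with a three-way policy. The mapping onto `historyStripThinking`'s keep-count can be sketched as follows; the helper name is illustrative, not from the codebase:

```typescript
type ChatThinkingPolicy = 'last-only' | 'all' | 'discard-all';

// Illustrative helper: how many trailing thinking blocks to keep per policy;
// undefined means "keep everything" (no strip call is made at all).
function thinkingKeepCount(policy: ChatThinkingPolicy): number | undefined {
  switch (policy) {
    case 'last-only': return 1;   // historyStripThinking(1)
    case 'discard-all': return 0; // historyStripThinking(0)
    case 'all': return undefined; // no stripping
  }
}
```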
@@ -8,6 +8,8 @@ import { Is } from '~/common/util/pwaUtils';
export type ChatAutoSpeakType = 'off' | 'firstLine' | 'all';
export type ChatThinkingPolicy = 'last-only' | 'all' | 'discard-all';
export type TokenCountingMethod = 'accurate' | 'approximate';
@@ -38,8 +40,8 @@ interface AppChatStore {
autoVndAntBreakpoints: boolean;
setAutoVndAntBreakpoints: (autoVndAntBreakpoints: boolean) => void;
chatKeepLastThinkingOnly: boolean,
setChatKeepLastThinkingOnly: (chatKeepLastThinkingOnly: boolean) => void;
chatThinkingPolicy: ChatThinkingPolicy,
setChatThinkingPolicy: (chatThinkingPolicy: ChatThinkingPolicy) => void;
tokenCountingMethod: TokenCountingMethod;
setTokenCountingMethod: (tokenCountingMethod: TokenCountingMethod) => void;
@@ -48,6 +50,9 @@ interface AppChatStore {
clearFilters: () => void;
filterHasBeamOpen: boolean;
toggleFilterHasBeamOpen: () => void;
filterHasDocFragments: boolean;
toggleFilterHasDocFragments: () => void;
@@ -110,15 +115,18 @@ const useAppChatStore = create<AppChatStore>()(persist(
autoVndAntBreakpoints: true, // 2024-08-24: on as it saves user's money
setAutoVndAntBreakpoints: (autoVndAntBreakpoints: boolean) => _set({ autoVndAntBreakpoints }),
chatKeepLastThinkingOnly: true,
setChatKeepLastThinkingOnly: (chatKeepLastThinkingOnly: boolean) => _set({ chatKeepLastThinkingOnly }),
chatThinkingPolicy: 'last-only',
setChatThinkingPolicy: (chatThinkingPolicy: ChatThinkingPolicy) => _set({ chatThinkingPolicy }),
tokenCountingMethod: Is.Desktop ? 'accurate' : 'approximate',
setTokenCountingMethod: (tokenCountingMethod: TokenCountingMethod) => _set({ tokenCountingMethod }),
// Chat UI
clearFilters: () => _set({ filterIsArchived: false, filterHasDocFragments: false, filterHasImageAssets: false, filterHasStars: false }),
clearFilters: () => _set({ filterIsArchived: false, filterHasBeamOpen: false, filterHasDocFragments: false, filterHasImageAssets: false, filterHasStars: false }),
filterHasBeamOpen: false,
toggleFilterHasBeamOpen: () => _set(({ filterHasBeamOpen }) => ({ filterHasBeamOpen: !filterHasBeamOpen })),
filterHasDocFragments: false,
toggleFilterHasDocFragments: () => _set(({ filterHasDocFragments }) => ({ filterHasDocFragments: !filterHasDocFragments })),
@@ -189,7 +197,7 @@ export const useChatAutoAI = () => useAppChatStore(useShallow(state => ({
autoSuggestQuestions: state.autoSuggestQuestions,
autoTitleChat: state.autoTitleChat,
autoVndAntBreakpoints: state.autoVndAntBreakpoints,
chatKeepLastThinkingOnly: state.chatKeepLastThinkingOnly,
chatThinkingPolicy: state.chatThinkingPolicy,
tokenCountingMethod: state.tokenCountingMethod,
setAutoSpeak: state.setAutoSpeak,
setAutoSuggestAttachmentPrompts: state.setAutoSuggestAttachmentPrompts,
@@ -198,7 +206,7 @@ export const useChatAutoAI = () => useAppChatStore(useShallow(state => ({
setAutoSuggestQuestions: state.setAutoSuggestQuestions,
setAutoTitleChat: state.setAutoTitleChat,
setAutoVndAntBreakpoints: state.setAutoVndAntBreakpoints,
setChatKeepLastThinkingOnly: state.setChatKeepLastThinkingOnly,
setChatThinkingPolicy: state.setChatThinkingPolicy,
setTokenCountingMethod: state.setTokenCountingMethod,
})));
@@ -210,7 +218,6 @@ export const getChatAutoAI = (): {
autoSuggestQuestions: boolean,
autoTitleChat: boolean,
autoVndAntBreakpoints: boolean,
chatKeepLastThinkingOnly: boolean,
} => useAppChatStore.getState();
export const useChatAutoSuggestHTMLUI = (): boolean =>
@@ -219,6 +226,9 @@ export const useChatAutoSuggestHTMLUI = (): boolean =>
export const useChatAutoSuggestAttachmentPrompts = (): boolean =>
useAppChatStore(state => state.autoSuggestAttachmentPrompts);
export const getChatThinkingPolicy = (): ChatThinkingPolicy =>
useAppChatStore.getState().chatThinkingPolicy;
export const getChatTokenCountingMethod = (): TokenCountingMethod =>
useAppChatStore.getState().tokenCountingMethod;
@@ -230,6 +240,7 @@ export const useChatMicTimeoutMs = (): [number, (micTimeoutMs: number) => void]
export function useChatDrawerFilters() {
return useAppChatStore(useShallow(state => ({
filterHasBeamOpen: state.filterHasBeamOpen,
filterHasDocFragments: state.filterHasDocFragments,
filterHasImageAssets: state.filterHasImageAssets,
filterHasStars: state.filterHasStars,
@@ -237,6 +248,7 @@ export function useChatDrawerFilters() {
showPersonaIcons: state.showPersonaIcons2,
showRelativeSize: state.showRelativeSize,
clearFilters: state.clearFilters,
toggleFilterHasBeamOpen: state.toggleFilterHasBeamOpen,
toggleFilterHasDocFragments: state.toggleFilterHasDocFragments,
toggleFilterHasImageAssets: state.toggleFilterHasImageAssets,
toggleFilterHasStars: state.toggleFilterHasStars,
@@ -19,7 +19,6 @@ import { useIsMobile } from '~/common/components/useMatchMedia';
import { BigAgiProNewsCallout, bigAgiProUrl } from './bigAgiPro.data';
import { DevNewsItem, newsFrontendTimestamp, NewsItems } from './news.data';
import { beamNewsCallout } from './beam.data';
// number of news items to show by default, before the expander
@@ -266,12 +265,12 @@ export function AppNews() {
{/* </Box>*/}
{/*)}*/}
{/* Inject the Beam item here*/}
{idx === 2 && (
<Box sx={{ mb: 3 }}>
{beamNewsCallout}
</Box>
)}
{/*/!* Inject the Beam item here*!/*/}
{/*{idx === 2 && (*/}
{/* <Box sx={{ mb: 3 }}>*/}
{/* {beamNewsCallout}*/}
{/* </Box>*/}
{/*)}*/}
{/* News Item */}
<NewsCard key={'news-' + idx} newsItem={ni} idx={idx} addPadding={addPadding} />
@@ -283,7 +282,7 @@ export function AppNews() {
</Box>
)}
{idx === 1 && <Divider sx={{ my: 6, mx: 6 }}/>}
{/*{idx === 1 && <Divider sx={{ my: 6, mx: 6 }}/>}*/}
</React.Fragment>;
})}
@@ -1,42 +0,0 @@
import * as React from 'react';
import { Button, Card, CardContent, Grid, Typography } from '@mui/joy';
import LaunchIcon from '@mui/icons-material/Launch';
import { Link } from '~/common/components/Link';
// export const beamReleaseDate = '2024-04-01T22:00:00Z';
export const beamBlogUrl = 'https://big-agi.com/blog/beam-multi-model-ai-reasoning/';
export const beamNewsCallout =
<Card variant='solid' invertedColors>
<CardContent sx={{ gap: 2 }}>
<Typography level='title-lg'>
Beam - launched in 1.15
</Typography>
<Typography level='body-sm'>
Beam is a world-first, multi-model AI chat modality that accelerates the discovery of superior solutions by leveraging the collective strengths of diverse LLMs.
{/*Beam is a world-first, multi-model AI chat modality. By combining the strengths of diverse LLMs, Beam allows you to find better answers, faster.*/}
</Typography>
<Grid container spacing={1}>
<Grid xs={12} sm={7}>
<Button
fullWidth variant='soft' color='primary' endDecorator={<LaunchIcon />}
component={Link} href={beamBlogUrl} noLinkStyle target='_blank'
>
Blog
</Button>
</Grid>
<Grid xs={12} sm={5} sx={{ display: 'flex', flexAlign: 'center', justifyContent: 'center' }}>
{/*<Button*/}
{/* fullWidth variant='outlined' color='primary' startDecorator={<ThumbUpRoundedIcon />}*/}
{/* // endDecorator={<LaunchIcon />}*/}
{/* component={Link} href={beamHNUrl} noLinkStyle target='_blank'*/}
{/*>*/}
{/* on Hackernews 🙏*/}
{/*</Button>*/}
</Grid>
</Grid>
</CardContent>
</Card>;
@@ -18,8 +18,6 @@ import { Release } from '~/common/app.release';
import { clientUtmSource } from '~/common/util/pwaUtils';
import { platformAwareKeystrokes } from '~/common/components/KeyStroke';
import { beamBlogUrl } from './beam.data';
// Cover Images
// A capybara created from the intersection of two perfect spheres, creating a unique geometric form. Made of frosted glass with black sunglasses. Sitting on a platform where two squares overlap - their intersection glows softly. The overlapping area contains the word "OPEN" in clean sans-serif. White background with geometric shadows.
@@ -37,6 +35,9 @@ import coverV113 from '../../../public/images/covers/release-cover-v1.13.0.png';
import coverV112 from '../../../public/images/covers/release-cover-v1.12.0.png';
const beamBlogUrl = 'https://big-agi.com/blog/beam-multi-model-ai-reasoning/';
interface NewsItem {
versionCode: string;
versionName?: string;
@@ -71,6 +72,19 @@ export const DevNewsItem: NewsItem = {
// news and feature surfaces
export const NewsItems: NewsItem[] = [
{
versionCode: '2.0.4',
versionName: 'Hyper Params',
versionDate: new Date('2026-03-25T12:00:00Z'),
items: [
{ text: <><B>Opus 4.6</B> adaptive thinking 1M tokens, <B>Sonnet 4.6</B>, <B>GPT-5.4</B> family, <B>Gemini 3.1 Pro</B>, <B>Nano Banana 2</B>, <B>Grok 4.20</B>, <B>Z.ai</B> models</> },
{ text: <>Improved parameter accuracy for reasoning effort, verbosity, and temperature</> },
{ text: <><B issue={965}>AWS Bedrock</B>: native Anthropic, Amazon Nova, and OpenAI-compatible</> },
{ text: <>Anthropic: <B>Fast mode</B>, <B>continuation</B>, search depth US-inference</> },
{ text: <><B issue={945}>Attachments on any message</B>, lossless images, focus mode</> },
{ text: <>Rich text copy, reasoning trace controls, and more fixes</> },
],
},
{
versionCode: '2.0.3',
versionName: 'Red Carpet',
@@ -174,7 +188,7 @@ export const NewsItems: NewsItem[] = [
{ text: <>Support for new Mistral-Large models</>, icon: MistralIcon },
{ text: <>Support for Google Gemini 1.5 models and various improvements</>, icon: GoogleIcon as any },
{ text: <>Deeper LocalAI integration including support for <B issue={411}>model galleries</B></>, icon: LocalAIIcon },
{ text: <>Major <B href='https://twitter.com/enricoros/status/1756553038293303434'>performance optimizations</B>: runs faster, saves power, saves memory</> },
{ text: <>Major <B href='https://x.com/enricoros/status/1756553038293303434'>performance optimizations</B>: runs faster, saves power, saves memory</> },
{ text: <>Improvements: auto-size charts, search and folder experience</> },
{ text: <>Perfect chat scaling, with rapid keyboard shortcuts</> },
{ text: <>Also: diagrams auto-resize, open code with StackBlitz and JSFiddle, quick model visibility toggle, open links externally, docs on the web</> },
@@ -12,6 +12,7 @@ import type { ContentScaling } from '~/common/app.theme';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { agiUuid } from '~/common/util/idUtils';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import { getLLMLabel } from '~/common/stores/llms/llms.types';
import { useFormEditTextArray } from '~/common/components/forms/useFormEditTextArray';
import { useLLMSelect, useLLMSelectLocalState } from '~/common/components/forms/useLLMSelect';
import { useToggleableBoolean } from '~/common/util/hooks/useToggleableBoolean';
@@ -255,7 +256,7 @@ export function Creator(props: { display: boolean }) {
Embodying Persona ...
</Typography>
<Typography level='title-sm' sx={{ mt: 1 }}>
Using: {personaLlm?.label}
Using: {personaLlm ? getLLMLabel(personaLlm) : 'Loading model...'}
</Typography>
</Box>
<Box>
@@ -3,21 +3,19 @@ import * as React from 'react';
import { FormControl, ListDivider, Switch } from '@mui/joy';
import CodeIcon from '@mui/icons-material/Code';
import EditRoundedIcon from '@mui/icons-material/EditRounded';
import EngineeringIcon from '@mui/icons-material/Engineering';
import WarningRoundedIcon from '@mui/icons-material/WarningRounded';
import type { DModelDomainId } from '~/common/stores/llms/model.domains.types';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { FormSelectControl, FormSelectOption } from '~/common/components/forms/FormSelectControl';
import { useLLMSelect } from '~/common/components/forms/useLLMSelect';
import { useLabsDevMode } from '~/common/stores/store-ux-labs';
import { useModelDomain } from '~/common/stores/llms/hooks/useModelDomain';
import type { TokenCountingMethod } from '../chat/store-app-chat';
import type { ChatThinkingPolicy, TokenCountingMethod } from '../chat/store-app-chat';
import { useChatAutoAI } from '../chat/store-app-chat';
const _keepThinkingBlocksOptions: FormSelectOption<'all' | 'last-only'>[] = [
const _keepThinkingBlocksOptions: FormSelectOption<ChatThinkingPolicy>[] = [
{
value: 'last-only',
label: 'Most Recent',
@@ -28,6 +26,11 @@ const _keepThinkingBlocksOptions: FormSelectOption<'all' | 'last-only'>[] = [
label: 'Preserve All',
description: 'Keep all traces',
},
{
value: 'discard-all',
label: 'Discard All',
description: 'May reduce quality',
},
] as const;
const _tokenCountingMethodOptions: FormSelectOption<TokenCountingMethod>[] = [
@@ -76,12 +79,10 @@ export function AppChatSettingsAI() {
autoSuggestHTMLUI, setAutoSuggestHTMLUI,
// autoSuggestQuestions, setAutoSuggestQuestions,
autoTitleChat, setAutoTitleChat,
chatKeepLastThinkingOnly, setChatKeepLastThinkingOnly,
chatThinkingPolicy, setChatThinkingPolicy,
tokenCountingMethod, setTokenCountingMethod,
} = useChatAutoAI();
const labsDevMode = useLabsDevMode();
const showModelIcons = false; // useUIComplexityMode() === 'extra';
// callbacks
@@ -136,15 +137,6 @@ export function AppChatSettingsAI() {
tooltip='Vision model used to generate text descriptions of images when the Caption (Text) attachment option is selected.'
/>
{labsDevMode && (
<FormControlDomainModel
domainId='primaryChat'
title={<><EngineeringIcon color='warning' sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Last used model</>}
description='Chat fallback model'
tooltip='The last used chat model, used as default for new conversations. This is a development setting used to test out auto-detection of the most fitting initial chat model.'
/>
)}
<FormSelectControl
title='Token Counting'
tooltip='Controls how tokens are counted for context limits and pricing estimates.'
@@ -155,10 +147,10 @@ export function AppChatSettingsAI() {
<FormSelectControl
title='Reasoning traces'
tooltip='Controls how AI thinking/reasoning blocks are kept in your chat history. Keeping only in the last message (default) reduces clutter.'
tooltip='Controls how AI thinking/reasoning blocks are kept in your chat history. "Most Recent" keeps only the last message traces (default). "Discard All" removes all traces after each response, which may reduce multi-turn quality with some providers.'
options={_keepThinkingBlocksOptions}
value={chatKeepLastThinkingOnly ? 'last-only' : 'all'}
onChange={(value) => setChatKeepLastThinkingOnly(value === 'last-only')}
value={chatThinkingPolicy}
onChange={setChatThinkingPolicy}
/>
<ListDivider inset='gutter'>Automatic AI Functions</ListDivider>
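The diff above widens the thinking-blocks setting from a boolean (`chatKeepLastThinkingOnly`) to a three-value `ChatThinkingPolicy`. A minimal sketch of that migration, where the helper name is a hypothetical illustration rather than code from the repo:

```typescript
// The three-value policy introduced above (union inferred from the option values).
type ChatThinkingPolicy = 'all' | 'last-only' | 'discard-all';

// Hypothetical helper: maps the legacy boolean `chatKeepLastThinkingOnly`
// to the new policy, preserving the old semantics on upgrade.
function migrateThinkingSetting(chatKeepLastThinkingOnly: boolean): ChatThinkingPolicy {
  return chatKeepLastThinkingOnly ? 'last-only' : 'all';
}
```

Keeping the legacy boolean's two values mapped 1:1 means only an explicit user choice ever produces `'discard-all'`.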
+126 -45
@@ -1,65 +1,146 @@
import * as React from 'react';
import { ScaledTextBlockRenderer } from '~/modules/blocks/ScaledTextBlockRenderer';
import { Box, Chip, Divider, Typography } from '@mui/joy';
import { GoodModal } from '~/common/components/modals/GoodModal';
import { platformAwareKeystrokes } from '~/common/components/KeyStroke';
import type { ShortcutDefinition } from '~/common/components/shortcuts/useGlobalShortcuts';
import { shortcutsCatalog } from '~/common/components/shortcuts/shortcutsCatalog';
import { useGlobalShortcutsStore } from '~/common/components/shortcuts/store-global-shortcuts';
import { useIsMobile } from '~/common/components/useMatchMedia';
import { useUIContentScaling } from '~/common/stores/store-ui';
import { Box } from '@mui/joy';
import { Is } from '~/common/util/pwaUtils';
const shortcutsMd = platformAwareKeystrokes(`
// Styles
| Shortcut | Description |
|------------------|-----------------------------------------|
| **Edit** | |
| Shift + Enter | Newline |
| Alt + Enter | Append (no response) |
| Ctrl + Enter | Beam (and start all Beams) |
| Ctrl + Shift + Z | **Regenerate** last message |
| Ctrl + Shift + B | **Beam** last message |
| Ctrl + Shift + F | Attach file |
| Ctrl + Shift + V | Attach clipboard (better than Ctrl + V) |
| Ctrl + M | Microphone (voice typing) |
| Ctrl + L | Change Model |
| Ctrl + P | Change Persona |
| **Chats** | |
| Ctrl + O | Open Chat ... |
| Ctrl + S | Save Chat ... |
| Ctrl + Shift + N | **New** chat |
| Ctrl + Shift + X | **Reset** chat |
| Ctrl + Shift + D | **Delete** chat |
| Ctrl + Up | Previous message/Beam (shift for top) |
| Ctrl + Down | Next message/Beam (shift to bottom) |
| Ctrl + [ | **Previous** chat (in history) |
| Ctrl + ] | **Next** chat (in history) |
| **Settings** | |
| Ctrl + , | Preferences |
| Ctrl + Shift + M | 🧠 Models |
| Ctrl + Shift + O | 💬 Options (current Chat Model) |
| Ctrl + Shift + A | Toggle AI Request Inspector |
| Ctrl + Shift + + | Increase Text Size |
| Ctrl + Shift + - | Decrease Text Size |
| Ctrl + Shift + / | Shortcuts |
const _styles = {
grid: {
display: 'grid',
gridTemplateColumns: { xs: '1fr', md: '1fr 1fr' },
gap: 0.75,
columnGap: { md: 3 },
alignItems: 'center',
},
categoryLabel: {
gridColumn: { md: '1 / -1' },
mt: 1.5,
mb: 0.5,
'&:first-of-type': { mt: 0 },
},
categoryDivider: {
gridColumn: { md: '1 / -1' },
mt: 1,
},
row: {
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
gap: 1,
},
keys: {
display: 'flex',
gap: 0.5,
flexShrink: 0,
},
} as const;
`).trim();
function _platformModifier(mod: string): string {
if (!Is.OS.MacOS) return mod;
switch (mod) {
case 'Ctrl':
return '⌃';
case 'Shift':
return '⇧';
case 'Alt':
return '⌥';
default:
return mod;
}
}
function _displayKey(key: string): string {
switch (key) {
case 'ArrowUp':
return '↑';
case 'ArrowDown':
return '↓';
case 'ArrowLeft':
return '←';
case 'ArrowRight':
return '→';
case 'Backspace':
return '⌫';
default:
return key.length === 1 ? key.toUpperCase() : key;
}
}
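The two display helpers above can be sketched as one standalone function; the combined name and the explicit `isMac` flag here are illustrative, not from the diff:

```typescript
// Render a shortcut's parts for display; macOS gets symbol modifiers,
// special keys get arrows/glyphs, single letters are uppercased.
// Simplified stand-alone combination of _platformModifier/_displayKey above.
function displayCombo(isMac: boolean, mods: string[], key: string): string[] {
  const macSymbols: Record<string, string> = { Ctrl: '⌃', Shift: '⇧', Alt: '⌥' };
  const specialKeys: Record<string, string> = {
    ArrowUp: '↑', ArrowDown: '↓', ArrowLeft: '←', ArrowRight: '→', Backspace: '⌫',
  };
  const parts = mods.map(m => (isMac && macSymbols[m]) || m);
  parts.push(specialKeys[key] ?? (key.length === 1 ? key.toUpperCase() : key));
  return parts;
}
```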
/**
* Build a set of fingerprints from currently registered shortcuts for active detection.
* Fingerprint: `key_lowercase:ctrl:shift` - matches the global handler resolution.
*/
function _buildActiveFingerprints(): Set<string> {
const allShortcuts = useGlobalShortcutsStore.getState().getAllShortcuts();
const fingerprints = new Set<string>();
for (const s of allShortcuts) {
if (!s.disabled)
fingerprints.add(`${s.key.toLowerCase()}:${!!s.ctrl}:${!!s.shift}`);
}
return fingerprints;
}
function _isActive(def: ShortcutDefinition, fingerprints: Set<string>): boolean {
return fingerprints.has(`${def.key.toLowerCase()}:${!!def.ctrl}:${!!def.shift}`);
}
function ShortcutKeyCombo(props: { def: ShortcutDefinition }) {
const { ctrl, shift, alt, key } = props.def;
const parts: string[] = [];
if (ctrl) parts.push(_platformModifier('Ctrl'));
if (shift) parts.push(_platformModifier('Shift'));
if (alt) parts.push(_platformModifier('Alt'));
parts.push(_displayKey(key));
return (
<Box sx={_styles.keys}>
{parts.map((part, i) =>
<Chip key={i} size='sm' variant='soft' color='neutral'>{part}</Chip>,
)}
</Box>
);
}
export function ShortcutsModal(props: { onClose: () => void }) {
// external state
const isMobile = useIsMobile();
const contentScaling = useUIContentScaling();
// build active fingerprints once at render time
const activeFingerprints = React.useMemo(_buildActiveFingerprints, []);
return (
<GoodModal open fullscreen={isMobile} title='Desktop Shortcuts' onClose={props.onClose}>
<Box sx={{ mx: -2 }}>
<ScaledTextBlockRenderer
text={shortcutsMd}
contentScaling={contentScaling}
textRenderVariant='markdown'
/>
<GoodModal open fullscreen={isMobile} title='Keyboard Shortcuts' onClose={props.onClose}>
<Box sx={_styles.grid}>
{shortcutsCatalog.map((category, ci) => (
<React.Fragment key={category.label}>
{ci > 0 && <Divider sx={_styles.categoryDivider} />}
<Typography level='body-xs' textTransform='uppercase' fontWeight='lg' sx={_styles.categoryLabel}>
{category.label}
</Typography>
{category.items.map((item, i) => {
const active = _isActive(item, activeFingerprints);
return (
<Box key={i} sx={_styles.row}>
<ShortcutKeyCombo def={item} />
<Typography level='body-xs' sx={!active ? { opacity: 0.5 } : undefined}>
{item.description}
</Typography>
</Box>
);
})}
</React.Fragment>
))}
</Box>
</GoodModal>
);
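The active-shortcut detection in this file hinges on the `key_lowercase:ctrl:shift` fingerprint. A self-contained sketch of that scheme, assuming only the `ShortcutDefinition` fields visible in the diff:

```typescript
// Minimal stand-in for ShortcutDefinition, with just the fingerprinted fields.
interface ShortcutDef { key: string; ctrl?: boolean; shift?: boolean; disabled?: boolean; }

// Matches the global handler resolution: lowercase key plus boolean modifiers.
function fingerprint(def: ShortcutDef): string {
  return `${def.key.toLowerCase()}:${!!def.ctrl}:${!!def.shift}`;
}

// Collect fingerprints of enabled shortcuts; catalog entries whose fingerprint
// is absent render dimmed in the modal.
function buildActiveFingerprints(shortcuts: ShortcutDef[]): Set<string> {
  const fps = new Set<string>();
  for (const s of shortcuts)
    if (!s.disabled) fps.add(fingerprint(s));
  return fps;
}

const active = buildActiveFingerprints([
  { key: 'B', ctrl: true, shift: true },
  { key: 'm', ctrl: true },
  { key: 'x', ctrl: true, disabled: true }, // disabled: excluded
]);
```

Note the fingerprint deliberately ignores `alt`, mirroring the resolution rule quoted in the JSDoc above.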
+28 -75
@@ -1,106 +1,53 @@
import * as React from 'react';
import { FormControl, Switch, Typography } from '@mui/joy';
import AddAPhotoIcon from '@mui/icons-material/AddAPhoto';
import CodeIcon from '@mui/icons-material/Code';
import { FormControl, Typography } from '@mui/joy';
import EditNoteIcon from '@mui/icons-material/EditNote';
import EngineeringIcon from '@mui/icons-material/Engineering';
import LocalAtmOutlinedIcon from '@mui/icons-material/LocalAtmOutlined';
import ScreenshotMonitorIcon from '@mui/icons-material/ScreenshotMonitor';
import AttachFileRoundedIcon from '@mui/icons-material/AttachFileRounded';
import ShortcutIcon from '@mui/icons-material/Shortcut';
import ImageOutlinedIcon from '@mui/icons-material/ImageOutlined';
import SpeedIcon from '@mui/icons-material/Speed';
import TitleIcon from '@mui/icons-material/Title';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { FormSwitchControl } from '~/common/components/forms/FormSwitchControl';
import { Is } from '~/common/util/pwaUtils';
import { Link } from '~/common/components/Link';
import { useIsMobile } from '~/common/components/useMatchMedia';
import { useUXLabsStore } from '~/common/stores/store-ux-labs';
// uncomment for more settings
export const DEV_MODE_SETTINGS = false;
export function UxLabsSettings() {
// external state
const isMobile = useIsMobile();
const {
labsAttachScreenCapture, setLabsAttachScreenCapture,
labsCameraDesktop, setLabsCameraDesktop,
labsChatBarAlt, setLabsChatBarAlt,
labsEnhanceCodeBlocks, setLabsEnhanceCodeBlocks,
labsHighPerformance, setLabsHighPerformance,
labsShowCost, setLabsShowCost,
labsLosslessImages, setLabsPreserveLosslessImages,
labsAutoHideComposer, setLabsAutoHideComposer,
labsShowShortcutBar, setLabsShowShortcutBar,
labsDevMode, setLabsDevMode,
labsDevNoStreaming, setLabsDevNoStreaming,
labsComposerAttachmentsInline, setLabsComposerAttachmentsInline,
} = useUXLabsStore();
return <>
{/* [DEV MODE] Settings */}
{(Is.Deployment.Localhost || labsDevMode) && (
<FormSwitchControl
title={<><EngineeringIcon color='warning' sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Developer Mode</>} description={labsDevMode ? 'Enabled' : 'Disabled'}
checked={labsDevMode} onChange={setLabsDevMode}
/>
)}
{labsDevMode && (
<FormSwitchControl
title={<><EngineeringIcon color='warning' sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Disable Streaming</>} description={labsDevNoStreaming ? 'Enabled' : 'Disabled'}
checked={labsDevNoStreaming} onChange={setLabsDevNoStreaming}
/>
)}
{/* Non-Graduated Settings */}
<FormSwitchControl
title={<><CodeIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Enhance Legacy Code</>} description={labsEnhanceCodeBlocks ? 'Auto-Enhance' : 'Disabled'}
checked={labsEnhanceCodeBlocks} onChange={setLabsEnhanceCodeBlocks}
title={<><ImageOutlinedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Lossless Images</>} description={labsLosslessImages ? 'Large storage use' : 'Compress'}
tooltipWarning={labsLosslessImages}
tooltip={<>
Preserves the original lossless PNG format for AI-generated images instead of compressing them to WebP/JPEG.
<hr />
WARNING: PNG images can be very large (e.g. 10-20 MB each in the high-quality modes of Gemini Nano Banana models). This will use significantly more storage.
</>}
checked={labsLosslessImages} onChange={setLabsPreserveLosslessImages}
/>
<FormControl orientation='horizontal' sx={{ justifyContent: 'space-between' }}>
<FormLabelStart
title={<><SpeedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Unlock Refresh</>}
description={labsHighPerformance ? 'Unlocked' : 'Default'}
tooltipWarning={labsHighPerformance}
tooltip={<>
Unlocks the maximum UI refresh rate for Chats and Beams, and will draw every single token as it comes in.
<hr />
THIS MAY CAUSE HIGH CPU USAGE, BATTERY DRAIN, AND STUTTERING WITH FAST MODELS.
<hr />
Default: OFF
</>}
/>
<Switch checked={labsHighPerformance} onChange={event => setLabsHighPerformance(event.target.checked)}
endDecorator={labsHighPerformance ? 'On' : 'Off'}
slotProps={{ endDecorator: { sx: { minWidth: 26 } } }} />
</FormControl>
{DEV_MODE_SETTINGS && <FormSwitchControl
title={<><TitleIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Chat Title</>} description={labsChatBarAlt === 'title' ? 'Show Title' : 'Show Models'}
checked={labsChatBarAlt === 'title'} onChange={(on) => setLabsChatBarAlt(on ? 'title' : false)}
/>}
{!isMobile && <FormSwitchControl
title={<><ScreenshotMonitorIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} /> Screen Capture</>} description={labsAttachScreenCapture ? 'Enabled' : 'Disabled'}
checked={labsAttachScreenCapture} onChange={setLabsAttachScreenCapture}
/>}
{!isMobile && <FormSwitchControl
title={<><AddAPhotoIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} /> Webcam Capture</>} description={/*'v1.8 · ' +*/ (labsCameraDesktop ? 'Enabled' : 'Disabled')}
checked={labsCameraDesktop} onChange={setLabsCameraDesktop}
/>}
<FormSwitchControl
title={<><LocalAtmOutlinedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Cost of messages</>} description={labsShowCost ? 'Show when available' : 'Disabled'}
checked={labsShowCost} onChange={setLabsShowCost}
title={<><SpeedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Unlock Refresh</>} description={labsHighPerformance ? 'Unlocked' : 'Default'}
tooltipWarning={labsHighPerformance}
tooltip={<>
Unlocks the maximum UI refresh rate for Chats and Beams, and will draw every single token as it comes in.
<hr />
THIS MAY CAUSE HIGH CPU USAGE, BATTERY DRAIN, AND STUTTERING WITH FAST MODELS.
</>}
checked={labsHighPerformance} onChange={setLabsHighPerformance}
/>
{!isMobile && <FormSwitchControl
@@ -108,6 +55,11 @@ export function UxLabsSettings() {
checked={labsShowShortcutBar} onChange={setLabsShowShortcutBar}
/>}
<FormSwitchControl
title={<><AttachFileRoundedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Attachment Buttons</>} description={labsComposerAttachmentsInline ? 'Enabled' : 'Disabled'}
checked={labsComposerAttachmentsInline} onChange={setLabsComposerAttachmentsInline}
/>
<FormSwitchControl
title={<><EditNoteIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Auto-hide input</>} description={labsAutoHideComposer ? 'Hover to show' : 'Always visible'}
checked={labsAutoHideComposer} onChange={setLabsAutoHideComposer}
@@ -123,7 +75,8 @@ export function UxLabsSettings() {
<FormControl orientation='horizontal' sx={{ justifyContent: 'space-between', alignItems: 'center' }}>
<FormLabelStart title='Graduated' description='Ex-labs' />
<Typography level='body-xs'>
<Link href='https://big-agi.com/blog/beam-multi-model-ai-reasoning' target='_blank'>Beam</Link>
Screen Capture · Webcam · Cost Estimation · Enhanced Code Blocks
{' · '}<Link href='https://big-agi.com/blog/beam-multi-model-ai-reasoning' target='_blank'>Beam</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/208' target='_blank'>Split Chats</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/354' target='_blank'>Call AGI</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/282' target='_blank'>Persona Creator</Link>
+5 -5
@@ -8,12 +8,12 @@
*/
export const Brand = {
Title: {
Base: 'big-AGI',
Common: (process.env.NODE_ENV === 'development' ? '[DEV] ' : '') + 'big-AGI',
Base: 'Big-AGI',
Common: (process.env.NODE_ENV === 'development' ? '[DEV] ' : '') + 'Big-AGI',
},
Meta: {
Description: 'Launch big-AGI to unlock the full potential of AI, with precise control over your data and models. Voice interface, AI personas, advanced features, and fun UX.',
SiteName: 'big-AGI | Precision AI for You',
Description: 'Launch the open-source AI workspace for experts. BYO API keys. Compare and tune models, use personas, voice and vision - your data stays local.',
SiteName: 'Big-AGI | AI for power-users',
ThemeColor: '#32383E',
TwitterSite: '@enricoros',
},
@@ -24,7 +24,7 @@ export const Brand = {
OpenRepo: 'https://github.com/enricoros/big-agi',
OpenProject: 'https://github.com/users/enricoros/projects/4',
SupportInvite: 'https://discord.gg/MkH4qj2Jp9',
// Twitter: 'https://www.twitter.com/enricoros',
// Twitter: 'https://x.com/enricoros',
PrivacyPolicy: 'https://big-agi.com/privacy',
TermsOfService: 'https://big-agi.com/terms',
},
+2 -2
@@ -23,8 +23,8 @@ export const Release = {
// this is here to trigger revalidation of data, e.g. models refresh
Monotonics: {
Aix: 54,
NewsVersion: 203,
Aix: 62,
NewsVersion: 204,
},
// Frontend: pretty features
@@ -27,7 +27,7 @@ import { LiveFileIcon } from '~/common/livefile/liveFile.icons';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { ellipsizeFront, ellipsizeMiddle } from '~/common/util/textUtils';
import type { LLMAttachmentDraft } from './useLLMAttachmentDrafts';
import type { IAttachmentEnrichment } from '../llm-enrichment/attachment.enrichment';
const ATTACHMENT_MIN_STYLE = {
@@ -120,7 +120,7 @@ const converterTypeToIconMap: { [key in AttachmentDraftConverterType]: React.Com
'unhandled': TextureIcon,
};
function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean, onViewImageRefPart: (imageRefPart: DMessageImageRefPart) => void) {
function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean, onViewImageRefPart?: (imageRefPart: DMessageImageRefPart) => void) {
const activeConverters = attachmentDraft.converters.filter(c => c.isActive);
if (activeConverters.length === 0)
return null;
@@ -139,7 +139,7 @@ function attachmentIcons(attachmentDraft: AttachmentDraft, noTooltips: boolean,
outputSingleImageRefDBlobs = [fragment.part.dataRef];
}
const handleViewFirstImage = (e: React.MouseEvent) => {
const handleViewFirstImage = !onViewImageRefPart ? undefined : (e: React.MouseEvent) => {
e.preventDefault();
e.stopPropagation();
const fragment = attachmentDraft.outputFragments[0];
@@ -224,17 +224,19 @@ function attachmentLabelText(attachmentDraft: AttachmentDraft): string {
}
export const LLMAttachmentButtonMemo = React.memo(LLMAttachmentButton);
export const AttachmentDraftButtonMemo = React.memo(AttachmentDraftButton);
function LLMAttachmentButton(props: {
llmAttachment: LLMAttachmentDraft,
function AttachmentDraftButton(props: {
draft: AttachmentDraft,
enrichment?: IAttachmentEnrichment,
menuShown: boolean,
onToggleMenu: (attachmentDraftId: AttachmentDraftId, anchor: HTMLAnchorElement) => void,
onViewImageRefPart: (imageRefPart: DMessageImageRefPart) => void,
onViewImageRefPart?: (imageRefPart: DMessageImageRefPart) => void,
}) {
// derived state
const { attachmentDraft: draft, llmSupportsAllFragments } = props.llmAttachment;
const { draft, enrichment } = props;
const llmSupportsAllFragments = enrichment?.isCompatible(draft) ?? true;
const isInputLoading = draft.inputLoading;
const isInputError = !!draft.inputError;
@@ -21,10 +21,9 @@ import { humanReadableBytes } from '~/common/util/textUtils';
import { themeZIndexOverMobileDrawer } from '~/common/app.theme';
import { useUIPreferencesStore } from '~/common/stores/store-ui';
import type { AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import type { AttachmentDraftsStoreApi } from '~/common/attachment-drafts/store-attachment-drafts_slice';
import type { LLMAttachmentDraft } from './useLLMAttachmentDrafts';
import type { LLMAttachmentDraftsAction } from './LLMAttachmentsList';
import type { AttachmentDraft, AttachmentDraftId, AttachmentDraftsAction } from '../attachment.types';
import type { AttachmentDraftsStoreApi } from '../store-attachment-drafts_slice';
import type { IAttachmentEnrichment } from '../llm-enrichment/attachment.enrichment';
// configuration
@@ -49,16 +48,17 @@ const actionButtonsSx: SxProps = {
};
export function LLMAttachmentMenu(props: {
export function AttachmentDraftMenu(props: {
attachmentDraftsStoreApi: AttachmentDraftsStoreApi,
llmAttachmentDraft: LLMAttachmentDraft,
draft: AttachmentDraft,
enrichment?: IAttachmentEnrichment,
menuAnchor: HTMLAnchorElement,
isPositionFirst: boolean,
isPositionLast: boolean,
onClose: () => void,
onDraftAction?: (attachmentDraftId: AttachmentDraftId, actionId: LLMAttachmentDraftsAction) => void,
onViewDocPart: (docPart: DMessageDocPart) => void,
onViewImageRefPart: (imageRefPart: DMessageImageRefPart) => void
onDraftAction?: (attachmentDraftId: AttachmentDraftId, actionId: AttachmentDraftsAction) => void,
onViewDocPart?: (docPart: DMessageDocPart) => void,
onViewImageRefPart?: (imageRefPart: DMessageImageRefPart) => void
}) {
// state
@@ -72,12 +72,10 @@ export function LLMAttachmentMenu(props: {
const isUnmoveable = props.isPositionFirst && props.isPositionLast;
const {
attachmentDraft: draft,
llmSupportsAllFragments,
llmSupportsTextFragments,
llmTokenCountApprox,
} = props.llmAttachmentDraft;
const { draft, enrichment } = props;
const llmSupportsAllFragments = enrichment?.isCompatible(draft) ?? true;
const llmSupportsTextFragments = enrichment?.supportsTextInline(draft) ?? false;
const llmTokenCountApprox = enrichment?.estimateTokens(draft) ?? null;
const {
id: draftId,
@@ -145,13 +143,13 @@ export function LLMAttachmentMenu(props: {
const handleViewImageRefPart = React.useCallback((event: React.MouseEvent, imageRefPart: DMessageImageRefPart) => {
event.preventDefault();
event.stopPropagation();
onViewImageRefPart(imageRefPart);
onViewImageRefPart?.(imageRefPart);
}, [onViewImageRefPart]);
const handleViewDocPart = React.useCallback((event: React.MouseEvent, docPart: DMessageDocPart) => {
event.preventDefault();
event.stopPropagation();
onViewDocPart(docPart);
onViewDocPart?.(docPart);
}, [onViewDocPart]);
const canHaveDetails = !!draftInput && !isConverting;
@@ -344,7 +342,7 @@ export function LLMAttachmentMenu(props: {
<Typography level='body-sm' textColor='success.softColor' sx={{ display: 'flex', alignItems: 'center' }}>
Input: {draftInput.urlImage.mimeType} · {draftInput.urlImage.width}x{draftInput.urlImage.height}{!draftInput.urlImage.imgDataUrl?.length ? '' : ` · ${humanReadableBytes(draftInput.urlImage.imgDataUrl.length)}`}
&nbsp;
<Chip component='span' size='sm' color='success' variant='soft' startDecorator={<VisibilityIcon />} onClick={(event) => {
{!!onViewImageRefPart && <Chip component='span' size='sm' color='success' variant='soft' startDecorator={<VisibilityIcon />} onClick={(event) => {
if (draftInput?.urlImage?.imgDataUrl) {
// Invoke the viewer but with a virtual 'temp' part description to see this preview image
handleViewImageRefPart(event, {
@@ -360,7 +358,7 @@ export function LLMAttachmentMenu(props: {
}
}} sx={{ ml: 'auto' }}>
view input
</Chip>
</Chip>}
</Typography>
)}
@@ -390,9 +388,9 @@ export function LLMAttachmentMenu(props: {
{/* copy*/}
{/*</Chip>*/}
<ButtonGroup size='sm' color='primary' variant='outlined' sx={actionButtonsSx}>
<Button startDecorator={<VisibilityIcon sx={{ fontSize: 'md' }} />} onClick={(event) => handleViewDocPart(event, part)}>
{!!onViewDocPart && <Button startDecorator={<VisibilityIcon sx={{ fontSize: 'md' }} />} onClick={(event) => handleViewDocPart(event, part)}>
view
</Button>
</Button>}
<Button onClick={(event) => handleCopyToClipboard(event, part.data.text)}/* endDecorator={<ContentCopyIcon />} */>
copy
</Button>
@@ -419,12 +417,12 @@ export function LLMAttachmentMenu(props: {
{/* del*/}
{/*</Chip>}*/}
<ButtonGroup size='sm' color='primary' variant='outlined' sx={actionButtonsSx}>
<Button
{!!onViewImageRefPart && <Button
startDecorator={<VisibilityIcon sx={{ fontSize: 'md' }} />}
onClick={(event) => handleViewImageRefPart(event, legacyImageRefPart)}
>
view
</Button>
</Button>}
{isOutputMultiple && (
<Button
color='warning'
@@ -1,32 +1,22 @@
import * as React from 'react';
import { Box, CircularProgress, IconButton, ListDivider, ListItemDecorator, MenuItem } from '@mui/joy';
import AutoFixHighIcon from '@mui/icons-material/AutoFixHigh';
import { Box, IconButton, ListDivider, ListItemDecorator, MenuItem } from '@mui/joy';
import ClearIcon from '@mui/icons-material/Clear';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import VerticalAlignBottomIcon from '@mui/icons-material/VerticalAlignBottom';
import type { AgiAttachmentPromptsData } from '~/modules/aifn/agiattachmentprompts/useAgiAttachmentPrompts';
import type { DMessageDocPart, DMessageImageRefPart } from '~/common/stores/chat/chat.fragments';
import { CloseablePopup } from '~/common/components/CloseablePopup';
import { ConfirmationModal } from '~/common/components/modals/ConfirmationModal';
import { useOverlayComponents } from '~/common/layout/overlays/useOverlayComponents';
import type { AttachmentDraftId } from '~/common/attachment-drafts/attachment.types';
import type { AttachmentDraftsStoreApi } from '~/common/attachment-drafts/store-attachment-drafts_slice';
import type { DMessageDocPart, DMessageImageRefPart } from '~/common/stores/chat/chat.fragments';
import type { AttachmentDraft, AttachmentDraftId, AttachmentDraftsAction } from '../attachment.types';
import type { AttachmentDraftsStoreApi } from '../store-attachment-drafts_slice';
import type { AttachmentEnrichmentSummary, IAttachmentEnrichment } from '../llm-enrichment/attachment.enrichment';
import { ViewImageRefPartModal } from '../../message/fragments-content/ViewImageRefPartModal';
import type { LLMAttachmentDraft } from './useLLMAttachmentDrafts';
import { LLMAttachmentButtonMemo } from './LLMAttachmentButton';
import { LLMAttachmentMenu } from './LLMAttachmentMenu';
import { LLMAttachmentsPromptsButtonMemo } from './LLMAttachmentsPromptsButton';
import { ViewDocPartModal } from '../../message/fragments-content/ViewDocPartModal';
export type LLMAttachmentDraftsAction = 'inline-text' | 'copy-text';
import { AttachmentDraftButtonMemo } from './AttachmentDraftButton';
import { AttachmentDraftMenu } from './AttachmentDraftMenu';
const _style = {
@@ -62,15 +52,21 @@ const _style = {
/**
* Renderer of attachment drafts, with menus, etc.
* Generic renderer of attachment drafts, with menus, etc.
* Portable across Composer, ChatMessage edit, FollowUps, etc.
*/
export function LLMAttachmentsList(props: {
agiAttachmentPrompts?: AgiAttachmentPromptsData,
export function AttachmentDraftsList(props: {
attachmentDraftsStoreApi: AttachmentDraftsStoreApi,
canInlineSomeFragments: boolean,
llmAttachmentDrafts: LLMAttachmentDraft[],
onAttachmentDraftsAction?: (attachmentDraftId: AttachmentDraftId | null, actionId: LLMAttachmentDraftsAction) => void,
attachmentDrafts: AttachmentDraft[],
enrichment?: IAttachmentEnrichment,
enrichmentSummary?: AttachmentEnrichmentSummary,
buttonsCanWrap?: boolean,
onAttachmentDraftsAction?: (attachmentDraftId: AttachmentDraftId | null, actionId: AttachmentDraftsAction) => void,
// optional rendering props
startDecorator?: React.ReactNode,
renderDocViewer?: (docPart: DMessageDocPart, onClose: () => void) => React.ReactNode,
renderImageViewer?: (imageRefPart: DMessageImageRefPart, onClose: () => void) => React.ReactNode,
renderOverallMenuExtra?: () => React.ReactNode,
}) {
// state
@@ -82,15 +78,20 @@ export function LLMAttachmentsList(props: {
// derived state
const { agiAttachmentPrompts, canInlineSomeFragments, llmAttachmentDrafts } = props;
const hasAttachments = llmAttachmentDrafts.length >= 1;
const { attachmentDrafts, enrichmentSummary } = props;
const canInlineSomeFragments = enrichmentSummary?.anyInlinable ?? false;
const hasAttachments = attachmentDrafts.length >= 1;
// ref to optimize
const attachmentDraftsRef = React.useRef(attachmentDrafts);
attachmentDraftsRef.current = attachmentDrafts;
// derived item menu state
const itemMenuAnchor = draftMenu?.anchor;
const itemMenuAttachmentDraftId = draftMenu?.attachmentDraftId;
const itemMenuAttachmentDraft = itemMenuAttachmentDraftId ? llmAttachmentDrafts.find(la => la.attachmentDraft.id === draftMenu.attachmentDraftId) : undefined;
const itemMenuIndex = itemMenuAttachmentDraft ? llmAttachmentDrafts.indexOf(itemMenuAttachmentDraft) : -1;
const itemMenuAttachmentDraft = itemMenuAttachmentDraftId ? attachmentDrafts.find(a => a.id === draftMenu.attachmentDraftId) : undefined;
const itemMenuIndex = itemMenuAttachmentDraft ? attachmentDrafts.indexOf(itemMenuAttachmentDraft) : -1;
// overall menu
@@ -100,10 +101,10 @@ export function LLMAttachmentsList(props: {
const handleOverallMenuHide = React.useCallback(() => setOverallMenuAnchor(null), []);
const handleOverallMenuToggle = React.useCallback((event: React.MouseEvent<HTMLAnchorElement>) => {
event.shiftKey && console.log('llmAttachmentDrafts', llmAttachmentDrafts);
event.shiftKey && console.log('llmAttachmentDrafts', attachmentDraftsRef.current);
event.preventDefault(); // added for the Right mouse click (to prevent the menu)
setOverallMenuAnchor(anchor => anchor ? null : event.currentTarget);
}, [llmAttachmentDrafts]);
}, []);
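The hunk above stabilizes a callback by reading from a ref instead of listing `attachmentDrafts` as a dependency. The trick can be shown without React, using a plain mutable object as a stand-in for `React.useRef` (all names here are illustrative):

```typescript
// Plain-object stand-in for a React ref.
interface Ref<T> { current: T; }

// The returned callback never changes identity, yet always reads fresh data,
// because it dereferences the ref at call time rather than capturing a snapshot.
function makeDraftCounter(ref: Ref<string[]>): () => number {
  return () => ref.current.length;
}

const draftsRef: Ref<string[]> = { current: [] };
const countDrafts = makeDraftCounter(draftsRef);

// "re-render": update the ref; the callback stays the same function object
draftsRef.current = ['a', 'b'];
```

In the component this keeps `handleOverallMenuToggle` and `handleOverallClear` out of dependency-driven re-creation while still logging and counting the current drafts.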
const handleOverallCopyText = React.useCallback(() => {
handleOverallMenuHide();
@@ -121,13 +122,13 @@ export function LLMAttachmentsList(props: {
open onClose={onUserReject} onPositive={() => onResolve(true)}
title='Confirm Removal'
positiveActionText='Remove All'
confirmationText={`This action will remove all (${llmAttachmentDrafts.length}) attachments. Do you want to proceed?`}
confirmationText={`This action will remove all (${attachmentDraftsRef.current.length}) attachments. Do you want to proceed?`}
/>,
)) {
handleOverallMenuHide();
props.attachmentDraftsStoreApi.getState().removeAllAttachmentDrafts();
}
}, [handleOverallMenuHide, llmAttachmentDrafts.length, props.attachmentDraftsStoreApi, showPromisedOverlay]);
}, [handleOverallMenuHide, props.attachmentDraftsStoreApi, showPromisedOverlay]);
// item menu
@@ -139,7 +140,7 @@ export function LLMAttachmentsList(props: {
setDraftMenu(prev => prev?.attachmentDraftId === attachmentDraftId ? null : { anchor, attachmentDraftId });
}, [handleOverallMenuHide]);
const handleDraftAction = React.useCallback((attachmentDraftId: AttachmentDraftId, actionId: LLMAttachmentDraftsAction) => {
const handleDraftAction = React.useCallback((attachmentDraftId: AttachmentDraftId, actionId: AttachmentDraftsAction) => {
// pass-through, but close the menu as well, as the action is destructive for the caller
handleDraftMenuHide();
onAttachmentDraftsAction?.(attachmentDraftId, actionId);
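Throughout this refactor, LLM-specific knowledge moves behind an optional `enrichment` object, with permissive or neutral fallbacks when it is absent. A sketch of that pattern, with the interface shape inferred from its call sites in the diff (method names beyond those call sites are assumptions):

```typescript
interface Draft { id: string; }

// Hypothetical minimal shape of IAttachmentEnrichment, inferred from
// `enrichment?.isCompatible(...)`, `?.supportsTextInline(...)`, `?.estimateTokens(...)` above.
interface AttachmentEnrichment {
  isCompatible(draft: Draft): boolean;
  supportsTextInline(draft: Draft): boolean;
  estimateTokens(draft: Draft): number;
}

// Without an enrichment, fall back to safe defaults via `?.` and `??`,
// mirroring `enrichment?.isCompatible(draft) ?? true` in the diff.
function deriveCapabilities(draft: Draft, enrichment?: AttachmentEnrichment) {
  return {
    supportsAllFragments: enrichment?.isCompatible(draft) ?? true,
    supportsTextFragments: enrichment?.supportsTextInline(draft) ?? false,
    tokenCountApprox: enrichment?.estimateTokens(draft) ?? null,
  };
}
```

This is what lets the renamed `AttachmentDraftsList`/`AttachmentDraftMenu` stay portable across Composer, ChatMessage edit, and FollowUps: callers that know about LLMs inject an enrichment, others omit it.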
@@ -174,19 +175,18 @@ export function LLMAttachmentsList(props: {
{/* Horizontally scrollable */}
<Box sx={!props.buttonsCanWrap ? _style.barScrollX : _style.barWraps}>
{/* AI Suggestion Button */}
{(!!agiAttachmentPrompts && (agiAttachmentPrompts.isVisible || agiAttachmentPrompts.hasData)) && (
<LLMAttachmentsPromptsButtonMemo data={agiAttachmentPrompts} />
)}
{/* Slot: before buttons (e.g. AI Suggestion Button) */}
{props.startDecorator}
{/* Attachment Buttons */}
{llmAttachmentDrafts.map((llmAttachment) =>
<LLMAttachmentButtonMemo
key={llmAttachment.attachmentDraft.id}
llmAttachment={llmAttachment}
menuShown={llmAttachment.attachmentDraft.id === itemMenuAttachmentDraftId}
{attachmentDrafts.map((draft) =>
<AttachmentDraftButtonMemo
key={draft.id}
draft={draft}
enrichment={props.enrichment}
menuShown={draft.id === itemMenuAttachmentDraftId}
onToggleMenu={handleDraftMenuToggle}
onViewImageRefPart={handleViewImageRefPart}
onViewImageRefPart={!props.renderImageViewer ? undefined : handleViewImageRefPart}
/>,
)}
@@ -207,28 +207,25 @@ export function LLMAttachmentsList(props: {
{/* Image Viewer Modal - when opening attachment images */}
{!!viewerImageRefPart && (
<ViewImageRefPartModal imageRefPart={viewerImageRefPart} onClose={handleCloseImageViewer} />
)}
{!!viewerImageRefPart && props.renderImageViewer?.(viewerImageRefPart, handleCloseImageViewer)}
{/* Text Viewer Modal */}
{!!viewerDocPart && (
<ViewDocPartModal docPart={viewerDocPart} onClose={handleCloseDocPartViewer} />
)}
{!!viewerDocPart && props.renderDocViewer?.(viewerDocPart, handleCloseDocPartViewer)}
{/* Single LLM Attachment Draft Menu */}
{/* Single Attachment Draft Menu */}
{!!itemMenuAnchor && !!itemMenuAttachmentDraft && !!props.attachmentDraftsStoreApi && (
<LLMAttachmentMenu
<AttachmentDraftMenu
attachmentDraftsStoreApi={props.attachmentDraftsStoreApi}
llmAttachmentDraft={itemMenuAttachmentDraft}
draft={itemMenuAttachmentDraft}
enrichment={props.enrichment}
menuAnchor={itemMenuAnchor}
isPositionFirst={itemMenuIndex === 0}
isPositionLast={itemMenuIndex === llmAttachmentDrafts.length - 1}
isPositionLast={itemMenuIndex === attachmentDrafts.length - 1}
onClose={handleDraftMenuHide}
onDraftAction={!onAttachmentDraftsAction ? undefined : handleDraftAction}
onViewDocPart={handleViewDocPart}
onViewImageRefPart={handleViewImageRefPart}
onViewDocPart={!props.renderDocViewer ? undefined : handleViewDocPart}
onViewImageRefPart={!props.renderImageViewer ? undefined : handleViewImageRefPart}
/>
)}
@@ -241,14 +238,8 @@ export function LLMAttachmentsList(props: {
minWidth={200}
placement='top-start'
>
{/* uses the agiAttachmentPrompts to imagine what the user will ask about those attachments */}
{!!agiAttachmentPrompts && (
<MenuItem color='primary' variant='soft' onClick={agiAttachmentPrompts.refetch} disabled={!hasAttachments || agiAttachmentPrompts.isFetching}>
<ListItemDecorator>{agiAttachmentPrompts.isFetching ? <CircularProgress size='sm' /> : <AutoFixHighIcon />}</ListItemDecorator>
What can I do?
</MenuItem>
)}
{!!agiAttachmentPrompts && <ListDivider />}
{/* Slot: extra overall menu items (e.g. "What can I do?") */}
{props.renderOverallMenuExtra?.()}
{!!onAttachmentDraftsAction && <MenuItem onClick={handleOverallInlineText} disabled={!canInlineSomeFragments}>
<ListItemDecorator><VerticalAlignBottomIcon /></ListItemDecorator>
@@ -262,10 +253,10 @@ export function LLMAttachmentsList(props: {
<MenuItem onClick={handleOverallClear}>
<ListItemDecorator><ClearIcon /></ListItemDecorator>
Remove All{llmAttachmentDrafts.length > 5 ? <span style={{ opacity: 0.5 }}> {llmAttachmentDrafts.length} attachments</span> : null}
Remove All{attachmentDrafts.length > 5 ? <span style={{ opacity: 0.5 }}> {attachmentDrafts.length} attachments</span> : null}
</MenuItem>
</CloseablePopup>
)}
</>;
}
}
@@ -0,0 +1,551 @@
import * as React from 'react';
import { keyframes } from '@emotion/react';
import type { FileWithHandle } from 'browser-fs-access';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Button, Checkbox, ColorPaletteProp, Dropdown, IconButton, ListDivider, ListItem, ListItemDecorator, Menu, MenuButton, MenuItem } from '@mui/joy';
import AddRoundedIcon from '@mui/icons-material/AddRounded';
import AddToDriveRoundedIcon from '@mui/icons-material/AddToDriveRounded';
import AttachFileRoundedIcon from '@mui/icons-material/AttachFileRounded';
import CameraAltOutlinedIcon from '@mui/icons-material/CameraAltOutlined';
import ContentPasteGoIcon from '@mui/icons-material/ContentPasteGo';
import FiberManualRecordIcon from '@mui/icons-material/FiberManualRecord';
import LanguageRoundedIcon from '@mui/icons-material/LanguageRounded';
import ScreenshotMonitorIcon from '@mui/icons-material/ScreenshotMonitor';
import { useBrowseStore } from '~/modules/browse/store-module-browsing';
import { ButtonAttachFilesMemo, openFileForAttaching } from '~/common/components/ButtonAttachFiles';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { supportsClipboardRead } from '~/common/util/clipboardUtils';
import { takeScreenCapture } from '~/common/util/screenCaptureUtils';
import { themeZIndexOverMobileDrawer } from '~/common/app.theme';
import { ButtonAttachCameraMemo } from './ButtonAttachCamera';
import { ButtonAttachClipboardMemo } from './ButtonAttachClipboard';
import { ButtonAttachGoogleDriveMemo } from './ButtonAttachGoogleDrive';
import { ButtonAttachScreenCaptureMemo } from './ButtonAttachScreenCapture';
import { ButtonAttachWebMemo } from './ButtonAttachWeb';
import { hasGoogleDriveCapability } from './useGoogleDrivePicker';
// configuration
export const ATTACH_BUTTON_RADIUS = '18px'; // for the rich (non-compact) menu button
// animations for the rich (non-compact) menu
const animationMenu = keyframes` from {opacity: 0;} to {opacity: 1;}`;
const animationMenuItem = keyframes` from {opacity: 0;transform: translateY(-6px);} to {opacity: 1;transform: translateY(0);}`;
const _style = {
menuItem: {
// pl: 3,
// pr: 2,
py: 0.5, // was 1
minHeight: 60,
// minHeight: '3.25rem', // now 52, was 60
},
menuItemContent: {
display: 'flex',
flexDirection: 'column',
gap: 0.125,
},
menuItemContentDisabled: {
display: 'flex',
flexDirection: 'column',
gap: 0.125,
opacity: 0.5,
},
menuItemName: {
typography: 'title-sm',
fontWeight: 600,
// fontSize: '15px',
},
menuItemDescription: {
fontSize: 'xs',
color: 'text.tertiary',
// fontWeight: 400,
},
liveFeedButton: {
ml: 1,
// outline: '1px solid transparent',
// '&:hover': {
// outlineColor: 'currentColor',
// },
},
} as const satisfies Record<string, SxProps>;
// Live feed record button - rendered as a red-dot end action inside a menu item
function LiveFeedButton(props: { isActive: boolean, tooltip: string, onClick: () => void }) {
return (
<TooltipOutlined title={props.tooltip} placement='top'>
<IconButton
size='sm'
variant={props.isActive ? 'solid' : 'outlined'}
color='danger'
onClick={(e) => {
e.stopPropagation();
props.onClick();
}}
sx={_style.liveFeedButton}
>
<FiberManualRecordIcon sx={{ fontSize: 16 }} />
{/*{props.isActive ? <AddRoundedIcon sx={{ fontSize: 18 }} /> : <FiberManualRecordIcon sx={{ fontSize: 16 }} />}*/}
</IconButton>
</TooltipOutlined>
);
}
// Rich menu item (used in menu-rich mode)
function RichMenuItem(props: {
name: React.ReactNode;
description: React.ReactNode;
Icon: React.ComponentType;
onClick: () => void;
delay?: number;
disabled?: boolean;
color?: ColorPaletteProp;
endAction?: React.ReactNode;
}) {
return (
<MenuItem
onClick={props.onClick}
disabled={props.disabled}
color={props.color}
sx={!props.delay ? _style.menuItem : {
..._style.menuItem,
animation: `${animationMenuItem} 0.12s cubic-bezier(0.25, 0.46, 0.45, 0.94) ${props.delay}s both`,
}}
>
<ListItemDecorator>
<props.Icon />
</ListItemDecorator>
<Box sx={props.disabled ? _style.menuItemContentDisabled : _style.menuItemContent}>
<Box sx={_style.menuItemName}>
{props.name}
</Box>
<Box sx={_style.menuItemDescription}>
{props.description}
</Box>
</Box>
{props.endAction && (
<Box sx={{ ml: 'auto', display: 'flex', alignItems: 'center' }}>
{props.endAction}
</Box>
)}
</MenuItem>
);
}
// Auto-download toggle (shown when browsing capability exists)
function AutoDownloadToggle(props: { delay?: number }) {
// external state
const enableComposerAttach = useBrowseStore(s => s.enableComposerAttach);
const handleToggle = React.useCallback((event: React.ChangeEvent<HTMLInputElement>) => {
event.stopPropagation();
useBrowseStore.getState().setEnableComposerAttach(event.target.checked);
}, []);
return <>
<ListDivider inset='gutter' sx={{ my: 1 }} />
<ListItem
sx={{
..._style.menuItem,
animation: `${animationMenuItem} 0.12s cubic-bezier(0.25, 0.46, 0.45, 0.94) ${props.delay}s both`,
}}
// onClick={(event) => {
// event.preventDefault();
// event.stopPropagation();
// setEnableComposerAttach(!enableComposerAttach);
// }}
>
<ListItemDecorator>
<Checkbox
size='sm'
color='neutral'
checked={enableComposerAttach}
onChange={handleToggle}
onClick={(event) => event.stopPropagation()}
sx={{ ml: 0.375 }}
/>
</ListItemDecorator>
<Box sx={_style.menuItemContent}>
<Box sx={{ typography: 'title-sm' }}>
Attach pasted URLs
</Box>
<Box sx={_style.menuItemDescription}>
Download and attach pasted web links
</Box>
</Box>
</ListItem>
</>;
}
/**
* Portable attachment sources component.
*
* Three modes:
* - **menu-compact**: Mobile-style - icon trigger, simple MenuItems (no descriptions/animations)
* - **menu-rich**: Desktop-style - labeled button trigger, rich items with descriptions and animations
* - **inline-buttons**: Individual source buttons rendered inline (no dropdown)
*/
export const AttachmentSourcesMemo = React.memo(AttachmentSources);
function AttachmentSources(props: {
// mode
mode: 'menu-compact' | 'menu-rich' | 'inline-buttons' | 'menu-message',
color?: ColorPaletteProp, // menu-rich and inline-buttons
richButtonStandOut?: boolean, // menu-rich only
menuButton?: React.ReactNode, // custom MenuButton trigger for menu-compact/menu-message modes
// source availability - note that hasGoogleDriveCapability is local
canBrowse: boolean, // whether browsing is available (for Web button and showing the auto-attach toggle)
hasCamera: boolean,
// hasGoogleDrive: boolean, // it's now local: hasGoogleDriveCapability
hasScreenCapture: boolean,
// configuration
onlyImages?: boolean, // makes clipboard/drive/web unavailable
// callbacks
onAttachClipboard: () => void,
onAttachFiles: (files: FileWithHandle[], errorMessage: string | null) => void,
onAttachScreenCapture: (file: File) => void,
onOpenCamera: () => void,
onOpenGoogleDrivePicker?: () => void, // optional because requires additional external setup (e.g. user-storage of tokens)
onOpenWebInput: () => void,
// live feeds - end action buttons (shown when the callback is set; active when the boolean is true)
hasActiveCameraFeed?: boolean,
hasActiveScreenFeed?: boolean,
onStartLiveCameraFeed?: () => void,
onStartLiveScreenFeed?: () => void,
}) {
// state (screen capture - used in menu modes where the component handles the capture)
const [capturingScreen, setCapturingScreen] = React.useState(false);
const [screenCaptureError, setScreenCaptureError] = React.useState<string | null>(null);
// handlers
const { onAttachFiles, onAttachScreenCapture } = props;
const handleAttachFilePicker = React.useCallback(() => {
return openFileForAttaching(true, onAttachFiles);
}, [onAttachFiles]);
const handleTakeScreenCapture = React.useCallback(async () => {
setScreenCaptureError(null);
setCapturingScreen(true);
try {
const file = await takeScreenCapture();
file && onAttachScreenCapture(file);
} catch (error: any) {
const message = error instanceof Error ? error.message : String(error);
setScreenCaptureError(message);
}
setCapturingScreen(false);
}, [onAttachScreenCapture]);
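The `catch` branch above normalizes unknown throwables before storing them in state. A minimal standalone sketch of that normalization (the function name is illustrative, not from the codebase):

```typescript
// Prefer Error.message for real Error instances; otherwise coerce with
// String(), so non-Error throwables (strings, numbers) still yield text.
function errorToMessage(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}

console.log(errorToMessage(new Error('Permission denied'))); // 'Permission denied'
console.log(errorToMessage('raw string'));                   // 'raw string'
console.log(errorToMessage(42));                             // '42'
```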
// inline-buttons mode - individual buttons rendered flat (no dropdown)
if (props.mode === 'inline-buttons')
return <>
{/* Files */}
<ButtonAttachFilesMemo color={props.color} onAttachFiles={props.onAttachFiles} /*fullWidth*/ multiple />
{/* Web */}
{!props.onlyImages && <ButtonAttachWebMemo color={props.color} disabled={!props.canBrowse} onOpenWebInput={props.onOpenWebInput} />}
{/* Google Drive */}
{hasGoogleDriveCapability && !props.onlyImages && !!props.onOpenGoogleDrivePicker && (
<ButtonAttachGoogleDriveMemo color={props.color} onOpenGoogleDrivePicker={props.onOpenGoogleDrivePicker} />
)}
{/* Clipboard */}
{supportsClipboardRead() && !props.onlyImages && (
<ButtonAttachClipboardMemo color={props.color} onAttachClipboard={props.onAttachClipboard} />
)}
{/* Screen Capture */}
{props.hasScreenCapture && (
<ButtonAttachScreenCaptureMemo color={props.color} onAttachScreenCapture={props.onAttachScreenCapture} />
)}
{/* Camera */}
{props.hasCamera && (
<ButtonAttachCameraMemo color={props.color} onOpenCamera={props.onOpenCamera} />
)}
</>;
// menu-compact mode (mobile) - simple icon trigger with flat menu items
if (props.mode === 'menu-compact' || props.mode === 'menu-message') {
const isMessage = props.mode === 'menu-message';
return <>
<Dropdown>
{props.menuButton ? props.menuButton : !isMessage ? (
<MenuButton slots={{ root: IconButton }}>
<AddRoundedIcon />
</MenuButton>
) : (
<MenuButton slots={{ root: Button }} slotProps={{
root: {
size: 'sm',
variant: 'soft',
color: 'warning',
startDecorator: <AddRoundedIcon />,
sx: { minHeight: '2.25rem', m: -0.25 /* absorb parent's padding */ },
},
} as const}>
Attach
</MenuButton>
)}
<Menu sx={{ '--List-padding': '0.5rem', zIndex: themeZIndexOverMobileDrawer /* menu-compact or menu-message: above dialogs */ }}>
{/* Files */}
{/*<MenuItem onClick={handleAttachFilePicker}>*/}
{/* <ListItemDecorator><AttachFileRoundedIcon /></ListItemDecorator>*/}
{/* {props.onlyImages ? 'Images' : 'File'}*/}
{/*</MenuItem>*/}
<RichMenuItem name={props.onlyImages ? 'Images' : 'Files'} description='PDF, DOCX, images, code' color={props.color} Icon={AttachFileRoundedIcon} onClick={handleAttachFilePicker} />
{/* Web */}
{!props.onlyImages && /*props.canBrowse &&*/ (
// <MenuItem onClick={props.onOpenWebInput} disabled={!props.canBrowse}>
// <ListItemDecorator><LanguageRoundedIcon /></ListItemDecorator>
// Web
// </MenuItem>
<RichMenuItem name='Web' description='Import from web pages' color={props.color} Icon={LanguageRoundedIcon} onClick={props.onOpenWebInput} disabled={!props.canBrowse} />
)}
{/* Google Drive */}
{!props.onlyImages && hasGoogleDriveCapability && !!props.onOpenGoogleDrivePicker && (
// <MenuItem onClick={props.onOpenGoogleDrivePicker}>
// <ListItemDecorator><AddToDriveRoundedIcon /></ListItemDecorator>
// Drive
// </MenuItem>
<RichMenuItem name='Drive' description='Attach Google Drive files' color={props.color} Icon={AddToDriveRoundedIcon} onClick={props.onOpenGoogleDrivePicker} />
)}
{/* Clipboard */}
{!props.onlyImages && supportsClipboardRead() && (
// <MenuItem onClick={props.onAttachClipboard}>
// <ListItemDecorator><ContentPasteGoIcon /></ListItemDecorator>
// Paste
// </MenuItem>
<RichMenuItem name='Clipboard' description='Auto-convert to the best format' color={props.color} Icon={ContentPasteGoIcon} onClick={props.onAttachClipboard} />
)}
{/* Screen Capture */}
{props.hasScreenCapture && (
// <MenuItem onClick={handleTakeScreenCapture} disabled={capturingScreen}>
// <ListItemDecorator><ScreenshotMonitorIcon /></ListItemDecorator>
// Screen
// </MenuItem>
<RichMenuItem
name='Screen'
color={screenCaptureError ? 'danger' : props.color}
description={screenCaptureError ? `Error: ${screenCaptureError}` : 'Capture tabs, apps, and screens'}
Icon={ScreenshotMonitorIcon}
disabled={capturingScreen}
onClick={handleTakeScreenCapture}
endAction={!isMessage && props.onStartLiveScreenFeed && <LiveFeedButton isActive={!!props.hasActiveScreenFeed} tooltip='Live Screen chat' onClick={props.onStartLiveScreenFeed} />}
/>
)}
{/* Camera */}
{props.hasCamera && isMessage && (
// <MenuItem onClick={props.onOpenCamera}>
// <ListItemDecorator><CameraAltOutlinedIcon /></ListItemDecorator>
// Camera
// </MenuItem>
<RichMenuItem
name='Camera'
color={props.color}
Icon={CameraAltOutlinedIcon}
description='Capture photos with optional OCR'
onClick={props.onOpenCamera}
endAction={!isMessage && props.onStartLiveCameraFeed && <LiveFeedButton isActive={!!props.hasActiveCameraFeed} tooltip='Live Camera chat' onClick={props.onStartLiveCameraFeed} />}
/>
)}
</Menu>
</Dropdown>
{/* [mobile] Responsive Camera OCR button */}
{props.hasCamera && !isMessage && <ButtonAttachCameraMemo isMobile color={props.color} onOpenCamera={props.onOpenCamera} />}
</>;
}
// menu-rich mode (desktop) - labeled button trigger with animated, descriptive menu items
return (
<Dropdown>
<MenuButton
slots={{ root: Button }}
slotProps={{
root: {
// size: 'sm',
variant: 'plain',
color: props.color,
startDecorator: <AddRoundedIcon />,
fullWidth: true, // to match other buttons in the col
sx: {
minWidth: 100,
justifyContent: 'flex-start',
borderRadius: ATTACH_BUTTON_RADIUS,
textWrap: 'nowrap',
...(props.richButtonStandOut && {
backgroundColor: 'background.popup',
border: '1px solid',
borderColor: `${props.color || 'neutral'}.outlinedBorder`,
}),
// when aria-expanded is true (menu open), remove top border radius
'&[aria-expanded="true"]': {
borderTopRightRadius: 0,
borderTopLeftRadius: 0,
backgroundColor: `${props.color || 'neutral'}.softHoverBg`,
},
},
},
}}
>
Attach
</MenuButton>
<Menu
// variant='soft'
color={props.color}
placement='top-start'
popperOptions={{ modifiers: [{ name: 'offset', options: { offset: [-10 /* 62 */, -2] } }] }}
sx={{
minWidth: 280,
'--List-padding': '0.5rem',
zIndex: themeZIndexOverMobileDrawer,
animation: `${animationMenu} 0.12s cubic-bezier(0.25, 0.46, 0.45, 0.94)`,
// boxShadow: '0 16px 25px -5px rgb(0 0 0 / 0.1), 0 8px 10px -6px rgb(0 0 0 / 0.1)',
boxShadow: 'md',
borderRadius: ATTACH_BUTTON_RADIUS,
border: '1px solid',
borderColor: `${props.color || 'neutral'}.outlinedBorder`,
backgroundColor: 'background.popup',
overflow: 'hidden',
}}
>
{/* File Attachment */}
<RichMenuItem
name={props.onlyImages ? 'Images' : 'Files'}
Icon={AttachFileRoundedIcon}
description={props.onlyImages ? 'PNG, JPG, WEBP images to edit' : 'PDF, DOCX, images, code'}
onClick={handleAttachFilePicker}
delay={0}
/>
{/* Web/URL Attachment */}
{!props.onlyImages && /*props.canBrowse &&*/ (
<RichMenuItem
name='Web'
Icon={LanguageRoundedIcon}
description='Import web pages, including screenshots'
onClick={props.onOpenWebInput}
disabled={!props.canBrowse}
delay={0.02}
/>
)}
{/* Google Drive Attachment */}
{!props.onlyImages && hasGoogleDriveCapability && !!props.onOpenGoogleDrivePicker && (
<RichMenuItem
name='Drive'
Icon={AddToDriveRoundedIcon}
description='Attach Google Drive files'
onClick={props.onOpenGoogleDrivePicker}
delay={0.04}
/>
)}
{/* Clipboard Attachment */}
{!props.onlyImages && supportsClipboardRead() && (
<RichMenuItem
name='Clipboard'
Icon={ContentPasteGoIcon}
// description='Auto-converts images and text to the best format'
description='Auto-adapts images and text'
onClick={props.onAttachClipboard}
delay={0.06}
/>
)}
{/*{!props.onlyImages && props.canBrowse && (*/}
{/* <ListItem>*/}
{/* <ListItemDecorator />*/}
{/* <Checkbox*/}
{/* size='sm'*/}
{/* color='neutral'*/}
{/* // checked={enableComposerAttach}*/}
{/* // onChange={handleToggle}*/}
{/* onClick={(event) => event.stopPropagation()}*/}
{/* sx={{ ml: 0.375 }}*/}
{/* slotProps={{*/}
{/* label: {*/}
{/* sx: {*/}
{/* fontSize: 'sm',*/}
{/* fontWeight: 'md',*/}
{/* },*/}
{/* },*/}
{/* }}*/}
{/* label='Download and attach links'*/}
{/* />*/}
{/* </ListItem>*/}
{/*)}*/}
{/* Divider before labs features */}
{(props.hasScreenCapture || props.hasCamera) && <ListDivider inset='gutter' sx={{ my: 1 }} />}
{/* Screen Capture */}
{props.hasScreenCapture && (
<RichMenuItem
name='Screen'
Icon={ScreenshotMonitorIcon}
description={screenCaptureError ? `Error: ${screenCaptureError}` : 'Capture tabs, apps, and screens'}
onClick={handleTakeScreenCapture}
disabled={capturingScreen}
color={screenCaptureError ? 'danger' : undefined}
delay={0.08}
endAction={props.onStartLiveScreenFeed && <LiveFeedButton isActive={!!props.hasActiveScreenFeed} tooltip='Live Screen chat' onClick={props.onStartLiveScreenFeed} />}
/>
)}
{/* Camera */}
{props.hasCamera && (
<RichMenuItem
name='Camera'
Icon={CameraAltOutlinedIcon}
description='Capture photos with optional OCR'
onClick={props.onOpenCamera}
delay={0.1}
endAction={props.onStartLiveCameraFeed && <LiveFeedButton isActive={!!props.hasActiveCameraFeed} tooltip='Live Camera chat' onClick={props.onStartLiveCameraFeed} />}
/>
)}
{/* URL Auto-Download Toggle - only show when browse capability exists */}
{!props.onlyImages && props.canBrowse && (
<AutoDownloadToggle delay={0.12} />
)}
</Menu>
</Dropdown>
);
}
@@ -6,8 +6,6 @@ import CameraAltOutlinedIcon from '@mui/icons-material/CameraAltOutlined';
import { buttonAttachSx } from '~/common/components/ButtonAttachFiles';
import { CameraCaptureModal } from '../CameraCaptureModal';
export const ButtonAttachCameraMemo = React.memo(ButtonAttachCamera);
@@ -43,24 +41,4 @@ function ButtonAttachCamera(props: {
</Button>
</Tooltip>
);
}
export function useCameraCaptureModalDialog(onAttachImageStable: (file: File) => void) {
// state
const [open, setOpen] = React.useState(false);
const openCamera = React.useCallback(() => setOpen(true), []);
const cameraCaptureComponent = React.useMemo(() => open && (
<CameraCaptureModal
onCloseModal={() => setOpen(false)}
onAttachImage={onAttachImageStable}
/>
), [open, onAttachImageStable]);
return {
openCamera,
cameraCaptureComponent,
};
}
@@ -0,0 +1,123 @@
import * as React from 'react';
import type { FileWithHandle } from 'browser-fs-access';
import type { CameraCaptureDialogOptions } from '~/common/components/camera/useCameraCaptureDialog';
import type { CameraLiveStream } from '~/common/components/camera/useCameraCapture';
import { addSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import { useCameraCaptureDialog } from '~/common/components/camera/useCameraCaptureDialog';
import type { AttachmentDraftsApi } from '../useAttachmentDrafts';
import { useWebAttachmentModal } from './useWebAttachmentModal';
// Focused hooks that bridge `useAttachmentDrafts` return values to UI callback shapes.
// Each hook wraps one attachment source. Consumers compose only what they need.
type _HandleCameraOpen = (options?: CameraCaptureDialogOptions) => Promise<void>;
type _HandleFiles = (files: FileWithHandle[], errorMessage: string | null) => void;
type _HandlePasteIntercept = (event: React.ClipboardEvent) => void;
type _HandleScreenCapture = (file: File) => void;
type _HandleWebLinks = (links: { url: string }[]) => void;
/**
* Returns a handler that opens the camera capture dialog and appends the captured files.
*/
export function useAttachHandler_CameraOpen(
attachAppendFile: AttachmentDraftsApi['attachAppendFile'],
handleLiveStream?: (stream: CameraLiveStream) => void,
): _HandleCameraOpen {
// external state
const { openCameraCapture } = useCameraCaptureDialog(); // -> showPromisedOverlay
return React.useCallback(async (optionsOrEvent?: CameraCaptureDialogOptions | React.SyntheticEvent) => {
// guard: onClick handlers pass the event as first arg
const options = optionsOrEvent && 'nativeEvent' in optionsOrEvent ? undefined : optionsOrEvent;
const result = await openCameraCapture({ allowMultiCapture: true, allowLiveFeed: !!handleLiveStream, ...options });
if (!result) return; // user dismissed the dialog without capturing anything
// append all captured images
for (const imageFile of result.images)
void attachAppendFile('camera', imageFile);
// handle live stream if provided
if (result.liveStream)
handleLiveStream?.(result.liveStream);
}, [attachAppendFile, handleLiveStream, openCameraCapture]);
}
/**
* Returns a handler for files to become attachments.
*/
export function useAttachHandler_Files(attachAppendFile: AttachmentDraftsApi['attachAppendFile']) {
return React.useCallback<_HandleFiles>(async (files, errorMessage) => {
if (errorMessage)
addSnackbar({ key: 'attach-files-open-fail', message: `Unable to open files: ${errorMessage}`, type: 'issue' });
// files are appended sequentially (awaited) so conversion pipelines don't race
for (const file of files)
await attachAppendFile('file-open', file)
.catch((error: any) => addSnackbar({ key: 'attach-file-open-fail', message: `Unable to attach the file "${file.name}" (${error?.message || error?.toString() || 'unknown error'})`, type: 'issue' }));
}, [attachAppendFile]);
}
/**
* Returns a paste handler that intercepts Ctrl+V, routing pasted files through the attachment pipeline.
*/
export function useAttachHandler_PasteIntercept(attachAppendDataTransfer: AttachmentDraftsApi['attachAppendDataTransfer']) {
return React.useCallback<_HandlePasteIntercept>(async (event) => {
// false = don't attach text (only files), to prevent duplicate text in input
if (await attachAppendDataTransfer(event.clipboardData, 'paste', false) === 'as_files') {
// preventDefault stops the browser's default paste only when files were captured
event.preventDefault();
}
}, [attachAppendDataTransfer]);
}
/**
* Returns a handler for screen/window/tab captures to become attachments.
*/
export function useAttachHandler_ScreenCapture(attachAppendFile: AttachmentDraftsApi['attachAppendFile']) {
return React.useCallback<_HandleScreenCapture>((file) => {
void attachAppendFile('screencapture', file);
}, [attachAppendFile]);
}
/**
* Returns `{ openWebInputDialog, webInputDialogComponent }` for web link attachments.
* Consumer must render `webInputDialogComponent`.
*/
export function useAttachHandler_UrlWebLinks(attachAppendUrl: AttachmentDraftsApi['attachAppendUrl'], composerText?: string) {
// local handler
const _handleAttachWebLinks = React.useCallback<_HandleWebLinks>(async (links) => {
// processed in parallel
const attachPromises = links.map(link => attachAppendUrl('input-link', link.url));
// find if any failed
const results = await Promise.allSettled(attachPromises);
const issueUrls = results.reduce<string[]>((acc, result, index) => {
if (result.status === 'rejected')
acc.push(links[index].url);
return acc;
}, []);
if (issueUrls.length)
addSnackbar({ key: 'attach-web-fail', message: `Unable to attach: ${issueUrls.join(', ')}`, type: 'issue', overrides: { autoHideDuration: 4000 } });
}, [attachAppendUrl]);
// return the component and open() function
// optional composerText is passed to the modal for URL auto-detection from the current input text
return useWebAttachmentModal(_handleAttachWebLinks, composerText);
}
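The failure-reporting pattern in `useAttachHandler_UrlWebLinks` — attach in parallel, then surface only the URLs whose promise rejected — can be sketched in isolation (names here are illustrative, not the codebase's):

```typescript
// Attach all URLs concurrently; Promise.allSettled never rejects, so a
// single pass over the results collects the URLs that failed.
async function collectFailedUrls(
  urls: string[],
  attach: (url: string) => Promise<void>,
): Promise<string[]> {
  const results = await Promise.allSettled(urls.map(attach));
  return results.reduce<string[]>((acc, result, index) => {
    if (result.status === 'rejected') acc.push(urls[index]);
    return acc;
  }, []);
}

// usage: one of three attachments fails, the other two succeed silently
collectFailedUrls(
  ['https://a.example', 'https://b.example', 'https://c.example'],
  async (url) => { if (url.includes('b.')) throw new Error('fetch failed'); },
).then((failed) => console.log(failed)); // logs only the failing URL
```

Using `allSettled` (rather than `all`) keeps one bad link from masking the successful attachments.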
@@ -10,7 +10,7 @@ import LogoutIcon from '@mui/icons-material/Logout';
import { TooltipOutlined } from '~/common/components/TooltipOutlined';
import { addSnackbar } from '~/common/components/snackbar/useSnackbarsStore';
import type { AttachmentStoreCloudInput } from './useAttachmentDrafts';
import type { AttachmentStoreCloudInput } from '../useAttachmentDrafts';
// configuration
@@ -259,7 +259,7 @@ function WebInputModal(props: {
}
export function useWebInputModal(onAttachWebLinks: (urls: WebInputData[]) => void, composerText?: string) {
export function useWebAttachmentModal(onAttachWebLinks: (urls: WebInputData[]) => void, composerText?: string) {
// state
const [open, setOpen] = React.useState(false);
@@ -19,6 +19,7 @@ export async function imageDataToImageAttachmentFragmentViaDBlob(
caption: string,
convertToMimeType: false | CommonImageMimeTypes,
resizeMode: false | LLMImageResizeMode,
scopeId: DBlobDBScopeId = 'attachment-drafts',
): Promise<DMessageAttachmentFragment | null> {
// convert to Blobs if needed
@@ -49,7 +50,7 @@ export async function imageDataToImageAttachmentFragmentViaDBlob(
});
// add the image to the DBlobs DB
const dblobAssetId = await addDBImageAsset('attachment-drafts', imageBlob, {
const dblobAssetId = await addDBImageAsset(scopeId, imageBlob, {
label: title ? 'Image: ' + title : 'Image',
metadata: {
width: imageWidth,
@@ -459,6 +459,11 @@ function _prepareDocData(source: AttachmentDraftSource, input: Readonly<Attachme
case 'drop':
fileTitle = source.refPath || _lowCollisionRefString('Dropped File', 6);
break;
case 'live-feed-camera':
case 'live-feed-screen':
fileCaption = sourceOrigin === 'live-feed-camera' ? 'Live Camera' : 'Live Screen';
fileTitle = source.refPath || _lowCollisionRefString(fileCaption, 6);
break;
default:
const _exhaustiveCheck: never = sourceOrigin;
fileTitle = 'File';
@@ -644,7 +649,13 @@ export async function attachmentPerformConversion(
let tableData: DMessageDataInline;
try {
const mdTable = htmlTableToMarkdown(input.altData!, false);
tableData = createDMessageDataInlineText(mdTable, 'text/markdown');
// fall back to source text if the table conversion produced empty/tiny content
if (mdTable.replace(/[\s|:\-]/g, '').length < 2) {
const fallbackText = await _inputDataToString(input.data, 'rich-text-table');
tableData = createDMessageDataInlineText(fallbackText || mdTable, input.mimeType);
} else {
tableData = createDMessageDataInlineText(mdTable, 'text/markdown');
}
} catch (error) {
// fallback to text/plain
const fallbackText = await _inputDataToString(input.data, 'rich-text-table');
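The "empty/tiny table" check above strips Markdown table scaffolding before measuring content. A standalone sketch of that heuristic (the function name is an assumption for illustration):

```typescript
// Whitespace, pipes, colons and dashes are pure Markdown table structure;
// if fewer than 2 characters survive stripping them, treat the table as empty.
function markdownTableHasContent(mdTable: string): boolean {
  return mdTable.replace(/[\s|:\-]/g, '').length >= 2;
}

console.log(markdownTableHasContent('| a | b |\n|---|---|\n| 1 | 2 |')); // true
console.log(markdownTableHasContent('|   |   |\n|---|---|'));            // false
```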
@@ -1037,11 +1048,19 @@ export async function attachmentPerformConversion(
}
}
// warn if any doc output fragment has empty text content (something went wrong in conversion)
// TODO: future: check if the text is a conversion error... can happen with drag & drop
const emptyOutputWarnings: string[] = [];
for (const fragment of newFragments)
if (isDocPart(fragment.part) && fragment.part.data.idt === 'text' && !fragment.part.data.text.trim())
emptyOutputWarnings.push('Converted output is empty - the source content may be missing or invalid.');
// update
replaceOutputFragments(attachment.id, newFragments);
edit(attachment.id, {
outputsConverting: false,
outputsConversionProgress: null,
...(emptyOutputWarnings.length && { outputWarnings: emptyOutputWarnings }),
});
}
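The empty-output warning above scans the converted fragments for text doc parts whose content is blank. A minimal sketch with assumed, simplified fragment shapes (the real `DMessageAttachmentFragment` types are richer):

```typescript
// Simplified fragment union for illustration only.
type Fragment =
  | { pt: 'doc'; idt: 'text'; text: string }
  | { pt: 'image_ref' };

// Collect one warning per text doc fragment whose trimmed text is empty.
function collectEmptyOutputWarnings(fragments: Fragment[]): string[] {
  const warnings: string[] = [];
  for (const f of fragments)
    if (f.pt === 'doc' && f.idt === 'text' && !f.text.trim())
      warnings.push('Converted output is empty - the source content may be missing or invalid.');
  return warnings;
}

console.log(collectEmptyOutputWarnings([{ pt: 'doc', idt: 'text', text: '   ' }]).length); // 1
console.log(collectEmptyOutputWarnings([{ pt: 'image_ref' }]).length);                     // 0
```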
@@ -93,7 +93,12 @@ export type AttachmentDraftSource = {
egoFragmentsInputData: DraftEgoFragmentsInputData;
};
export type AttachmentDraftSourceOriginFile = 'camera' | 'screencapture' | 'file-open' | 'clipboard-read' | AttachmentDraftSourceOriginDTO;
export type AttachmentDraftSourceOriginFile =
| 'camera' | 'screencapture'
| 'live-feed-camera' | 'live-feed-screen'
| 'file-open'
| 'clipboard-read'
| AttachmentDraftSourceOriginDTO;
export type AttachmentDraftSourceOriginDTO = 'drop' | 'paste';
@@ -180,6 +185,11 @@ export type AttachmentDraftConverterType =
// 3. Output - this is done via DMessageAttachmentFragment[], to be directly compatible with our data
// Actions on attachment drafts
export type AttachmentDraftsAction = 'inline-text' | 'copy-text';
/*export type AttachmentDraftPreview = {
renderer: 'noPreview',
title: string; // A title for the preview
@@ -0,0 +1,36 @@
import type { AttachmentDraft } from '../attachment.types';
/**
* Per-draft enrichment interface - provides LLM-specific (or context-specific)
* compatibility/token info for an AttachmentDraft.
*
* Implementations may be LLM-aware (Composer) or simple pass-throughs (edit mode).
*/
export interface IAttachmentEnrichment {
/** Whether all output fragments of this draft are supported */
isCompatible(draft: AttachmentDraft): boolean;
/** Whether this draft has text fragments that can be inlined */
supportsTextInline(draft: AttachmentDraft): boolean;
/** Approximate token count for this draft, or null if unknown */
estimateTokens(draft: AttachmentDraft): number | null;
/** Approximate total token count across all drafts, or null if unknown */
estimateTotalTokens(drafts: AttachmentDraft[]): number | null;
/** Whether this draft contains image fragments */
hasImages(draft: AttachmentDraft): boolean;
}
/**
* Pre-computed collection-level summary derived from IAttachmentEnrichment
* across all drafts. Used to avoid re-computing in multiple places.
*/
export interface AttachmentEnrichmentSummary {
allCompatible: boolean;
anyImages: boolean;
anyInlinable: boolean;
totalTokensApprox: number | null;
}
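The docstring mentions that implementations may be "simple pass-throughs (edit mode)". A hypothetical pass-through enrichment, sketched against a simplified draft shape (both the draft fields and the object name are assumptions, not from the codebase):

```typescript
// Simplified stand-in for AttachmentDraft, for illustration only.
interface Draft { outputText?: string; hasImage?: boolean }

// Pass-through enrichment: everything is compatible, token counts unknown.
const passThroughEnrichment = {
  isCompatible: (_draft: Draft) => true,
  supportsTextInline: (draft: Draft) => !!draft.outputText,
  estimateTokens: (_draft: Draft): number | null => null,
  estimateTotalTokens: (_drafts: Draft[]): number | null => null,
  hasImages: (draft: Draft) => !!draft.hasImage,
};

// deriving the collection-level summary fields by hand:
const drafts: Draft[] = [{ outputText: 'hello' }, { hasImage: true }];
console.log(drafts.every(passThroughEnrichment.isCompatible)); // allCompatible: true
console.log(drafts.some(passThroughEnrichment.hasImages));     // anyImages: true
console.log(passThroughEnrichment.estimateTotalTokens(drafts)); // totalTokensApprox: null
```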
@@ -0,0 +1,87 @@
import * as React from 'react';
import type { DLLM } from '~/common/stores/llms/llms.types';
import type { DMessageAttachmentFragment } from '~/common/stores/chat/chat.fragments';
import { estimateTokensForFragments } from '~/common/stores/chat/chat.tokens';
import { useShallowStable } from '~/common/util/hooks/useShallowObject';
import type { AttachmentDraft } from '../attachment.types';
import type { AttachmentEnrichmentSummary, IAttachmentEnrichment } from './attachment.enrichment';
// configuration
// TODO: consider also Audio inputs, maybe PDF binary inputs
// FIXME: reference fragments could refer to non-image as well(!)
const _IMAGE_TYPES: DMessageAttachmentFragment['part']['pt'][] = [
'reference', // _DMessageReferencePartBase
'image_ref', // DMessageImageRefPart (legacy)
] as const;
/**
* LLM-specific implementation of IAttachmentEnrichment.
* Determines compatibility based on a target LLM's capabilities.
*/
class LLMAttachmentEnrichment implements IAttachmentEnrichment {
private readonly supportedTextTypes: DMessageAttachmentFragment['part']['pt'][];
private readonly supportedTypes: DMessageAttachmentFragment['part']['pt'][];
constructor(private readonly llm: DLLM | null, supportsImages: boolean) {
this.supportedTypes = supportsImages ? [..._IMAGE_TYPES, 'doc'] : ['doc'];
this.supportedTextTypes = this.supportedTypes.filter(pt => pt === 'doc');
}
isCompatible = (draft: AttachmentDraft): boolean => {
if (!draft.outputFragments) return false;
return draft.outputFragments.every(op => this.supportedTypes.includes(op.part.pt));
};
supportsTextInline = (draft: AttachmentDraft): boolean => {
if (!draft.outputFragments) return false;
return draft.outputFragments.some(op => this.supportedTextTypes.includes(op.part.pt));
};
estimateTokens = (draft: AttachmentDraft): number | null => {
if (!this.llm) return null;
return estimateTokensForFragments(this.llm, 'user', draft.outputFragments, true, 'useAttachmentDraftsEnrichment');
};
estimateTotalTokens = (drafts: AttachmentDraft[]): number | null => {
if (!this.llm) return null;
return drafts.reduce((acc, d) => acc + (this.estimateTokens(d) || 0), 0);
};
hasImages = (draft: AttachmentDraft): boolean => {
if (!draft.outputFragments) return false;
return draft.outputFragments.some(op => _IMAGE_TYPES.includes(op.part.pt));
};
}
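The compatibility gate above reduces to two array predicates over part types. A stripped-down sketch of the same logic — `PartType`, `acceptedTypes`, and `isCompatible` are simplified stand-ins for the class internals:

```typescript
type PartType = 'reference' | 'image_ref' | 'doc';

const IMAGE_TYPES: PartType[] = ['reference', 'image_ref'];

// mirrors the constructor: 'doc' is always accepted, image parts only if the LLM supports vision
function acceptedTypes(supportsImages: boolean): PartType[] {
  return supportsImages ? [...IMAGE_TYPES, 'doc'] : ['doc'];
}

// a draft is compatible when every output fragment uses an accepted part type
function isCompatible(fragmentTypes: PartType[], supportsImages: boolean): boolean {
  return fragmentTypes.every(pt => acceptedTypes(supportsImages).includes(pt));
}

const mixedDraft: PartType[] = ['doc', 'image_ref'];
const visionOk = isCompatible(mixedDraft, true);
const textOnlyOk = isCompatible(mixedDraft, false);
```

The same `mixedDraft` passes for a vision-capable model and fails for a text-only one, which is exactly what drives the `allCompatible` flag in the summary.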
/**
* Hook that creates an LLM-specific IAttachmentEnrichment and computes
* collection-level summary for the given attachment drafts.
*/
export function useAttachmentDraftsEnrichment(attachmentDrafts: AttachmentDraft[], chatLLM: DLLM | null, chatLLMSupportsImages: boolean): {
enrichment: IAttachmentEnrichment;
summary: AttachmentEnrichmentSummary;
} {
// Enrichment instance - stable, only recreated if inputs change
const enrichment = React.useMemo(
() => new LLMAttachmentEnrichment(chatLLM, chatLLMSupportsImages),
[chatLLM, chatLLMSupportsImages],
);
// Collection-level summary - shallow-stabilized to avoid unnecessary re-renders
const summary = useShallowStable<AttachmentEnrichmentSummary>({
allCompatible: attachmentDrafts.every(enrichment.isCompatible),
anyImages: attachmentDrafts.some(enrichment.hasImages),
anyInlinable: attachmentDrafts.some(enrichment.supportsTextInline),
totalTokensApprox: enrichment.estimateTotalTokens(attachmentDrafts),
});
return { enrichment, summary };
}
@@ -29,6 +29,10 @@ function notifyOnlyImages(item: any) {
export type AttachmentStoreCloudInput = Omit<Extract<AttachmentDraftSource, { media: 'cloud' }>, 'media' | 'origin'>;
/** Inferred return type - used by composable source handler hooks. */
export type AttachmentDraftsApi = ReturnType<typeof useAttachmentDrafts>;
/**
* @param attachmentsStoreApi A Per-Chat or standalone Attachment Drafts store.
* @param enableLoadURLsOnPaste Only used if invoking attachAppendDataTransfer or attachAppendClipboardItems.
@@ -15,7 +15,7 @@ import { createTextContentFragment, DMessageFragment, DMessageFragmentId } from
import { gcChatImageAssets } from '~/common/stores/chat/chat.gc';
import { getChatLLMId } from '~/common/stores/llms/store-llms';
-import { getChatAutoAI } from '../../apps/chat/store-app-chat';
+import { getChatAutoAI, getChatThinkingPolicy } from '../../apps/chat/store-app-chat';
import { createDEphemeral, EPHEMERALS_DEFAULT_TIMEOUT } from './store-perchat-ephemerals_slice';
import { createPerChatVanillaStore, PerChatOverlayStore } from './store-perchat_vanilla';
@@ -227,8 +227,9 @@ export class ConversationHandler {
return _chatStoreActions.historyView(this.conversationId)?.find(m => m.id === messageId);
}
-  historyKeepLastThinkingOnly(): void {
-    return _chatStoreActions.historyKeepLastThinkingOnly(this.conversationId);
+  /** Strips thinking fragments from assistant messages, preserving `keepCount` most recent (0 = discard all, 1 = keep last only). */
+  historyStripThinking(keepCount: number): void {
+    return _chatStoreActions.historyStripThinking(this.conversationId, keepCount);
}
title(): string | undefined {
@@ -265,6 +266,13 @@ export class ConversationHandler {
this.messageAppend(newMessage);
}
// post-result: strip reasoning traces per user's thinking policy (issue #1003)
const chatThinkingPolicy = getChatThinkingPolicy();
if (chatThinkingPolicy === 'last-only')
this.historyStripThinking(1);
else if (chatThinkingPolicy === 'discard-all')
this.historyStripThinking(0);
// close beam
terminateKeepingSettings();
};
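The policy-to-`keepCount` mapping in the branch above is tiny; a sketch of it as a pure function — note the `'keep-all'` default name is an assumption, as the source only shows the two stripping branches:

```typescript
type ChatThinkingPolicy = 'keep-all' | 'last-only' | 'discard-all';

// returns the keepCount to pass to historyStripThinking, or null to leave history untouched
// NOTE: 'keep-all' is a hypothetical name for the non-stripping default
function thinkingPolicyToKeepCount(policy: ChatThinkingPolicy): number | null {
  if (policy === 'last-only') return 1;   // keep only the most recent thinking trace
  if (policy === 'discard-all') return 0; // strip every thinking trace
  return null;
}

const last = thinkingPolicyToKeepCount('last-only');
const none = thinkingPolicyToKeepCount('discard-all');
const all = thinkingPolicyToKeepCount('keep-all');
```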
@@ -1,4 +1,6 @@
import type { DBlobAssetId } from '~/common/stores/blob/dblobs-portability';
import type { DConversationId } from '~/common/stores/chat/chat.conversation';
import { collectFragmentAssetIds, gcRegisterAssetCollector } from '~/common/stores/chat/chat.gc';
import { ConversationHandler } from './ConversationHandler';
@@ -14,6 +16,40 @@ export class ConversationsManager {
private static _instance: ConversationsManager;
private readonly handlers: Map<DConversationId, ConversationHandler> = new Map();
private constructor() {
// Register a GC collector to protect DBlob assets referenced in active Beam stores.
// Uses inversion of control to avoid circular dependency (chat/ -> chat-overlay/).
gcRegisterAssetCollector(() => this._collectBeamAssetIds());
}
/**
* Collect DBlob asset IDs from all active Beam stores (rays, fusions, follow-ups).
*/
private _collectBeamAssetIds(): DBlobAssetId[] {
const assetIds = new Set<DBlobAssetId>();
for (const handler of this.handlers.values()) {
const { rays, fusions } = handler.getBeamStore().getState();
// Scatter rays + their follow-up messages
for (const ray of rays) {
collectFragmentAssetIds(ray.message.fragments, assetIds);
// if (ray.followUpMessages)
// for (const msg of ray.followUpMessages)
// collectFragmentAssetIds(msg.fragments, assetIds);
}
// Gather fusions + their follow-up messages
for (const fusion of fusions) {
if (fusion.outputDMessage)
collectFragmentAssetIds(fusion.outputDMessage.fragments, assetIds);
// if (fusion.followUpMessages)
// for (const msg of fusion.followUpMessages)
// collectFragmentAssetIds(msg.fragments, assetIds);
}
}
return Array.from(assetIds);
}
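The registration pattern can be sketched in isolation: the GC module owns a list of collector callbacks, and consumers register theirs at construction time, so the GC layer never imports them directly. These are assumed minimal shapes, not the real `chat.gc` API:

```typescript
type DBlobAssetId = string;
type AssetCollector = () => DBlobAssetId[];

const _collectors: AssetCollector[] = [];

// the GC side: modules register a callback instead of being imported by the GC
function registerAssetCollector(collector: AssetCollector): void {
  _collectors.push(collector);
}

// at GC time, union the protected asset ids across all registered collectors
function collectProtectedAssetIds(): Set<DBlobAssetId> {
  const ids = new Set<DBlobAssetId>();
  for (const collect of _collectors)
    for (const id of collect())
      ids.add(id);
  return ids;
}

registerAssetCollector(() => ['asset-a', 'asset-b']); // e.g. beam rays
registerAssetCollector(() => ['asset-b', 'asset-c']); // e.g. fusions
const protectedIds = collectProtectedAssetIds();
```

The `Set` deduplicates assets referenced from multiple stores, matching how `_collectBeamAssetIds` accumulates into a single `Set` before returning an array.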
static getHandler(conversationId: DConversationId): ConversationHandler {
const instance = ConversationsManager._instance || (ConversationsManager._instance = new ConversationsManager());
let handler = instance.handlers.get(conversationId);
@@ -94,11 +94,12 @@ export function OptionalPostHogAnalytics() {
_posthog.init(process.env.NEXT_PUBLIC_POSTHOG_KEY || '', {
api_host: '/a/ph', // client analytics host - default: process.env.NEXT_PUBLIC_POSTHOG_HOST || 'https://us.i.posthog.com'
ui_host: 'https://us.posthog.com',
-      defaults: '2025-05-24',
+      defaults: '2026-01-30',
capture_exceptions: true, // captures exceptions using Error Tracking
// capture_pageview: false, // we used to handle this manually, but changed to the 'defaults' option which captures pageviews automatically
// capture_pageleave: true, // we used to track goodbyes, now included in 'defaults'
person_profiles: 'identified_only',
remote_config_refresh_interval_ms: 0, // no background refreshes. Flags only update on page load or manual `reloadFeatureFlags()` calls.
disable_surveys: true, // disable surveys
debug: Release.IsNodeDevBuild, // enable debug mode in development (was: `loaded: (ph) => if (Release.IsNodeDevBuild) ph.debug();`)
});
@@ -1,12 +1,27 @@
import * as React from 'react';
-import { Breadcrumbs, Typography } from '@mui/joy';
+import { Breadcrumbs, BreadcrumbsSlotsAndSlotProps, Typography } from '@mui/joy';
import KeyboardArrowRightIcon from '@mui/icons-material/KeyboardArrowRight';
import { Link } from '~/common/components/Link';
-const _sx = { p: 0 };
const _breadcrumbSlotProps: BreadcrumbsSlotsAndSlotProps['slotProps'] = {
root: {
sx: { p: 0 },
},
// see anatomy https://mui.com/joy-ui/react-breadcrumbs/#anatomy
ol: {
// keep it all in one line
sx: { flexWrap: 'nowrap' },
},
li: {
// undo the 'flex' on li, and auto-ellipsize contents
sx: { display: 'block' },
className: 'agi-ellipsize',
},
} as const;
export function AppBreadcrumbs(props: {
size?: 'sm' | 'md' | 'lg';
@@ -23,12 +38,13 @@ export function AppBreadcrumbs(props: {
onRootClick?.();
}, [onRootClick]);
-  return <Breadcrumbs size={props.size || 'sm'} separator={<KeyboardArrowRightIcon />} aria-label='breadcrumbs' sx={_sx}>
-    {(props.children && !!rootTitle && !!onRootClick)
-      ? <AppBreadcrumbs.Link color='neutral' href='#' onClick={handleRootClick}>{props.rootTitle}</AppBreadcrumbs.Link>
+  return <Breadcrumbs size={props.size || 'sm'} aria-label='breadcrumbs' separator={<KeyboardArrowRightIcon />} slotProps={_breadcrumbSlotProps}>
+    {/* Title */}
+    {(props.children && !!rootTitle && !!onRootClick) ? <AppBreadcrumbs.Link color='neutral' href='#' onClick={handleRootClick}>{props.rootTitle}</AppBreadcrumbs.Link>
: (typeof props.rootTitle === 'string') ? <Typography>{props.rootTitle}</Typography>
: props.rootTitle
}
{/* Rest */}
{props.children}
{/*{nav.pnt === 'create-new' && <Link color='neutral' href='#'>Create New</Link>}*/}
{/*{['Characters', 'Futurama', 'TV Shows', 'Home'].map((item: string) => (*/}
@@ -11,6 +11,19 @@ const Popup = styled(Popper)({
});
/**
* Use this for submenus on any Menu/Popup, to prevent the parent popup from closing when clicking on this item. e.g.
* <MenuItem onClick={joyKeepPopup(() => setShowModelsHidden(!showModelsHidden))}> ...
*/
export function joyKeepPopup<TEvent extends React.MouseEvent>(fn: (event: TEvent) => void) {
return (event: TEvent) => {
    // reverse-engineered: this flag is what tells the Joy Menu not to close when this item is activated
(event as any).defaultMuiPrevented = true;
fn(event);
};
}
/**
* Workaround to the Menu in Joy 5-beta.0.
*
@@ -93,6 +106,8 @@ export function CloseablePopup(props: {
},
}], [props.placementOffset]);
const popperMemoSx: undefined | SxProps = React.useMemo(() => !props.zIndex ? undefined : ({ zIndex: props.zIndex }), [props.zIndex]);
const styleMemoSx: SxProps = React.useMemo(() => ({
// style
@@ -120,7 +135,6 @@ export function CloseablePopup(props: {
}), [props.boxShadow, props.maxHeightGapPx, props.maxWidth, props.minWidth, props.size, props.dense, props.bigIcons, props.noBottomPadding, props.noTopPadding, props.sx]);
return (
<Popup
role={undefined}
@@ -129,7 +143,7 @@ export function CloseablePopup(props: {
placement={props.placement}
disablePortal={false}
modifiers={modifiersMemo}
-      sx={props.zIndex ? { zIndex: props.zIndex } : undefined}
+      sx={popperMemoSx}
>
<ClickAwayListener onClickAway={handleClose}>
{props.menu ? (
@@ -1,8 +1,10 @@
import * as React from 'react';
import { Button, IconButton, useColorScheme } from '@mui/joy';
import BrightnessAutoIcon from '@mui/icons-material/BrightnessAuto';
import DarkModeIcon from '@mui/icons-material/DarkMode';
import LightModeIcon from '@mui/icons-material/LightMode';
import { GoodTooltip } from './GoodTooltip';
export const darkModeToggleButtonSx = {
boxShadow: 'sm',
@@ -12,29 +14,64 @@ export const darkModeToggleButtonSx = {
},
} as const;
type ThemeMode = 'light' | 'dark' | 'system';
const _nextThemeMode: Record<ThemeMode, ThemeMode> = {
light: 'dark',
dark: 'system',
system: 'light',
};
const _themeModeLabel: Record<ThemeMode, string> = {
light: 'Light Theme',
dark: 'Dark Theme',
system: 'System Theme',
};
function _themeModeIcon(mode: ThemeMode) {
switch (mode) {
case 'dark':
return <DarkModeIcon />;
case 'system':
return <BrightnessAutoIcon />;
case 'light':
default:
return <LightModeIcon />;
}
}
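The lookup table turns the toggle into a pure three-state cycle; a quick sketch confirming that three clicks return to the starting mode:

```typescript
type ThemeMode = 'light' | 'dark' | 'system';

// same cycle as _nextThemeMode: light -> dark -> system -> light
const nextThemeMode: Record<ThemeMode, ThemeMode> = {
  light: 'dark',
  dark: 'system',
  system: 'light',
};

let mode: ThemeMode = 'light';
const visited: ThemeMode[] = [];
for (let i = 0; i < 3; i++) {
  mode = nextThemeMode[mode];
  visited.push(mode);
}
```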
export function DarkModeToggleButton(props: { hasText?: boolean }) {
// external state
const { mode: colorMode, setMode: setColorMode } = useColorScheme();
const mode: ThemeMode = colorMode === 'light' || colorMode === 'dark' || colorMode === 'system'
? colorMode
: 'system';
const handleToggleDarkMode = (event: React.MouseEvent) => {
event.stopPropagation();
-    setColorMode(colorMode === 'dark' ? 'light' : 'dark');
+    setColorMode(_nextThemeMode[mode]);
};
-  return props.hasText ? (
-    <Button
-      variant='soft'
-      color='neutral'
-      onClick={handleToggleDarkMode}
-      sx={darkModeToggleButtonSx}
-      startDecorator={colorMode !== 'dark' ? <DarkModeIcon color='primary' /> : <LightModeIcon />}
-    >
-      {colorMode === 'dark' ? 'Light Mode' : 'Dark Mode'}
-    </Button>
-  ) : (
-    <IconButton size='sm' variant='soft' onClick={handleToggleDarkMode} sx={{ ml: 'auto', /*mr: '2px',*/ my: '-0.25rem' /* absorb the menuItem padding */ }}>
-      {colorMode !== 'dark' ? <DarkModeIcon /> : <LightModeIcon />}
-    </IconButton>
+  const title = `Theme: ${_themeModeLabel[mode]}`;
+  return (
+    <GoodTooltip title={title}>
+      {props.hasText ? (
+        <Button
+          variant='soft'
+          color='neutral'
+          onClick={handleToggleDarkMode}
+          sx={darkModeToggleButtonSx}
+          startDecorator={React.cloneElement(_themeModeIcon(mode), { color: 'primary' })}
+        >
+          {_themeModeLabel[mode]}
+        </Button>
+      ) : (
+        <IconButton size='sm' variant='soft' onClick={handleToggleDarkMode} sx={{ ml: 'auto', /*mr: '2px',*/ my: '-0.25rem' /* absorb the menuItem padding */ }}>
+          {_themeModeIcon(mode)}
+        </IconButton>
+      )}
+    </GoodTooltip>
+  );
}
@@ -1,28 +1,47 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
-import { Box, styled } from '@mui/joy';
+import { Box, BoxProps, styled } from '@mui/joy';
/**
 * Everything in this has been hand-tuned so that the content sticks to the top and clips to the parent,
 * which is really the one whose height follows the 0..1fr proportion.
 *
 * An alternative former implementation with just overflow: 'hidden' on the BoxCollapsee had the content
 * lagging its reveal compared to the parent.
 *
 * Another alternative had contain: 'layout paint' and no overflow property, but had an occasional 1px paint
 * issue on Chrome on the bottom edge.
 *
 * Note that 'BoxCollapsee' may still end up with a different height than the fr track implies, but we
 * basically just use the Collapsee to opt out of layout and clip everything on the parent instead.
 */
const BoxCollapser = styled(Box)({
display: 'grid',
-  transition: 'grid-template-rows 0.2s cubic-bezier(.17,.84,.44,1)',
-  gridTemplateRows: '0fr',
-  '&[aria-expanded="true"]': {
-    gridTemplateRows: '1fr',
+  alignItems: 'start',
+  gridTemplateRows: '1fr',
+  '&[aria-hidden="true"]': {
+    gridTemplateRows: '0fr',
  },
+  transition: 'grid-template-rows 0.2s cubic-bezier(.17,.84,.44,1)', // quartic - hand tuned, feels faster
+  overflow: 'clip',
+  contain: 'layout',
});
const BoxCollapsee = styled(Box)({
-  overflow: 'hidden',
+  /**
+   * FIX: the absence of this made the ChatPanelModelParameters content overflow horizontally
+   */
+  minWidth: 0,
minHeight: 0,
});
-export function ExpanderControlledBox(props: { expanded: boolean, children: React.ReactNode, sx?: SxProps }) {
+export function ExpanderControlledBox({ expanded, children, ...rest }: BoxProps & { expanded: boolean }) {
  return (
-    <BoxCollapser aria-expanded={props.expanded} data-agi-no-copy={!props.expanded || undefined} sx={props.sx}>
+    <BoxCollapser aria-hidden={!expanded ? true : undefined} data-agi-no-copy={!expanded || undefined} {...rest}>
      <BoxCollapsee>
-        {props.children}
+        {children}
</BoxCollapsee>
</BoxCollapser>
);
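The collapse mechanism is pure CSS: the parent animates `grid-template-rows` between `0fr` and `1fr`, and the child only needs min sizes of 0 so it can shrink below its intrinsic content size. A sketch of the two style objects as plain records, not the actual `styled()` output:

```typescript
// parent: a one-row grid whose row track animates 0fr <-> 1fr
function collapserStyle(expanded: boolean) {
  return {
    display: 'grid',
    alignItems: 'start',
    gridTemplateRows: expanded ? '1fr' : '0fr',
    transition: 'grid-template-rows 0.2s cubic-bezier(.17,.84,.44,1)',
    overflow: 'clip',  // clip on the parent, not the child, to avoid the lagging reveal
    contain: 'layout',
  };
}

// child: must be allowed to shrink below its intrinsic size for 0fr to take effect
const collapseeStyle = { minWidth: 0, minHeight: 0 };

const open = collapserStyle(true);
const closed = collapserStyle(false);
```

Animating `fr` track sizes this way avoids measuring content height in JS, which is why the component needs no refs or resize observers.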

Some files were not shown because too many files have changed in this diff.