Compare commits

..

166 Commits

Author SHA1 Message Date
Enrico Ros 1caeaee7f0 1.16.0: update News 2024-05-09 01:00:53 -07:00
Enrico Ros f354134234 Update README 2024-05-09 00:56:50 -07:00
Enrico Ros 66219d30e0 ReplyTo: fix bubble 2024-05-09 00:48:24 -07:00
Enrico Ros b9e3942ed8 ChatMessage: fix broken overflow 2024-05-09 00:18:29 -07:00
Enrico Ros 2354cdc1d1 ReplyTo: render in ChatMessage 2024-05-09 00:18:21 -07:00
Enrico Ros d929438df9 ReplyTo: extract 2024-05-09 00:09:17 -07:00
Enrico Ros 1acaed1de7 ReplyTo: Move Bubble 2024-05-09 00:03:22 -07:00
Enrico Ros 16195f8a55 ReplyTo: works 100 for OpenAI, ok for Anthropic, exposes Chat sequencing issues for a couple 2024-05-09 00:00:37 -07:00
Enrico Ros d7fc8c178f 1.16.0: enable cost by default 2024-05-08 15:39:03 -07:00
Enrico Ros 2894e16706 Merge branch 'release-1.16.0' 2024-05-08 15:11:10 -07:00
Enrico Ros c2340f3432 1.16.0: README 2024-05-08 15:03:32 -07:00
Enrico Ros 3b7b3106db Misc 2024-05-08 14:37:31 -07:00
Enrico Ros cff92819f9 1.16.0: News 2024-05-08 14:13:01 -07:00
Enrico Ros 2f981d852b Show message costs (option) 2024-05-08 13:11:21 -07:00
Enrico Ros 8eef74d776 1.16.0: version 2024-05-08 11:53:56 -07:00
Enrico Ros 60e46204dc Update default contextWindow to 8192
And override as per https://github.com/enricoros/big-AGI/pull/518#issuecomment-2090736347
2024-05-07 04:44:38 -07:00
Enrico Ros 6a5d783435 Show Costs on Hover. #480, #341 2024-05-07 04:33:39 -07:00
Enrico Ros 0223e076c4 LLM Options: improve 2024-05-07 03:54:28 -07:00
Enrico Ros ce80c78319 1.16.0: disable Reply-To (note: full in a different branch) 2024-05-07 02:55:14 -07:00
Enrico Ros cc0085ae61 Group vendors (disabled) 2024-05-07 02:46:41 -07:00
Enrico Ros f28e243b9d Chat: perfect execution error reporting, Fixes #523 2024-05-07 02:19:54 -07:00
Enrico Ros 2e4532593f Toggle JSON mode, Fixes #515 2024-05-07 00:58:02 -07:00
Enrico Ros 1f10905a03 Fix model temperaturs 2024-05-07 00:47:11 -07:00
Enrico Ros 88762db484 Anthropic: more precise usage link to show the token usage. Fixes #524 2024-05-06 23:48:41 -07:00
Enrico Ros 3b5ab0ac70 Beam: fix relaxed parsing. Fixes #528 2024-05-06 23:45:34 -07:00
Enrico Ros 8903c9296b OpenRouter: update parser 2024-05-06 22:56:09 -07:00
Enrico Ros 97858a3c94 docs/installation: mention optionality 2024-05-06 22:26:40 -07:00
Enrico Ros 0ec3e83518 Merge pull request #521 from dandv/patch-1
Docs: fix command to run local build
2024-05-06 22:25:51 -07:00
Enrico Ros 8c007b5bf7 Merge pull request #522 from dandv/patch-2
E: grammar in OpenAISourceSetup.tsx
2024-05-06 22:21:46 -07:00
Enrico Ros 768236b0e2 Merge pull request #525 from PrivTEC/patch-1
Correct typo in config-feature-browse.md
2024-05-06 22:20:18 -07:00
Enrico Ros 495d78b885 Perplexity: update models, with the ne online models 2024-05-06 21:20:02 -07:00
Enrico Ros 34b1e515fe Figure out unused model vendors 2024-05-06 21:04:02 -07:00
PrivTEC 79edbd3fa5 Correct typo in config-feature-browse.md
Corrected the typo from "proyy" to "proxy" in the file `config-feature-browse.md`. This change addresses a small, but significant error in the configuration documentation.
2024-05-06 03:51:04 +02:00
Dan Dascalescu f50d9994e2 E: grammar in OpenAISourceSetup.tsx 2024-05-04 22:22:34 +03:00
Dan Dascalescu 1603d3085f Docs: fix command to run local build 2024-05-04 22:16:12 +03:00
Enrico Ros ccf7036f33 Longer timeouts 2024-05-02 00:43:10 -07:00
Enrico Ros a0a1a5e3c1 Update the proxy desc 2024-05-02 00:09:17 -07:00
Enrico Ros fbf9120859 Default to llama3 2024-05-01 23:59:09 -07:00
Enrico Ros 8a770beec3 Update Ollama models 2024-05-01 23:05:30 -07:00
Enrico Ros 6b31669765 Fix diagrams in Dark mode. Fixes #520 2024-05-01 22:54:53 -07:00
Enrico Ros 26d72fc2d8 DMesage: add metadata 2024-04-25 22:17:36 -07:00
Enrico Ros 5eb56d0994 Move Diff'er. 2024-04-25 22:16:14 -07:00
Enrico Ros dbc4a922d5 Message Toolbar: good looking too. 2024-04-25 22:15:20 -07:00
Enrico Ros 141f423842 Diagrams: auto-switch 2024-04-25 22:15:00 -07:00
Enrico Ros 667f2433ab Diagrams: enter 2024-04-25 22:14:59 -07:00
Enrico Ros fd930ef548 Message Toolbar: fix disappearance 2024-04-25 22:14:49 -07:00
Enrico Ros 7eadfb1a63 E: PageDrawerHeader style 2024-04-25 22:11:28 -07:00
Enrico Ros 67cb07ac92 E: Style 2024-04-25 21:53:10 -07:00
Enrico Ros 96d28c43fc Manifest: update 2024-04-25 18:38:56 -07:00
Enrico Ros e57e3f5f0a Code: soft wrap. Closes #517 2024-04-25 11:41:34 -07:00
Enrico Ros 7b99bd71da Update overlay buttons 2024-04-25 11:36:58 -07:00
Enrico Ros 861a037321 Tweaks 2024-04-24 18:51:40 -07:00
Enrico Ros 84cbe6c434 RenderCode: title looks 2024-04-24 18:33:45 -07:00
Enrico Ros 2cbb811523 RenderCode: fix titles 2024-04-24 12:32:17 -07:00
Enrico Ros 8ef4faa10f Llms: update 'latest' 2024-04-24 12:25:34 -07:00
Enrico Ros f6a1c9bf52 Diagrams: fix centering 2024-04-24 03:42:50 -07:00
Enrico Ros 5d9f6fb4f5 Code blocks: undo the removal of ? 2024-04-24 03:31:00 -07:00
Enrico Ros 66840a8ecd Diagrams: center Mermaid and PlantUML diagrams 2024-04-24 03:30:28 -07:00
Enrico Ros a8ee6b255a Diagrams: improve hotfixes for Haiku and 3.5 2024-04-24 03:30:16 -07:00
Enrico Ros bd73d1c533 Diagrams: improve prompts 2024-04-24 03:30:05 -07:00
Enrico Ros e33c0ebc42 Fix code block separation in case of nested blocks. 2024-04-24 02:42:43 -07:00
Enrico Ros 57e4a35fee AppChat: extract chat executor (1st step) 2024-04-24 01:59:49 -07:00
Enrico Ros d490b57410 Diagrams: improve instructions 2024-04-24 01:59:08 -07:00
Enrico Ros 0416602e5f Diagrams: improve dialog 2024-04-24 01:59:01 -07:00
Enrico Ros ddc27b2eb9 BlockCode: improve looks 2024-04-24 01:36:32 -07:00
Enrico Ros 374deb147b Composer: improve ReplyTo integration 2024-04-24 00:03:30 -07:00
Enrico Ros d2eabd1ad0 Composer: correctness of activation 2024-04-24 00:02:42 -07:00
Enrico Ros efbc625cc3 Composer: onAction callback 2024-04-23 23:52:09 -07:00
Enrico Ros 91ae0b8cb0 Codeblocks: broader inclusion of filenames 2024-04-23 23:46:20 -07:00
Enrico Ros ddc5741b00 Attachments: getCollapsedAttachments 2024-04-23 23:18:39 -07:00
Enrico Ros 4729aca6b0 ReplyTo: improve bubble 2024-04-23 22:56:05 -07:00
Enrico Ros bb4fc3a70c Anthropic: relax key validation on custom deployments. Closes #511 2024-04-23 20:32:08 -07:00
Enrico Ros 5d8084b650 Llms: streaming: cleanups 2024-04-23 05:07:55 -07:00
Enrico Ros f316b892f5 Revert "Llms: fix Streaming timeouts (2)"
This reverts commit cbda1d7cd0.
2024-04-23 03:15:07 -07:00
Enrico Ros cbda1d7cd0 Llms: fix Streaming timeouts (2) 2024-04-23 02:07:20 -07:00
Enrico Ros 2f8e879976 Llms: fix Streaming timeouts 2024-04-23 01:45:27 -07:00
Enrico Ros cc0ac5ae3c React: fix llm naming 2024-04-22 23:59:30 -07:00
Enrico Ros 0185d24fb3 Beam: improve Merge disablement 2024-04-22 23:59:08 -07:00
Enrico Ros 97dbdc9c31 Beam: improve inlining (not ready yet) 2024-04-22 23:58:26 -07:00
Enrico Ros a07c66c9a3 Beam: lay down some inlining code 2024-04-22 21:49:14 -07:00
Enrico Ros 308bd25bc0 Beam: improve Tutorial 2024-04-22 21:48:00 -07:00
Enrico Ros 70066a03b6 Explainer Carousel: improvements 2024-04-22 21:44:17 -07:00
Enrico Ros a7f3872af3 Beam: update bar icons 2024-04-22 16:38:26 -07:00
Enrico Ros 22e10e675a RMB on Chat Avatar brings up the menu 2024-04-22 16:31:30 -07:00
Enrico Ros 89679e946d Beam: remove optionality (/beam, chat mode, composer button & shortcut, message beam from) 2024-04-22 16:12:09 -07:00
Enrico Ros 1d1bb9d3df Beam: explain a possible missing user message 2024-04-22 15:58:39 -07:00
Enrico Ros 8faf2b2595 Beam: move scroll button to the Gather pane 2024-04-22 15:58:18 -07:00
Enrico Ros e47ad9700e Anthropic: workaround for history[0] being assistant 2024-04-22 15:40:48 -07:00
Enrico Ros 372b19a057 Formulas: fix rendering for OpenAI-style inline '\(' and block '\[' latex. Fixes #508 2024-04-22 04:39:12 -07:00
Enrico Ros cbe156a868 Merge branch 'refs/heads/main-stable' 2024-04-22 02:57:08 -07:00
Enrico Ros 181a3881e2 Groq: update models
(cherry picked from commit 3eef03b303)
2024-04-22 02:56:47 -07:00
Enrico Ros 3eef03b303 Groq: update models 2024-04-22 02:52:19 -07:00
Enrico Ros ad56e3165c Beam: fix pixel-bound loading of presets 2024-04-22 02:27:07 -07:00
Enrico Ros b1a96b6e75 Beam: clear heuristics for llm selection 2024-04-22 02:26:48 -07:00
Enrico Ros 56419b1b4e Beam: persist the last configuration 2024-04-22 02:19:17 -07:00
Enrico Ros 372f14a9c5 Beam: auto-configure from Elo 2024-04-22 01:01:43 -07:00
Enrico Ros e1ec56a120 Beam: remove fallbackLlmId 2024-04-22 01:01:33 -07:00
Enrico Ros 5bb11249d6 Beam: remove reactive (view-based) ray conf 2024-04-22 01:01:17 -07:00
Enrico Ros 9fbcca1ff2 Llms: avoid name clash 2024-04-22 00:54:41 -07:00
Enrico Ros 323f2b2c3e Llms: cleaner 2024-04-22 00:52:56 -07:00
Enrico Ros b971d38dd5 Llms: heuristic to auto-pick the best diverse LLMs 2024-04-22 00:49:06 -07:00
Enrico Ros 278f479a3a Beam: rename terminate 2024-04-22 00:48:36 -07:00
Enrico Ros 03aea5678d Llms: misc 2024-04-22 00:17:49 -07:00
Enrico Ros b62b8ee7e6 Beam: App: fix state 2024-04-22 00:12:49 -07:00
Enrico Ros 63f55551e5 Beam: gather show all prompts 2024-04-21 23:30:41 -07:00
Enrico Ros b185fbc57d Beam: fallback llm Id 2024-04-21 23:24:52 -07:00
Enrico Ros ceb9d58e72 Beam: fix import rays 2024-04-21 23:10:47 -07:00
Enrico Ros a0bb515a4f Beam: minor bits 2024-04-21 22:28:36 -07:00
Enrico Ros 2cfac2f18b Beam: combine two menus into one 2024-04-21 22:05:08 -07:00
Enrico Ros d412f538b2 Make it more explicit we're only not rolling this one. 2024-04-21 21:30:26 -07:00
Enrico Ros 94f90ad861 Roll packages, but hold Next back. 2024-04-21 21:22:47 -07:00
Enrico Ros 4a402e7937 Roll pdfjs 2024-04-21 21:19:30 -07:00
Enrico Ros c226d6c391 Lock Next to 14.1, as 14.2 introduces the async/await messages when running/building, and we don't know what it means yet.
"The generated code contains 'async/await' because this module is using "topLevelAwait"."

See: https://github.com/vercel/next.js/issues/64792
2024-04-21 21:17:24 -07:00
Enrico Ros 67410e6c59 Revert "Roll packages." - Next v14.2.2 shows some async/await messages.
See https://github.com/vercel/next.js/issues/64792

This reverts commit 419c361147.
2024-04-21 21:12:32 -07:00
Enrico Ros 419c361147 Roll packages. 2024-04-21 20:39:56 -07:00
Enrico Ros 3769a53ffa Merge pull request #507 from mludvig/arm-build-1
Build multi-arch docker image for x64-64 and ARM64
2024-04-15 22:04:07 -07:00
Michael Ludvig ec4aaa3bfb Cleanup 2024-04-16 16:51:57 +12:00
Michael Ludvig be52680fcd Put back hashes and comments 2024-04-16 16:20:48 +12:00
Michael Ludvig 9d41ab9339 Merge branch 'enricoros:main' into arm-build-1 2024-04-16 12:36:23 +12:00
Michael Ludvig f126fc3087 Cleanup 2024-04-16 11:52:58 +12:00
Michael Ludvig 764377037c Disabled arm 32 again (not supported by Prisma) 2024-04-16 11:22:15 +12:00
Michael Ludvig 8e09eaab45 Add sha tag 2024-04-16 11:10:32 +12:00
Michael Ludvig 6523da186c Update versions, add arm32 2024-04-16 10:29:18 +12:00
Michael Ludvig 6471fd8b6f Enable action 2024-04-16 10:01:41 +12:00
Michael Ludvig 247a74881a Added buildx support 2024-04-15 11:34:42 +12:00
Enrico Ros 3ef09f0a5f Models: upgrade data structure to v2 - auto-pick 2024-04-12 05:50:46 -07:00
Enrico Ros b924d331f9 Models: upgrade data structure to v2 2024-04-12 05:36:18 -07:00
Enrico Ros 14041b6012 Beam: simplify a bit 2024-04-12 03:44:54 -07:00
Enrico Ros 2c6cc5ecec Cleanup models update logic 2024-04-12 02:44:14 -07:00
Enrico Ros ac022b1df0 Models: adding prices and benchmarks for a few models 2024-04-12 02:09:14 -07:00
Enrico Ros 0a2081de08 Better Beam Hint 2024-04-12 01:06:25 -07:00
Enrico Ros 64a8e554c7 Designer update 2024-04-12 00:46:58 -07:00
Enrico Ros 082d29fd2f Improve style 2024-04-12 00:45:00 -07:00
Enrico Ros ba5cf9d002 Composer: show the bubble 2024-04-12 00:22:55 -07:00
Enrico Ros 57a55318df Stabilize 2024-04-12 00:07:40 -07:00
Enrico Ros e70f4f7a59 ChatMessageList: this side is probably done 2024-04-11 21:10:56 -07:00
Enrico Ros 1d217fad67 Warning 2024-04-11 21:10:39 -07:00
Enrico Ros e95d46f085 ConversationHandler: prepare chat overlays 2024-04-11 21:08:04 -07:00
Enrico Ros f4577878e1 ChatMessage: Reply on 2024-04-11 20:36:32 -07:00
Enrico Ros 1bd1e5c8e3 ChatMessage: Toolbar complete 2024-04-11 20:19:30 -07:00
Enrico Ros c975dee965 ChatMessageList: remove menu items if t2i off 2024-04-11 19:22:03 -07:00
Enrico Ros 9d690f4219 ChatMessage: fix double-closure 2024-04-11 18:22:12 -07:00
Enrico Ros 29ddb3f58d ChatMessage: improve menu 2024-04-11 18:12:44 -07:00
Enrico Ros 8626bc0b1c BlocksRenderer: selection color 2024-04-11 18:12:37 -07:00
Enrico Ros c362cf6596 Propagate information on whether this can be spoken 2024-04-11 17:52:50 -07:00
Enrico Ros 97264fc5ff ChatMessage: toolbar framework 2024-04-11 17:04:44 -07:00
Enrico Ros 494c4409c1 BlocksRenderer: more v-padding for an improved mouse-up behavior 2024-04-11 16:40:47 -07:00
Enrico Ros d46e366c81 Blocks Renderer: use refs 2024-04-11 13:16:13 -07:00
Enrico Ros 6afe33ee9c decolor 2024-04-11 10:13:54 -07:00
Enrico Ros 903c9e1cc3 Improve options 2024-04-11 10:12:03 -07:00
Enrico Ros 3ef43fc3f5 Merge branch 'joriskalz-chat-with-youtube' 2024-04-11 09:58:56 -07:00
Enrico Ros b1c3be05dd Integrate YouTube transcriber (hidden by default) 2024-04-11 09:58:45 -07:00
Enrico Ros efee23b4a7 Update shadows 2024-04-11 09:49:13 -07:00
Enrico Ros 06b67a7586 Merge branch 'chat-with-youtube' of https://github.com/joriskalz/big-AGI-dev into joriskalz-chat-with-youtube 2024-04-11 09:33:56 -07:00
Joris Kalz 889a2dbf9d Remvoved unwanted new line. 2024-04-11 11:45:03 +01:00
Joris Kalz 2f80fcc888 Removed comments 2024-04-11 11:43:54 +01:00
Joris Kalz f7ee479c1d Removed comments 2024-04-11 11:36:27 +01:00
Joris Kalz 94fa0981fe Update YouTube Transcriber voiceId in data.ts 2024-04-11 11:33:55 +01:00
Joris Kalz 4c74afe438 Update YouTube Transcriber system message in data.ts 2024-04-11 11:33:42 +01:00
Joris Kalz f76cea22de Fix YouTube Transcriber activation bug in PersonaSelector component 2024-04-10 22:18:35 +01:00
Joris Kalz 3d49110808 Implement handleAddMessage function in PersonaSelector component 2024-04-10 22:14:15 +01:00
Joris Kalz 88a4579f7a Refactor PersonaSelector component to handle YouTube Transcriber tile click 2024-04-10 22:00:29 +01:00
Joris Kalz 241bde0333 Update YouTubeURLInput component to handle YouTube video transcripts 2024-04-10 21:48:20 +01:00
Joris Kalz 73c7867cd6 Add YouTube Transcriber persona and handle YouTube Transcriber tile click 2024-04-10 11:53:48 +01:00
Enrico Ros b35254f7ad Qol 2024-04-10 03:14:15 -07:00
Enrico Ros 213e78c956 Beam: save the merge model, and shrink rays when loading a smaller preset 2024-04-10 03:01:18 -07:00
128 changed files with 3325 additions and 1789 deletions
+9 -1
@@ -32,6 +32,12 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Log in to the Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
with:
@@ -49,13 +55,15 @@ jobs:
type=raw,value=stable,enable=${{ github.ref == 'refs/heads/main-stable' }}
type=ref,event=tag # Use the tag name as a tag for tag builds
type=semver,pattern={{version}} # Generate semantic versioning tags for tag builds
type=sha # Just in case none of the above applies
- name: Build and push Docker image
uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
with:
context: .
file: Dockerfile
platforms: linux/amd64,linux/arm64
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
build-args: NEXT_PUBLIC_GA4_MEASUREMENT_ID=${{ secrets.GA4_MEASUREMENT_ID }}
+21 -10
@@ -17,18 +17,22 @@ Or fork & run on Vercel
## 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2) 👉 [installation](docs/installation.md) 👉 [documentation](docs/README.md)
> Note: bigger better features (incl. Beam-2) are being cooked outside of `main`.
[//]: # (big-AGI is an open book; see the **[ready-to-ship and future ideas](https://github.com/users/enricoros/projects/4/views/2)** in our open roadmap)
#### What's New in 1.15.1 · April 10, 2024 (minor release, models support)
### What's New in 1.16.0 · May 9, 2024 · Crystal Clear
- Support for the newly released Gemini Pro 1.5 models
- Support for the new OpenAI 2024-04-09 Turbo models
- Ctrl+S and Ctrl+O to save/load chats on desktop
- Resilience fixes after the large success of 1.15.0
- [Beam](https://big-agi.com/blog/beam-multi-model-ai-reasoning) core and UX improvements based on user feedback
- Chat cost estimation 💰 (enable it in Labs / hover the token counter)
- Save/load chat files with Ctrl+S / Ctrl+O on desktop
- Major enhancements to the Auto-Diagrams tool
- YouTube Transcriber Persona for chatting with video content, [#500](https://github.com/enricoros/big-AGI/pull/500)
- Improved formula rendering (LaTeX), and dark-mode diagrams, [#508](https://github.com/enricoros/big-AGI/issues/508), [#520](https://github.com/enricoros/big-AGI/issues/520)
- Models update: **Anthropic**, **Groq**, **Ollama**, **OpenAI**, **OpenRouter**, **Perplexity**
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more [#517](https://github.com/enricoros/big-AGI/issues/517), [507](https://github.com/enricoros/big-AGI/pull/507)
> Note: Beam-2 and new larger features are being cooked outside of `main`.
### 3,000 Commits Milestone · April 7, 2024
#### 3,000 Commits Milestone · April 7, 2024
![big-AGI Milestone](https://github.com/enricoros/big-AGI/assets/32999/47fddbb1-9bd6-4b58-ace4-781dfcb80923)
@@ -42,9 +46,11 @@ Or fork & run on Vercel
- Message **Starring ⭐**: star important messages within chats, to attach them later. [#476](https://github.com/enricoros/big-AGI/issues/476)
- Enhanced the default Persona
- Fixes to Gemini models and SVGs, improvements to UI and icons
- 1.15.1: Support for Gemini Pro 1.5 and OpenAI Turbo models
- Beast release, over 430 commits, 10,000+ lines changed: [release notes](https://github.com/enricoros/big-AGI/releases/tag/v1.15.0), and changes [v1.14.1...v1.15.0](https://github.com/enricoros/big-AGI/compare/v1.14.1...v1.15.0)
### What's New in 1.14.1 · March 7, 2024 · Modelmorphic
<details>
<summary>What's New in 1.14.1 · March 7, 2024 · Modelmorphic</summary>
- **Anthropic** [Claude-3](https://www.anthropic.com/news/claude-3-family) model family support. [#443](https://github.com/enricoros/big-AGI/issues/443)
- New **[Perplexity](https://www.perplexity.ai/)** and **[Groq](https://groq.com/)** integration (thanks @Penagwin). [#407](https://github.com/enricoros/big-AGI/issues/407), [#427](https://github.com/enricoros/big-AGI/issues/427)
@@ -54,7 +60,10 @@ Or fork & run on Vercel
- Enhanced UX with auto-sizing charts, refined search and folder functionalities, perfected scaling
- And with more UI improvements, documentation, bug fixes (20 tickets), and developer enhancements
### What's New in 1.13.0 · Feb 8, 2024 · Multi + Mind
</details>
<details>
<summary>What's New in 1.13.0 · Feb 8, 2024 · Multi + Mind</summary>
https://github.com/enricoros/big-AGI/assets/32999/01732528-730e-41dc-adc7-511385686b13
@@ -66,6 +75,8 @@ https://github.com/enricoros/big-AGI/assets/32999/01732528-730e-41dc-adc7-511385
- Better looking chats with improved spacing, fonts, and menus
- More: new video player, [LM Studio tutorial](https://github.com/enricoros/big-AGI/blob/main/docs/config-local-lmstudio.md) (thanks @aj47), [MongoDB support](https://github.com/enricoros/big-AGI/blob/main/docs/deploy-database.md) (thanks @ranfysvalle02), and speedups
</details>
<details>
<summary>What's New in 1.12.0 · Jan 26, 2024 · AGI Hotline</summary>
+20 -2
@@ -5,11 +5,29 @@ by release.
- For the live roadmap, please see [the GitHub project](https://github.com/users/enricoros/projects/4/views/2)
### 1.16.0 - Mar 2024
### 1.17.0 - Jun 2024
- milestone: [1.16.0](https://github.com/enricoros/big-agi/milestone/16)
- milestone: [1.17.0](https://github.com/enricoros/big-agi/milestone/17)
- work in progress: [big-AGI open roadmap](https://github.com/users/enricoros/projects/4/views/2), [help here](https://github.com/users/enricoros/projects/4/views/4)
### What's New in 1.16.0 · May 9, 2024 · Crystal Clear
- [Beam](https://big-agi.com/blog/beam-multi-model-ai-reasoning) core and UX improvements based on user feedback
- Chat cost estimation 💰 (enable it in Labs / hover the token counter)
- Save/load chat files with Ctrl+S / Ctrl+O on desktop
- Major enhancements to the Auto-Diagrams tool
- YouTube Transcriber Persona for chatting with video content, [#500](https://github.com/enricoros/big-AGI/pull/500)
- Improved formula rendering (LaTeX), and dark-mode diagrams, [#508](https://github.com/enricoros/big-AGI/issues/508), [#520](https://github.com/enricoros/big-AGI/issues/520)
- Models update: **Anthropic**, **Groq**, **Ollama**, **OpenAI**, **OpenRouter**, **Perplexity**
- Code soft-wrap, chat text selection toolbar, 3x faster on Apple silicon, and more [#517](https://github.com/enricoros/big-AGI/issues/517), [507](https://github.com/enricoros/big-AGI/pull/507)
- Developers: update the LLMs data structures
### What's New in 1.15.1 · April 10, 2024 (minor release, models support)
- Support for the newly released Gemini Pro 1.5 models
- Support for the new OpenAI 2024-04-09 Turbo models
- Resilience fixes after the large success of 1.15.0
### What's New in 1.15.0 · April 1, 2024 · Beam
- ⚠️ [**Beam**: the multi-model AI chat](https://big-agi.com/blog/beam-multi-model-ai-reasoning). find better answers, faster - a game-changer for brainstorming, decision-making, and creativity. [#443](https://github.com/enricoros/big-AGI/issues/443)
+2 -2
@@ -68,7 +68,7 @@ The chat agent won't be able to access the web sites if the browserless containe
- MAX_CONCURRENT_SESSIONS=10
```
You can then add the proyy lines to your `.env` file.
You can then add the proxy lines to your `.env` file.
```
https_proxy=http://PROXY-IP:PROXY-PORT
@@ -115,4 +115,4 @@ If you encounter any issues or have questions about configuring the browse funct
Enjoy the enhanced browsing experience within `big-AGI` and explore the web without ever leaving your chat!
Last updated on Feb 27, 2024 ([edit on GitHub](https://github.com/enricoros/big-AGI/edit/main/docs/config-feature-browse.md))
+11 -7
@@ -72,15 +72,19 @@ Then, edit the nginx configuration file `/etc/nginx/sites-enabled/default` and a
```nginx
location /ollama/ {
proxy_pass http://localhost:11434;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
proxy_pass http://127.0.0.1:11434/;
# Disable buffering for the streaming responses
# Disable buffering for the streaming responses (SSE)
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
# Longer timeouts
proxy_read_timeout 3600;
proxy_connect_timeout 3600;
proxy_send_timeout 3600;
}
```
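The buffering and timeout directives above exist to keep streamed model output flowing through nginx instead of being held back and delivered in one burst. As a rough TypeScript sketch (not part of the diff; the `/ollama/` path follows the `location` block above, and the payload follows Ollama's `/api/generate` API), this is the kind of streaming client that breaks when the proxy buffers:

```ts
// Hypothetical client reading a streamed Ollama completion through the /ollama/ proxy above.
// With proxy_buffering left on, these chunks would arrive all at once at the end,
// instead of incrementally as the model generates tokens.
async function streamThroughProxy(prompt: string): Promise<void> {
  const response = await fetch('/ollama/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3', prompt, stream: true }),
  });
  if (!response.body) throw new Error('no streamable response body');
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true })); // newline-delimited JSON chunks from Ollama
  }
}
```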
+4 -4
@@ -47,10 +47,10 @@ the same steps 1 and 2 as for [local development](#local-development).
# .. repeat the steps above up to `npm install`, then:
npm run build
```
4. Start the production server:
4. Start the production server (`npx` may be optional):
```bash
next start --port 3000
```
npx next start --port 3000
```
Your big-AGI production instance is on `http://localhost:3000`.
### Advanced Customization
@@ -116,4 +116,4 @@ Join our vibrant community of developers, researchers, and AI enthusiasts. Share
- [Discord Community](https://discord.gg/MkH4qj2Jp9)
- [Twitter](https://twitter.com/yourusername)
For any questions or inquiries, please don't hesitate to [reach out to our team](mailto:hello@big-agi.com).
+349 -96
@@ -1,12 +1,12 @@
{
"name": "big-agi",
"version": "1.15.1",
"version": "1.16.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "big-agi",
"version": "1.15.1",
"version": "1.16.0",
"hasInstallScript": true,
"dependencies": {
"@emotion/cache": "^11.11.0",
@@ -15,8 +15,8 @@
"@emotion/styled": "^11.11.5",
"@mui/icons-material": "^5.15.15",
"@mui/joy": "^5.0.0-beta.32",
"@next/bundle-analyzer": "^14.1.4",
"@next/third-parties": "^14.2.0-canary.60",
"@next/bundle-analyzer": "^14.2.2",
"@next/third-parties": "^14.2.2",
"@prisma/client": "^5.12.1",
"@sanity/diff-match-patch": "^3.1.1",
"@t3-oss/env-nextjs": "^0.9.2",
@@ -30,9 +30,9 @@
"browser-fs-access": "^0.35.0",
"eventsource-parser": "^1.1.2",
"idb-keyval": "^6.2.1",
"next": "^14.1.4",
"next": "~14.1.4",
"nprogress": "^0.2.0",
"pdfjs-dist": "4.0.379",
"pdfjs-dist": "4.1.392",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"react": "^18.2.0",
@@ -41,36 +41,38 @@
"react-dom": "^18.2.0",
"react-katex": "^3.0.1",
"react-markdown": "^9.0.1",
"react-player": "^2.15.1",
"react-resizable-panels": "^2.0.16",
"react-player": "^2.16.0",
"react-resizable-panels": "^2.0.18",
"react-timeago": "^7.2.0",
"rehype-katex": "^7.0.0",
"remark-gfm": "^4.0.0",
"remark-math": "^6.0.0",
"sharp": "^0.33.3",
"superjson": "^2.2.1",
"tesseract.js": "^5.0.5",
"tiktoken": "^1.0.13",
"tiktoken": "^1.0.14",
"uuid": "^9.0.1",
"zod": "^3.22.4",
"zod": "^3.23.0",
"zustand": "^4.5.2"
},
"devDependencies": {
"@cloudflare/puppeteer": "0.0.5",
"@types/node": "^20.12.5",
"@types/node": "^20.12.7",
"@types/nprogress": "^0.2.3",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.3",
"@types/react": "^18.2.74",
"@types/react": "^18.2.79",
"@types/react-beautiful-dnd": "^13.1.8",
"@types/react-csv": "^1.1.10",
"@types/react-dom": "^18.2.24",
"@types/react-dom": "^18.2.25",
"@types/react-katex": "^3.0.4",
"@types/react-timeago": "^4.1.7",
"@types/uuid": "^9.0.8",
"eslint": "^8.57.0",
"eslint-config-next": "^14.1.4",
"eslint-config-next": "14.2.2",
"prettier": "^3.2.5",
"prisma": "^5.12.1",
"typescript": "^5.4.4"
"typescript": "^5.4.5"
},
"engines": {
"node": "^20.0.0 || ^18.0.0"
@@ -1317,9 +1319,9 @@
}
},
"node_modules/@next/bundle-analyzer": {
"version": "14.1.4",
"resolved": "https://registry.npmjs.org/@next/bundle-analyzer/-/bundle-analyzer-14.1.4.tgz",
"integrity": "sha512-IpF/18HcAOcfHRr24tqPOUpMmVKIqvkCxIubMeRYWCXs3jm7niPGrt8Mu74yMDzfGlUwgQA6Xd6BUc5+jQxcEg==",
"version": "14.2.2",
"resolved": "https://registry.npmjs.org/@next/bundle-analyzer/-/bundle-analyzer-14.2.2.tgz",
"integrity": "sha512-Zp2xG3VTPHUquOcBaRtrr0/n7mqnjKUmprGcJXPEKGgP5rAsLymIfWKm3jIVWIw5Eb4fNOfX4v+L+qiSvs+OJw==",
"dependencies": {
"webpack-bundle-analyzer": "4.10.1"
}
@@ -1330,9 +1332,9 @@
"integrity": "sha512-e7X7bbn3Z6DWnDi75UWn+REgAbLEqxI8Tq2pkFOFAMpWAWApz/YCUhtWMWn410h8Q2fYiYL7Yg5OlxMOCfFjJQ=="
},
"node_modules/@next/eslint-plugin-next": {
"version": "14.1.4",
"resolved": "https://registry.npmjs.org/@next/eslint-plugin-next/-/eslint-plugin-next-14.1.4.tgz",
"integrity": "sha512-n4zYNLSyCo0Ln5b7qxqQeQ34OZKXwgbdcx6kmkQbywr+0k6M3Vinft0T72R6CDAcDrne2IAgSud4uWCzFgc5HA==",
"version": "14.2.2",
"resolved": "https://registry.npmjs.org/@next/eslint-plugin-next/-/eslint-plugin-next-14.2.2.tgz",
"integrity": "sha512-q+Ec2648JtBpKiu/FSJm8HAsFXlNvioHeBCbTP12T1SGcHYwhqHULSfQgFkPgHDu3kzNp2Kem4J54bK4rPQ5SQ==",
"dev": true,
"dependencies": {
"glob": "10.3.10"
@@ -1474,9 +1476,9 @@
}
},
"node_modules/@next/third-parties": {
"version": "14.2.0-canary.60",
"resolved": "https://registry.npmjs.org/@next/third-parties/-/third-parties-14.2.0-canary.60.tgz",
"integrity": "sha512-Y/B3WxgsDkaDmSKGs/C9N/DwVYXAA0RtDOcCCGCwS+anpzcfD3wJBC6Ms9C1ieP1V65HN2NdvA2beGGJYPARKA==",
"version": "14.2.2",
"resolved": "https://registry.npmjs.org/@next/third-parties/-/third-parties-14.2.2.tgz",
"integrity": "sha512-udHgllytb8GPbqghxIDf09E7x/4hYgp7WjmfH1Z3u4EG29Mhf12NyXpc49wtd0k3rLydunqDa4MH9ej2y5Ph/A==",
"dependencies": {
"third-party-capital": "1.0.20"
},
@@ -1607,9 +1609,9 @@
}
},
"node_modules/@rushstack/eslint-patch": {
"version": "1.10.1",
"resolved": "https://registry.npmjs.org/@rushstack/eslint-patch/-/eslint-patch-1.10.1.tgz",
"integrity": "sha512-S3Kq8e7LqxkA9s7HKLqXGTGck1uwis5vAXan3FnU5yw1Ec5hsSGnq4s/UCaSqABPOnOTg7zASLyst7+ohgWexg==",
"version": "1.10.2",
"resolved": "https://registry.npmjs.org/@rushstack/eslint-patch/-/eslint-patch-1.10.2.tgz",
"integrity": "sha512-hw437iINopmQuxWPSUEvqE56NCPsiU8N4AYtfHmJFckclktzK9YQJieD3XkDCDH4OjL+C7zgPUh73R/nrcHrqw==",
"dev": true
},
"node_modules/@sanity/diff-match-patch": {
@@ -1795,6 +1797,11 @@
"integrity": "sha512-dRLjCWHYg4oaA77cxO64oO+7JwCwnIzkZPdrrC71jQmQtlhM556pwKo5bUzqvZndkVbeFLIIi+9TC40JNF5hNQ==",
"dev": true
},
"node_modules/@types/katex": {
"version": "0.16.7",
"resolved": "https://registry.npmjs.org/@types/katex/-/katex-0.16.7.tgz",
"integrity": "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ=="
},
"node_modules/@types/mdast": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-4.0.3.tgz",
@@ -1809,9 +1816,9 @@
"integrity": "sha512-nG96G3Wp6acyAgJqGasjODb+acrI7KltPiRxzHPXnP3NgI28bpQDRv53olbqGXbfcgF5aiiHmO3xpwEpS5Ld9g=="
},
"node_modules/@types/node": {
"version": "20.12.5",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.12.5.tgz",
"integrity": "sha512-BD+BjQ9LS/D8ST9p5uqBxghlN+S42iuNxjsUGjeZobe/ciXzk2qb1B6IXc6AnRLS+yFJRpN2IPEHMzwspfDJNw==",
"version": "20.12.7",
"resolved": "https://registry.npmjs.org/@types/node/-/node-20.12.7.tgz",
"integrity": "sha512-wq0cICSkRLVaf3UGLMGItu/PtdY7oaXaI/RVU+xliKVOtRna3PRY57ZDfztpDL0n11vfymMUnXv8QwYCO7L1wg==",
"dev": true,
"dependencies": {
"undici-types": "~5.26.4"
@@ -1849,9 +1856,9 @@
"integrity": "sha512-5zvhXYtRNRluoE/jAp4GVsSduVUzNWKkOZrCDBWYtE7biZywwdC2AcEzg+cSMLFRfVgeAFqpfNabiPjxFddV1Q=="
},
"node_modules/@types/react": {
"version": "18.2.74",
"resolved": "https://registry.npmjs.org/@types/react/-/react-18.2.74.tgz",
"integrity": "sha512-9AEqNZZyBx8OdZpxzQlaFEVCSFUM2YXJH46yPOiOpm078k6ZLOCcuAzGum/zK8YBwY+dbahVNbHrbgrAwIRlqw==",
"version": "18.2.79",
"resolved": "https://registry.npmjs.org/@types/react/-/react-18.2.79.tgz",
"integrity": "sha512-RwGAGXPl9kSXwdNTafkOEuFrTBD5SA2B3iEB96xi8+xu5ddUa/cpvyVCSNn+asgLCTHkb5ZxN8gbuibYJi4s1w==",
"dependencies": {
"@types/prop-types": "*",
"csstype": "^3.0.2"
@@ -1876,9 +1883,9 @@
}
},
"node_modules/@types/react-dom": {
"version": "18.2.24",
"resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-18.2.24.tgz",
"integrity": "sha512-cN6upcKd8zkGy4HU9F1+/s98Hrp6D4MOcippK4PoE8OZRngohHZpbJn1GsaDLz87MqvHNoT13nHvNqM9ocRHZg==",
"version": "18.2.25",
"resolved": "https://registry.npmjs.org/@types/react-dom/-/react-dom-18.2.25.tgz",
"integrity": "sha512-o/V48vf4MQh7juIKZU2QGDfli6p1+OOi5oXx36Hffpc9adsHeXjVp8rHuPkjd8VT8sOJ2Zp05HR7CdpGTIUFUA==",
"dev": true,
"dependencies": {
"@types/react": "*"
@@ -1934,15 +1941,15 @@
"dev": true
},
"node_modules/@typescript-eslint/parser": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-6.21.0.tgz",
"integrity": "sha512-tbsV1jPne5CkFQCgPBcDOt30ItF7aJoZL997JSF7MhGQqOeT3svWRYxiqlfA5RUdlHN6Fi+EI9bxqbdyAUZjYQ==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/parser/-/parser-7.2.0.tgz",
"integrity": "sha512-5FKsVcHTk6TafQKQbuIVkXq58Fnbkd2wDL4LB7AURN7RUOu1utVP+G8+6u3ZhEroW3DF6hyo3ZEXxgKgp4KeCg==",
"dev": true,
"dependencies": {
"@typescript-eslint/scope-manager": "6.21.0",
"@typescript-eslint/types": "6.21.0",
"@typescript-eslint/typescript-estree": "6.21.0",
"@typescript-eslint/visitor-keys": "6.21.0",
"@typescript-eslint/scope-manager": "7.2.0",
"@typescript-eslint/types": "7.2.0",
"@typescript-eslint/typescript-estree": "7.2.0",
"@typescript-eslint/visitor-keys": "7.2.0",
"debug": "^4.3.4"
},
"engines": {
@@ -1953,7 +1960,7 @@
"url": "https://opencollective.com/typescript-eslint"
},
"peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0"
"eslint": "^8.56.0"
},
"peerDependenciesMeta": {
"typescript": {
@@ -1962,13 +1969,13 @@
}
},
"node_modules/@typescript-eslint/scope-manager": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-6.21.0.tgz",
"integrity": "sha512-OwLUIWZJry80O99zvqXVEioyniJMa+d2GrqpUTqi5/v5D5rOrppJVBPa0yKCblcigC0/aYAzxxqQ1B+DS2RYsg==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/scope-manager/-/scope-manager-7.2.0.tgz",
"integrity": "sha512-Qh976RbQM/fYtjx9hs4XkayYujB/aPwglw2choHmf3zBjB4qOywWSdt9+KLRdHubGcoSwBnXUH2sR3hkyaERRg==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.21.0",
"@typescript-eslint/visitor-keys": "6.21.0"
"@typescript-eslint/types": "7.2.0",
"@typescript-eslint/visitor-keys": "7.2.0"
},
"engines": {
"node": "^16.0.0 || >=18.0.0"
@@ -1979,9 +1986,9 @@
}
},
"node_modules/@typescript-eslint/types": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-6.21.0.tgz",
"integrity": "sha512-1kFmZ1rOm5epu9NZEZm1kckCDGj5UJEf7P1kliH4LKu/RkwpsfqqGmY2OOcUs18lSlQBKLDYBOGxRVtrMN5lpg==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/types/-/types-7.2.0.tgz",
"integrity": "sha512-XFtUHPI/abFhm4cbCDc5Ykc8npOKBSJePY3a3s+lwumt7XWJuzP5cZcfZ610MIPHjQjNsOLlYK8ASPaNG8UiyA==",
"dev": true,
"engines": {
"node": "^16.0.0 || >=18.0.0"
@@ -1992,13 +1999,13 @@
}
},
"node_modules/@typescript-eslint/typescript-estree": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-6.21.0.tgz",
"integrity": "sha512-6npJTkZcO+y2/kr+z0hc4HwNfrrP4kNYh57ek7yCNlrBjWQ1Y0OS7jiZTkgumrvkX5HkEKXFZkkdFNkaW2wmUQ==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/typescript-estree/-/typescript-estree-7.2.0.tgz",
"integrity": "sha512-cyxS5WQQCoBwSakpMrvMXuMDEbhOo9bNHHrNcEWis6XHx6KF518tkF1wBvKIn/tpq5ZpUYK7Bdklu8qY0MsFIA==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.21.0",
"@typescript-eslint/visitor-keys": "6.21.0",
"@typescript-eslint/types": "7.2.0",
"@typescript-eslint/visitor-keys": "7.2.0",
"debug": "^4.3.4",
"globby": "^11.1.0",
"is-glob": "^4.0.3",
@@ -2044,12 +2051,12 @@
}
},
"node_modules/@typescript-eslint/visitor-keys": {
"version": "6.21.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-6.21.0.tgz",
"integrity": "sha512-JJtkDduxLi9bivAB+cYOVMtbkqdPOhZ+ZI5LC47MIRrDV4Yn2o+ZnW10Nkmr28xRpSpdJ6Sm42Hjf2+REYXm0A==",
"version": "7.2.0",
"resolved": "https://registry.npmjs.org/@typescript-eslint/visitor-keys/-/visitor-keys-7.2.0.tgz",
"integrity": "sha512-c6EIQRHhcpl6+tO8EMR+kjkkV+ugUNXOmeASA1rlzkd8EPIriavpWoiEz1HR/VLhbVIdhqnV6E7JZm00cBDx2A==",
"dev": true,
"dependencies": {
"@typescript-eslint/types": "6.21.0",
"@typescript-eslint/types": "7.2.0",
"eslint-visitor-keys": "^3.4.1"
},
"engines": {
@@ -2574,9 +2581,9 @@
}
},
"node_modules/caniuse-lite": {
"version": "1.0.30001606",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001606.tgz",
"integrity": "sha512-LPbwnW4vfpJId225pwjZJOgX1m9sGfbw/RKJvw/t0QhYOOaTXHvkjVGFGPpvwEzufrjvTlsULnVTxdy4/6cqkg==",
"version": "1.0.30001612",
"resolved": "https://registry.npmjs.org/caniuse-lite/-/caniuse-lite-1.0.30001612.tgz",
"integrity": "sha512-lFgnZ07UhaCcsSZgWW0K5j4e69dK1u/ltrL9lTUiFOwNHs12S3UMIEYgBV0Z6C6hRDev7iRnMzzYmKabYdXF9g==",
"funding": [
{
"type": "opencollective",
@@ -3134,6 +3141,17 @@
"node": ">=10.13.0"
}
},
"node_modules/entities": {
"version": "4.5.0",
"resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz",
"integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==",
"engines": {
"node": ">=0.12"
},
"funding": {
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/error-ex": {
"version": "1.3.2",
"resolved": "https://registry.npmjs.org/error-ex/-/error-ex-1.3.2.tgz",
@@ -3367,14 +3385,14 @@
}
},
"node_modules/eslint-config-next": {
"version": "14.1.4",
"resolved": "https://registry.npmjs.org/eslint-config-next/-/eslint-config-next-14.1.4.tgz",
"integrity": "sha512-cihIahbhYAWwXJwZkAaRPpUi5t9aOi/HdfWXOjZeUOqNWXHD8X22kd1KG58Dc3MVaRx3HoR/oMGk2ltcrqDn8g==",
"version": "14.2.2",
"resolved": "https://registry.npmjs.org/eslint-config-next/-/eslint-config-next-14.2.2.tgz",
"integrity": "sha512-12/uFc0KX+wUs7EDpOUGKMXBXZJiBVGdK5/m/QgXOCg2mQ0bQWoKSWNrCeOg7Vum6Kw1d1TW453W6xh+GbHquw==",
"dev": true,
"dependencies": {
"@next/eslint-plugin-next": "14.1.4",
"@next/eslint-plugin-next": "14.2.2",
"@rushstack/eslint-patch": "^1.3.3",
"@typescript-eslint/parser": "^5.4.2 || ^6.0.0",
"@typescript-eslint/parser": "^5.4.2 || ^6.0.0 || 7.0.0 - 7.2.0",
"eslint-import-resolver-node": "^0.3.6",
"eslint-import-resolver-typescript": "^3.5.2",
"eslint-plugin-import": "^2.28.1",
@@ -4289,6 +4307,95 @@
"node": ">= 0.4"
}
},
"node_modules/hast-util-from-dom": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/hast-util-from-dom/-/hast-util-from-dom-5.0.0.tgz",
"integrity": "sha512-d6235voAp/XR3Hh5uy7aGLbM3S4KamdW0WEgOaU1YoewnuYw4HXb5eRtv9g65m/RFGEfUY1Mw4UqCc5Y8L4Stg==",
"dependencies": {
"@types/hast": "^3.0.0",
"hastscript": "^8.0.0",
"web-namespaces": "^2.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-html": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/hast-util-from-html/-/hast-util-from-html-2.0.1.tgz",
"integrity": "sha512-RXQBLMl9kjKVNkJTIO6bZyb2n+cUH8LFaSSzo82jiLT6Tfc+Pt7VQCS+/h3YwG4jaNE2TA2sdJisGWR+aJrp0g==",
"dependencies": {
"@types/hast": "^3.0.0",
"devlop": "^1.1.0",
"hast-util-from-parse5": "^8.0.0",
"parse5": "^7.0.0",
"vfile": "^6.0.0",
"vfile-message": "^4.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-html-isomorphic": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/hast-util-from-html-isomorphic/-/hast-util-from-html-isomorphic-2.0.0.tgz",
"integrity": "sha512-zJfpXq44yff2hmE0XmwEOzdWin5xwH+QIhMLOScpX91e/NSGPsAzNCvLQDIEPyO2TXi+lBmU6hjLIhV8MwP2kw==",
"dependencies": {
"@types/hast": "^3.0.0",
"hast-util-from-dom": "^5.0.0",
"hast-util-from-html": "^2.0.0",
"unist-util-remove-position": "^5.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-from-parse5": {
"version": "8.0.1",
"resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-8.0.1.tgz",
"integrity": "sha512-Er/Iixbc7IEa7r/XLtuG52zoqn/b3Xng/w6aZQ0xGVxzhw5xUFxcRqdPzP6yFi/4HBYRaifaI5fQ1RH8n0ZeOQ==",
"dependencies": {
"@types/hast": "^3.0.0",
"@types/unist": "^3.0.0",
"devlop": "^1.0.0",
"hastscript": "^8.0.0",
"property-information": "^6.0.0",
"vfile": "^6.0.0",
"vfile-location": "^5.0.0",
"web-namespaces": "^2.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-is-element": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-3.0.0.tgz",
"integrity": "sha512-Val9mnv2IWpLbNPqc/pUem+a7Ipj2aHacCwgNfTiK0vJKl0LF+4Ba4+v1oPHFpf3bLYmreq0/l3Gud9S5OH42g==",
"dependencies": {
"@types/hast": "^3.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-parse-selector": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-4.0.0.tgz",
"integrity": "sha512-wkQCkSYoOGCRKERFWcxMVMOcYE2K1AaNLU8DXS9arxnLOUEWbOXKXiJUNzEpqZ3JOKpnha3jkFrumEjVliDe7A==",
"dependencies": {
"@types/hast": "^3.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-to-jsx-runtime": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/hast-util-to-jsx-runtime/-/hast-util-to-jsx-runtime-2.3.0.tgz",
@@ -4315,6 +4422,21 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-to-text": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/hast-util-to-text/-/hast-util-to-text-4.0.2.tgz",
"integrity": "sha512-KK6y/BN8lbaq654j7JgBydev7wuNMcID54lkRav1P0CaE1e47P72AWWPiGKXTJU271ooYzcvTAn/Zt0REnvc7A==",
"dependencies": {
"@types/hast": "^3.0.0",
"@types/unist": "^3.0.0",
"hast-util-is-element": "^3.0.0",
"unist-util-find-after": "^5.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hast-util-whitespace": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-3.0.0.tgz",
@@ -4327,6 +4449,22 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/hastscript": {
"version": "8.0.0",
"resolved": "https://registry.npmjs.org/hastscript/-/hastscript-8.0.0.tgz",
"integrity": "sha512-dMOtzCEd3ABUeSIISmrETiKuyydk1w0pa+gE/uormcTpSYuaNJPbX1NU3JLyscSLjwAQM8bWMhhIlnCqnRvDTw==",
"dependencies": {
"@types/hast": "^3.0.0",
"comma-separated-tokens": "^2.0.0",
"hast-util-parse-selector": "^4.0.0",
"property-information": "^6.0.0",
"space-separated-tokens": "^2.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/hoist-non-react-statics": {
"version": "3.3.2",
"resolved": "https://registry.npmjs.org/hoist-non-react-statics/-/hoist-non-react-statics-3.3.2.tgz",
@@ -5310,6 +5448,24 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/mdast-util-math": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/mdast-util-math/-/mdast-util-math-3.0.0.tgz",
"integrity": "sha512-Tl9GBNeG/AhJnQM221bJR2HPvLOSnLE/T9cJI9tlc6zwQk2nPk/4f0cHkOdEixQPC/j8UtKDdITswvLAy1OZ1w==",
"dependencies": {
"@types/hast": "^3.0.0",
"@types/mdast": "^4.0.0",
"devlop": "^1.0.0",
"longest-streak": "^3.0.0",
"mdast-util-from-markdown": "^2.0.0",
"mdast-util-to-markdown": "^2.1.0",
"unist-util-remove-position": "^5.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/mdast-util-mdx-expression": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/mdast-util-mdx-expression/-/mdast-util-mdx-expression-2.0.0.tgz",
@@ -5627,6 +5783,24 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/micromark-extension-math": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/micromark-extension-math/-/micromark-extension-math-3.0.0.tgz",
"integrity": "sha512-iJ2Q28vBoEovLN5o3GO12CpqorQRYDPT+p4zW50tGwTfJB+iv/VnB6Ini+gqa24K97DwptMBBIvVX6Bjk49oyQ==",
"dependencies": {
"@types/katex": "^0.16.0",
"devlop": "^1.0.0",
"katex": "^0.16.0",
"micromark-factory-space": "^2.0.0",
"micromark-util-character": "^2.0.0",
"micromark-util-symbol": "^2.0.0",
"micromark-util-types": "^2.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/micromark-factory-destination": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/micromark-factory-destination/-/micromark-factory-destination-2.0.0.tgz",
@@ -5931,9 +6105,9 @@
}
},
"node_modules/micromark-util-subtokenize": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/micromark-util-subtokenize/-/micromark-util-subtokenize-2.0.0.tgz",
"integrity": "sha512-vc93L1t+gpR3p8jxeVdaYlbV2jTYteDje19rNSS/H5dlhxUYll5Fy6vJ2cDwP8RnsXi818yGty1ayP55y3W6fg==",
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/micromark-util-subtokenize/-/micromark-util-subtokenize-2.0.1.tgz",
"integrity": "sha512-jZNtiFl/1aY73yS3UGQkutD0UbhTt68qnRpw2Pifmz5wV9h8gOVsN70v+Lq/f1rKaU/W8pxRe8y8Q9FX1AOe1Q==",
"funding": [
{
"type": "GitHub Sponsors",
@@ -6467,6 +6641,17 @@
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/parse5": {
"version": "7.1.2",
"resolved": "https://registry.npmjs.org/parse5/-/parse5-7.1.2.tgz",
"integrity": "sha512-Czj1WaSVpaoj0wbhMzLmWD69anp2WH7FXMB9n1Sy8/ZFF9jolSQVMu1Ij5WIyGmcBmhk7EOndpO4mIpihVqAXw==",
"dependencies": {
"entities": "^4.4.0"
},
"funding": {
"url": "https://github.com/inikulin/parse5?sponsor=1"
}
},
"node_modules/path-exists": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/path-exists/-/path-exists-4.0.0.tgz",
@@ -6523,25 +6708,25 @@
"node": ">=8"
}
},
"node_modules/path2d-polyfill": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/path2d-polyfill/-/path2d-polyfill-2.0.1.tgz",
"integrity": "sha512-ad/3bsalbbWhmBo0D6FZ4RNMwsLsPpL6gnvhuSaU5Vm7b06Kr5ubSltQQ0T7YKsiJQO+g22zJ4dJKNTXIyOXtA==",
"node_modules/path2d": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/path2d/-/path2d-0.1.2.tgz",
"integrity": "sha512-LW++2uxgHNL/FANhgGTPo/yDDQcgsVbKotwIVbpTgTBgRlKUpjOpjp3s3+KjG4OWCQ/r6z+WLDljH1/fC03PWw==",
"optional": true,
"engines": {
"node": ">=8"
"node": ">=6"
}
},
"node_modules/pdfjs-dist": {
"version": "4.0.379",
"resolved": "https://registry.npmjs.org/pdfjs-dist/-/pdfjs-dist-4.0.379.tgz",
"integrity": "sha512-6H0Gv1nna+wmrr3CakaKlZ4rbrL8hvGIFAgg4YcoFuGC0HC4B2DVjXEGTFjJEjLlf8nYi3C3/MYRcM5bNx0elA==",
"version": "4.1.392",
"resolved": "https://registry.npmjs.org/pdfjs-dist/-/pdfjs-dist-4.1.392.tgz",
"integrity": "sha512-fUV14+CG81uDLjgZ2Nmy35GvJsLIekotJb2VhXAoUfMCrWHhQtPJbqryUuevAdSHyEiAdr675ULikoD087+lMg==",
"engines": {
"node": ">=18"
},
"optionalDependencies": {
"canvas": "^2.11.2",
"path2d-polyfill": "^2.0.1"
"path2d": "^0.1.2"
}
},
"node_modules/picocolors": {
@@ -6817,9 +7002,9 @@
}
},
"node_modules/react-player": {
"version": "2.15.1",
"resolved": "https://registry.npmjs.org/react-player/-/react-player-2.15.1.tgz",
"integrity": "sha512-ni1XFuYZuhIKKdeFII+KRLmIPcvCYlyXvtSMhNOgssdfnSovmakBtBTW2bxowPvmpKy5BTR4jC4CF79ucgHT+g==",
"version": "2.16.0",
"resolved": "https://registry.npmjs.org/react-player/-/react-player-2.16.0.tgz",
"integrity": "sha512-mAIPHfioD7yxO0GNYVFD1303QFtI3lyyQZLY229UEAp/a10cSW+hPcakg0Keq8uWJxT2OiT/4Gt+Lc9bD6bJmQ==",
"dependencies": {
"deepmerge": "^4.0.0",
"load-script": "^1.0.0",
@@ -6861,9 +7046,9 @@
"integrity": "sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w=="
},
"node_modules/react-resizable-panels": {
"version": "2.0.16",
"resolved": "https://registry.npmjs.org/react-resizable-panels/-/react-resizable-panels-2.0.16.tgz",
"integrity": "sha512-UrnxmTZaTnbCl/xIOX38ig35RicqGfLuqt2x5fytpNlQvCRuxyXZwIBEhmF+pmrEGxfajyXFBoCplNxLvhF0CQ==",
"version": "2.0.18",
"resolved": "https://registry.npmjs.org/react-resizable-panels/-/react-resizable-panels-2.0.18.tgz",
"integrity": "sha512-rKagCW6C8tTjWRq5jNsASsi4TB2a+IixL9++0G1+kPgAgvULPVf7kE0VbHysC3wdvcGYMa70O46C9YpG7CCkeA==",
"peerDependencies": {
"react": "^16.14.0 || ^17.0.0 || ^18.0.0",
"react-dom": "^16.14.0 || ^17.0.0 || ^18.0.0"
@@ -6964,6 +7149,24 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/rehype-katex": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/rehype-katex/-/rehype-katex-7.0.0.tgz",
"integrity": "sha512-h8FPkGE00r2XKU+/acgqwWUlyzve1IiOKwsEkg4pDL3k48PiE0Pt+/uLtVHDVkN1yA4iurZN6UES8ivHVEQV6Q==",
"dependencies": {
"@types/hast": "^3.0.0",
"@types/katex": "^0.16.0",
"hast-util-from-html-isomorphic": "^2.0.0",
"hast-util-to-text": "^4.0.0",
"katex": "^0.16.0",
"unist-util-visit-parents": "^6.0.0",
"vfile": "^6.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/remark-gfm": {
"version": "4.0.0",
"resolved": "https://registry.npmjs.org/remark-gfm/-/remark-gfm-4.0.0.tgz",
@@ -6981,6 +7184,21 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/remark-math": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/remark-math/-/remark-math-6.0.0.tgz",
"integrity": "sha512-MMqgnP74Igy+S3WwnhQ7kqGlEerTETXMvJhrUzDikVZ2/uogJCb+WHUg97hK9/jcfc0dkD73s3LN8zU49cTEtA==",
"dependencies": {
"@types/mdast": "^4.0.0",
"mdast-util-math": "^3.0.0",
"micromark-extension-math": "^3.0.0",
"unified": "^11.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/remark-parse": {
"version": "11.0.0",
"resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-11.0.0.tgz",
@@ -7824,9 +8042,9 @@
}
},
"node_modules/tiktoken": {
"version": "1.0.13",
"resolved": "https://registry.npmjs.org/tiktoken/-/tiktoken-1.0.13.tgz",
"integrity": "sha512-JaL9ZnvTbGFMDIBeGdVkLt4qWTeCPw+n7Ock+wceAGRenuHA6nOOvMJFliNDyXsjg2osGKJWsXtO2xc74VxyDw=="
"version": "1.0.14",
"resolved": "https://registry.npmjs.org/tiktoken/-/tiktoken-1.0.14.tgz",
"integrity": "sha512-g5zd5r/DoH8Kw0fiYbYpVhb6WO8BHO1unXqmBBWKwoT17HwSounnDtMDFUKm2Pko8U47sjQarOe+9aUrnqmmTg=="
},
"node_modules/tiny-invariant": {
"version": "1.3.3",
@@ -8011,9 +8229,9 @@
}
},
"node_modules/typescript": {
"version": "5.4.4",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.4.4.tgz",
"integrity": "sha512-dGE2Vv8cpVvw28v8HCPqyb08EzbBURxDpuhJvTrusShUfGnhHBafDsLdS1EhhxyL6BJQE+2cT3dDPAv+MQ6oLw==",
"version": "5.4.5",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.4.5.tgz",
"integrity": "sha512-vcI4UpRgg81oIRUFwR0WSIHKt11nJ7SAVlYNIu+QpqeyXP+gpQJy/Z4+F0aGxSE4MqwjyXvW/TzgkLAx2AGHwQ==",
"devOptional": true,
"bin": {
"tsc": "bin/tsc",
@@ -8062,6 +8280,19 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/unist-util-find-after": {
"version": "5.0.0",
"resolved": "https://registry.npmjs.org/unist-util-find-after/-/unist-util-find-after-5.0.0.tgz",
"integrity": "sha512-amQa0Ep2m6hE2g72AugUItjbuM8X8cGQnFoHk0pGfrFeT9GZhzN5SW8nRsiGKK7Aif4CrACPENkA6P/Lw6fHGQ==",
"dependencies": {
"@types/unist": "^3.0.0",
"unist-util-is": "^6.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/unist-util-is": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-6.0.0.tgz",
@@ -8223,6 +8454,19 @@
"url": "https://opencollective.com/unified"
}
},
"node_modules/vfile-location": {
"version": "5.0.2",
"resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-5.0.2.tgz",
"integrity": "sha512-NXPYyxyBSH7zB5U6+3uDdd6Nybz6o6/od9rk8bp9H8GR3L+cm/fC0uUTbqBmUTnMCUDslAGBOIKNfvvb+gGlDg==",
"dependencies": {
"@types/unist": "^3.0.0",
"vfile": "^6.0.0"
},
"funding": {
"type": "opencollective",
"url": "https://opencollective.com/unified"
}
},
"node_modules/vfile-message": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-4.0.2.tgz",
@@ -8241,6 +8485,15 @@
"resolved": "https://registry.npmjs.org/wasm-feature-detect/-/wasm-feature-detect-1.6.1.tgz",
"integrity": "sha512-R1i9ED8UlLu/foILNB1ck9XS63vdtqU/tP1MCugVekETp/ySCrBZRk5I/zI67cI1wlQYeSonNm1PLjDHZDNg6g=="
},
"node_modules/web-namespaces": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-2.0.1.tgz",
"integrity": "sha512-bKr1DkiNa2krS7qxNtdrtHAmzuYGFQLiQ13TsorsdT6ULTkPLKuu5+GsFpDlg6JFjUTwX2DyhMPG2be8uPrqsQ==",
"funding": {
"type": "github",
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/webidl-conversions": {
"version": "3.0.1",
"resolved": "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-3.0.1.tgz",
@@ -8588,9 +8841,9 @@
}
},
"node_modules/zod": {
"version": "3.22.4",
"resolved": "https://registry.npmjs.org/zod/-/zod-3.22.4.tgz",
"integrity": "sha512-iC+8Io04lddc+mVqQ9AZ7OQ2MrUKGN+oIQyq1vemgt46jwCwLfhq7/pwnBnNXXXZb8VTVLKwp9EDkx+ryxIWmg==",
"version": "3.23.0",
"resolved": "https://registry.npmjs.org/zod/-/zod-3.23.0.tgz",
"integrity": "sha512-OFLT+LTocvabn6q76BTwVB0hExEBS0IduTr3cqZyMqEDbOnYmcU+y0tUAYbND4uwclpBGi4I4UUBGzylWpjLGA==",
"funding": {
"url": "https://github.com/sponsors/colinhacks"
}
+16 -14
@@ -1,6 +1,6 @@
{
"name": "big-agi",
"version": "1.15.1",
"version": "1.16.0",
"private": true,
"author": "Enrico Ros <enrico.ros@gmail.com>",
"repository": "https://github.com/enricoros/big-agi",
@@ -24,8 +24,8 @@
"@emotion/styled": "^11.11.5",
"@mui/icons-material": "^5.15.15",
"@mui/joy": "^5.0.0-beta.32",
"@next/bundle-analyzer": "^14.1.4",
"@next/third-parties": "^14.2.0-canary.60",
"@next/bundle-analyzer": "^14.2.2",
"@next/third-parties": "^14.2.2",
"@prisma/client": "^5.12.1",
"@sanity/diff-match-patch": "^3.1.1",
"@t3-oss/env-nextjs": "^0.9.2",
@@ -39,9 +39,9 @@
"browser-fs-access": "^0.35.0",
"eventsource-parser": "^1.1.2",
"idb-keyval": "^6.2.1",
"next": "^14.1.4",
"next": "~14.1.4",
"nprogress": "^0.2.0",
"pdfjs-dist": "4.0.379",
"pdfjs-dist": "4.1.392",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"react": "^18.2.0",
@@ -50,36 +50,38 @@
"react-dom": "^18.2.0",
"react-katex": "^3.0.1",
"react-markdown": "^9.0.1",
"react-player": "^2.15.1",
"react-resizable-panels": "^2.0.16",
"react-player": "^2.16.0",
"react-resizable-panels": "^2.0.18",
"react-timeago": "^7.2.0",
"rehype-katex": "^7.0.0",
"remark-gfm": "^4.0.0",
"remark-math": "^6.0.0",
"sharp": "^0.33.3",
"superjson": "^2.2.1",
"tesseract.js": "^5.0.5",
"tiktoken": "^1.0.13",
"tiktoken": "^1.0.14",
"uuid": "^9.0.1",
"zod": "^3.22.4",
"zod": "^3.23.0",
"zustand": "^4.5.2"
},
"devDependencies": {
"@cloudflare/puppeteer": "0.0.5",
"@types/node": "^20.12.5",
"@types/node": "^20.12.7",
"@types/nprogress": "^0.2.3",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.3",
"@types/react": "^18.2.74",
"@types/react": "^18.2.79",
"@types/react-beautiful-dnd": "^13.1.8",
"@types/react-csv": "^1.1.10",
"@types/react-dom": "^18.2.24",
"@types/react-dom": "^18.2.25",
"@types/react-katex": "^3.0.4",
"@types/react-timeago": "^4.1.7",
"@types/uuid": "^9.0.8",
"eslint": "^8.57.0",
"eslint-config-next": "^14.1.4",
"eslint-config-next": "14.2.2",
"prettier": "^3.2.5",
"prisma": "^5.12.1",
"typescript": "^5.4.4"
"typescript": "^5.4.5"
},
"engines": {
"node": "^20.0.0 || ^18.0.0"
Binary file not shown (image changed; new size: 248 KiB)

+9 -2
@@ -11,7 +11,7 @@
"utilities"
],
"display": "standalone",
"start_url": "/",
"start_url": "/?source=pwa",
"scope": "/",
"icons": [
{
@@ -51,5 +51,12 @@
"text": "text",
"url": "url"
}
}
},
"shortcuts": [
{
"name": "Call",
"url": "/call",
"description": "Call a Persona"
}
]
}
File diff suppressed because one or more lines are too long
+13 -8
@@ -30,8 +30,16 @@ export function AppBeam() {
// state
const [showDebug, setShowDebug] = React.useState(false);
const conversation = React.useRef<DConversation>(initTestConversation());
const beamStoreApi = React.useRef(initTestBeamStore(conversation.current.messages)).current;
const [conversation, setConversation] = React.useState<DConversation>(() => initTestConversation());
const [beamStoreApi] = React.useState(() => createBeamVanillaStore());
// reinit the beam store if the conversation changes
React.useEffect(() => {
initTestBeamStore(conversation.messages, beamStoreApi);
}, [beamStoreApi, conversation]);
// external state
const isMobile = useIsMobile();
@@ -44,7 +52,7 @@ export function AppBeam() {
const handleClose = React.useCallback(() => {
beamStoreApi.getState().terminate();
beamStoreApi.getState().terminateKeepingSettings();
}, [beamStoreApi]);
@@ -56,10 +64,7 @@ export function AppBeam() {
</Button>
{/* 'open' */}
<Button size='sm' variant='plain' color='neutral' onClick={() => {
conversation.current = initTestConversation();
initTestBeamStore(conversation.current.messages, beamStoreApi);
}}>
<Button size='sm' variant='plain' color='neutral' onClick={() => setConversation(initTestConversation())}>
.open
</Button>
@@ -67,7 +72,7 @@ export function AppBeam() {
<Button size='sm' variant='plain' color='neutral' onClick={handleClose}>
.close
</Button>
</>, [beamStoreApi, handleClose, showDebug]), null, 'AppBeam');
</>, [handleClose, showDebug]), null, 'AppBeam');
return (
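The AppBeam change above swaps mutable refs for lazily-initialized state plus an effect that re-seeds the store when the conversation is replaced. A self-contained sketch of that pattern, with hypothetical names and a stand-in store (only the structure mirrors the diff):

```tsx
import * as React from 'react';

// Lazy useState initializers run the factory exactly once (useState(factory()) would
// re-run the factory on every render); the effect re-seeds the store whenever
// setConversation swaps in a new conversation object.
type Conversation = { id: string; messages: string[] };
type Store = { seed: (messages: string[]) => void };

function createStore(): Store {
  const state = { messages: [] as string[] };
  return { seed: (messages) => { state.messages = [...messages]; } };
}

function useConversationStore(makeConversation: () => Conversation) {
  const [conversation, setConversation] = React.useState<Conversation>(makeConversation);
  const [store] = React.useState<Store>(createStore);

  React.useEffect(() => {
    store.seed(conversation.messages); // re-initialize whenever the conversation changes
  }, [store, conversation]);

  return { conversation, setConversation, store };
}
```

In the diff itself, the `.open` button then only calls `setConversation(initTestConversation())`, and the effect takes care of re-initializing the beam store.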
+91 -184
@@ -1,8 +1,10 @@
import * as React from 'react';
import { Panel, PanelGroup, PanelResizeHandle } from 'react-resizable-panels';
import type { SxProps } from '@mui/joy/styles/types';
import { useTheme } from '@mui/joy';
import { DEV_MODE_SETTINGS } from '../settings-modal/UxLabsSettings';
import { DiagramConfig, DiagramsModal } from '~/modules/aifn/digrams/DiagramsModal';
import { FlattenerModal } from '~/modules/aifn/flatten/FlattenerModal';
import { TradeConfig, TradeModal } from '~/modules/trade/TradeModal';
@@ -17,17 +19,17 @@ import { ConfirmationModal } from '~/common/components/ConfirmationModal';
import { ConversationsManager } from '~/common/chats/ConversationsManager';
import { GlobalShortcutItem, ShortcutKeyName, useGlobalShortcuts } from '~/common/components/useGlobalShortcut';
import { PanelResizeInset } from '~/common/components/panes/GoodPanelResizeHandler';
import { PreferencesTab, useOptimaLayout, usePluggableOptimaLayout } from '~/common/layout/optima/useOptimaLayout';
import { ScrollToBottom } from '~/common/scroll-to-bottom/ScrollToBottom';
import { ScrollToBottomButton } from '~/common/scroll-to-bottom/ScrollToBottomButton';
import { addSnackbar, removeSnackbar } from '~/common/components/useSnackbarsStore';
import { createDMessage, DConversationId, DMessage, getConversation, getConversationSystemPurposeId, useConversation } from '~/common/state/store-chats';
import { getUXLabsHighPerformance, useUXLabsStore } from '~/common/state/store-ux-labs';
import { createDMessage, DConversationId, DMessage, DMessageMetadata, getConversation, getConversationSystemPurposeId, useConversation } from '~/common/state/store-chats';
import { themeBgAppChatComposer } from '~/common/app.theme';
import { useFolderStore } from '~/common/state/store-folders';
import { useIsMobile } from '~/common/components/useMatchMedia';
import { useOptimaLayout, usePluggableOptimaLayout } from '~/common/layout/optima/useOptimaLayout';
import { useRouterQuery } from '~/common/app.routes';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import type { ComposerOutputMultiPart } from './components/composer/composer.types';
import { ChatBarAltBeam } from './components/ChatBarAltBeam';
@@ -38,14 +40,9 @@ import { ChatDrawerMemo } from './components/ChatDrawer';
import { ChatMessageList } from './components/ChatMessageList';
import { ChatPageMenuItems } from './components/ChatPageMenuItems';
import { Composer } from './components/composer/Composer';
import { getInstantAppChatPanesCount, usePanesManager } from './components/panes/usePanesManager';
import { usePanesManager } from './components/panes/usePanesManager';
import { DEV_MODE_SETTINGS } from '../settings-modal/UxLabsSettings';
import { extractChatCommand, findAllChatCommands } from './commands/commands.registry';
import { runAssistantUpdatingState } from './editors/chat-stream';
import { runBrowseGetPageUpdatingState } from './editors/browse-load';
import { runImageGenerationUpdatingState } from './editors/image-generate';
import { runReActUpdatingState } from './editors/react-tangent';
import { _handleExecute } from './editors/_handleExecute';
// what to say when a chat is new and has no title
@@ -68,6 +65,19 @@ export interface AppChatIntent {
}
const composerOpenSx: SxProps = {
zIndex: 21, // just to allocate a surface, and potentially have a shadow
backgroundColor: themeBgAppChatComposer,
borderTop: `1px solid`,
borderTopColor: 'divider',
p: { xs: 1, md: 2 },
};
const composerClosedSx: SxProps = {
display: 'none',
};
export function AppChat() {
// state
@@ -91,7 +101,7 @@ export function AppChat() {
const showAltTitleBar = useUXLabsStore(state => DEV_MODE_SETTINGS && state.labsChatBarAlt === 'title');
const { openLlmOptions } = useOptimaLayout();
const { openLlmOptions, openModelsSetup, openPreferencesTab } = useOptimaLayout();
const { chatLLM } = useChatLLM();
@@ -187,116 +197,20 @@ export function AppChat() {
// Execution
const _handleExecute = React.useCallback(async (chatModeId: ChatModeId, conversationId: DConversationId, history: DMessage[]): Promise<void> => {
const chatLLMId = getChatLLMId();
if (!chatModeId || !conversationId || !chatLLMId) return;
const handleExecuteAndOutcome = React.useCallback(async (chatModeId: ChatModeId, conversationId: DConversationId, history: DMessage[]) => {
const outcome = await _handleExecute(chatModeId, conversationId, history);
if (outcome === 'err-no-chatllm')
openModelsSetup();
else if (outcome === 'err-t2i-unconfigured')
openPreferencesTab(PreferencesTab.Draw);
else if (outcome === 'err-no-persona')
addSnackbar({ key: 'chat-no-persona', message: 'No persona selected.', type: 'issue' });
else if (outcome === 'err-no-conversation')
addSnackbar({ key: 'chat-no-conversation', message: 'No active conversation.', type: 'issue' });
return outcome === true;
}, [openModelsSetup, openPreferencesTab]);
// Update the system message from the active persona to the history
// NOTE: this does NOT call setMessages anymore (optimization). make sure to:
// 1. all the callers need to pass a new array
// 2. all the exit points need to call setMessages
const cHandler = ConversationsManager.getHandler(conversationId);
cHandler.inlineUpdatePurposeInHistory(history, chatLLMId);
// Valid /commands are intercepted here, and override chat modes, generally for mechanics or sidebars
const lastMessage = history.length > 0 ? history[history.length - 1] : null;
if (lastMessage?.role === 'user') {
const chatCommand = extractChatCommand(lastMessage.text)[0];
if (chatCommand && chatCommand.type === 'cmd') {
switch (chatCommand.providerId) {
case 'ass-browse':
cHandler.messagesReplace(history); // show command
return await runBrowseGetPageUpdatingState(cHandler, chatCommand.params);
case 'ass-t2i':
cHandler.messagesReplace(history); // show command
return await runImageGenerationUpdatingState(cHandler, chatCommand.params);
case 'ass-react':
cHandler.messagesReplace(history); // show command
return await runReActUpdatingState(cHandler, chatCommand.params, chatLLMId);
case 'chat-alter':
// /clear
if (chatCommand.command === '/clear') {
if (chatCommand.params === 'all')
return cHandler.messagesReplace([]);
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Issue: this command requires the \'all\' parameter to confirm the operation.', undefined, 'issue', false);
return;
}
// /assistant, /system
Object.assign(lastMessage, {
role: chatCommand.command.startsWith('/s') ? 'system' : chatCommand.command.startsWith('/a') ? 'assistant' : 'user',
sender: 'Bot',
text: chatCommand.params || '',
} satisfies Partial<DMessage>);
return cHandler.messagesReplace(history);
case 'cmd-help':
const chatCommandsText = findAllChatCommands()
.map(cmd => ` - ${cmd.primary}` + (cmd.alternatives?.length ? ` (${cmd.alternatives.join(', ')})` : '') + `: ${cmd.description}`)
.join('\n');
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Available Chat Commands:\n' + chatCommandsText, undefined, 'help', false);
return;
case 'mode-beam':
if (chatCommand.isError)
return cHandler.messagesReplace(history);
// remove '/beam ', as we want to be a user chat message
Object.assign(lastMessage, { text: chatCommand.params || '' });
cHandler.messagesReplace(history);
return ConversationsManager.getHandler(conversationId).beamInvoke(history, [], null);
default:
return cHandler.messagesReplace([...history, createDMessage('assistant', 'This command is not supported.')]);
}
}
}
// get the system purpose (note: we don't react to it, or it would invalidate half UI components..)
if (!getConversationSystemPurposeId(conversationId)) {
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Issue: no Persona selected.', undefined, 'issue', false);
return;
}
// synchronous long-duration tasks, which update the state as they go
switch (chatModeId) {
case 'generate-text':
cHandler.messagesReplace(history);
return await runAssistantUpdatingState(conversationId, history, chatLLMId, getUXLabsHighPerformance() ? 0 : getInstantAppChatPanesCount());
case 'generate-text-beam':
cHandler.messagesReplace(history);
return cHandler.beamInvoke(history, [], null);
case 'append-user':
return cHandler.messagesReplace(history);
case 'generate-image':
if (!lastMessage?.text) break;
// also add a 'fake' user message with the '/draw' command
cHandler.messagesReplace(history.map(message => (message.id !== lastMessage.id) ? message : {
...message,
text: `/draw ${lastMessage.text}`,
}));
return await runImageGenerationUpdatingState(cHandler, lastMessage.text);
case 'generate-react':
if (!lastMessage?.text) break;
cHandler.messagesReplace(history);
return await runReActUpdatingState(cHandler, lastMessage.text, chatLLMId);
}
// ISSUE: if we're here, it means we couldn't do the job, at least sync the history
console.log('Chat execute: issue running', chatModeId, conversationId, lastMessage);
cHandler.messagesReplace(history);
}, []);
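
The long inline handler above is replaced by a call into the extracted `editors/_handleExecute`, with `handleExecuteAndOutcome` mapping its result onto UI actions. The extracted editor itself is not part of this diff; a rough sketch of the contract it appears to expose, inferred only from the outcome codes handled earlier (the exact union is an assumption, and the parameter types are those already imported in AppChat):

```ts
// Inferred, not copied: only the outcomes visited by handleExecuteAndOutcome are listed.
type ExecuteOutcome =
  | true                     // executed (or enqueued) successfully
  | 'err-no-chatllm'         // no chat LLM configured  -> open the Models setup
  | 'err-t2i-unconfigured'   // text-to-image missing   -> open the Draw preferences tab
  | 'err-no-persona'         // no persona selected     -> snackbar
  | 'err-no-conversation';   // conversation not found  -> snackbar

declare function _handleExecute(
  chatModeId: ChatModeId,
  conversationId: DConversationId,
  history: DMessage[],
): Promise<ExecuteOutcome>;
```
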
const handleComposerAction = React.useCallback((chatModeId: ChatModeId, conversationId: DConversationId, multiPartMessage: ComposerOutputMultiPart): boolean => {
const handleComposerAction = React.useCallback((conversationId: DConversationId, chatModeId: ChatModeId, multiPartMessage: ComposerOutputMultiPart, metadata?: DMessageMetadata): boolean => {
// validate inputs
if (multiPartMessage.length !== 1 || multiPartMessage[0].type !== 'text-block') {
addSnackbar({
@@ -312,35 +226,38 @@ export function AppChat() {
const userText = multiPartMessage[0].text;
// multicast: send the message to all the panes
const uniqueIds = new Set([conversationId]);
const uniqueConversationIds = new Set([conversationId]);
if (willMulticast)
chatPanes.forEach(pane => pane.conversationId && uniqueIds.add(pane.conversationId));
chatPanes.forEach(pane => pane.conversationId && uniqueConversationIds.add(pane.conversationId));
// we loop to handle both the normal and multicast modes
let enqueued = false;
for (const _cId of uniqueIds) {
const _conversation = getConversation(_cId);
if (_conversation) {
// start execution fire/forget
void _handleExecute(chatModeId, _cId, [..._conversation.messages, createDMessage('user', userText)]);
enqueued = true;
}
}
return enqueued;
}, [chatPanes, willMulticast, _handleExecute]);
let enqueuedAny = false;
for (const _cId of uniqueConversationIds) {
const history = getConversation(_cId)?.messages;
if (!history) continue;
const handleConversationExecuteHistory = React.useCallback(async (conversationId: DConversationId, history: DMessage[]): Promise<void> => {
await _handleExecute('generate-text', conversationId, history);
}, [_handleExecute]);
const newUserMessage = createDMessage('user', userText);
if (metadata) newUserMessage.metadata = metadata;
// fire/forget
void handleExecuteAndOutcome(chatModeId, _cId, [...history, newUserMessage]);
enqueuedAny = true;
}
return enqueuedAny;
}, [chatPanes, handleExecuteAndOutcome, willMulticast]);
const handleConversationExecuteHistory = React.useCallback(async (conversationId: DConversationId, history: DMessage[]) => {
await handleExecuteAndOutcome('generate-text', conversationId, history);
}, [handleExecuteAndOutcome]);
const handleMessageRegenerateLastInFocusedPane = React.useCallback(async () => {
const focusedConversation = getConversation(focusedPaneConversationId);
if (focusedConversation?.messages?.length) {
const lastMessage = focusedConversation.messages[focusedConversation.messages.length - 1];
const history = lastMessage.role === 'assistant' ? focusedConversation.messages.slice(0, -1) : [...focusedConversation.messages];
return await _handleExecute('generate-text', focusedConversation.id, history);
await handleExecuteAndOutcome('generate-text', focusedConversation.id, history);
}
}, [_handleExecute, focusedPaneConversationId]);
}, [focusedPaneConversationId, handleExecuteAndOutcome]);
const handleMessageBeamLastInFocusedPane = React.useCallback(async () => {
// Ctrl + Shift + B
@@ -356,16 +273,16 @@ export function AppChat() {
const handleTextDiagram = React.useCallback((diagramConfig: DiagramConfig | null) => setDiagramConfig(diagramConfig), []);
const handleTextImagine = React.useCallback(async (conversationId: DConversationId, messageText: string): Promise<void> => {
const handleTextImagine = React.useCallback(async (conversationId: DConversationId, messageText: string) => {
const conversation = getConversation(conversationId);
if (!conversation)
return;
const imaginedPrompt = await imaginePromptFromText(messageText) || 'An error sign.';
return await _handleExecute('generate-image', conversationId, [
await handleExecuteAndOutcome('generate-image', conversationId, [
...conversation.messages,
createDMessage('user', imaginedPrompt),
]);
}, [_handleExecute]);
}, [handleExecuteAndOutcome]);
const handleTextSpeak = React.useCallback(async (text: string): Promise<void> => {
await speakText(text);
@@ -560,8 +477,8 @@ export function AppChat() {
const _paneIsFocused = idx === focusedPaneIndex;
const _paneConversationId = pane.conversationId;
const _paneChatHandler = chatHandlers[idx] ?? null;
const _paneChatBeamStore = beamsStores[idx] ?? null;
const _paneChatBeamIsOpen = !!beamsOpens?.[idx];
const _paneBeamStore = beamsStores[idx] ?? null;
const _paneBeamIsOpen = !!beamsOpens?.[idx] && !!_paneBeamStore;
const _panesCount = chatPanes.length;
const _keyAndId = `chat-pane-${pane.paneId}`;
const _sepId = `sep-pane-${idx}`;
@@ -609,47 +526,45 @@ export function AppChat() {
<ScrollToBottom
bootToBottom
stickToBottomInitial
sx={_paneChatBeamIsOpen ? { display: 'none' } : undefined}
sx={{ display: 'flex', flexDirection: 'column' }}
>
<ChatMessageList
conversationId={_paneConversationId}
conversationHandler={_paneChatHandler}
capabilityHasT2I={capabilityHasT2I}
chatLLMContextTokens={chatLLM?.contextTokens ?? null}
fitScreen={isMobile || isMultiPane}
isMessageSelectionMode={isMessageSelectionMode}
setIsMessageSelectionMode={setIsMessageSelectionMode}
onConversationBranch={handleConversationBranch}
onConversationExecuteHistory={handleConversationExecuteHistory}
onTextDiagram={handleTextDiagram}
onTextImagine={handleTextImagine}
onTextSpeak={handleTextSpeak}
sx={{
minHeight: '100%', // ensures filling of the blank space on newer chats
}}
/>
{!_paneBeamIsOpen && (
<ChatMessageList
conversationId={_paneConversationId}
conversationHandler={_paneChatHandler}
capabilityHasT2I={capabilityHasT2I}
chatLLMContextTokens={chatLLM?.contextTokens ?? null}
fitScreen={isMobile || isMultiPane}
isMessageSelectionMode={isMessageSelectionMode}
setIsMessageSelectionMode={setIsMessageSelectionMode}
onConversationBranch={handleConversationBranch}
onConversationExecuteHistory={handleConversationExecuteHistory}
onTextDiagram={handleTextDiagram}
onTextImagine={handleTextImagine}
onTextSpeak={handleTextSpeak}
sx={{
flexGrow: 1,
}}
/>
)}
{/*<Ephemerals*/}
{/* conversationId={_paneConversationId}*/}
{/* sx={{*/}
{/* // TODO: Fixme post panels?*/}
{/* // flexGrow: 0.1,*/}
{/* flexShrink: 0.5,*/}
{/* overflowY: 'auto',*/}
{/* minHeight: 64,*/}
{/* }}*/}
{/*/>*/}
{_paneBeamIsOpen && (
<ChatBeamWrapper
beamStore={_paneBeamStore}
isMobile={isMobile}
inlineSx={{
flexGrow: 1,
// minHeight: 'calc(100vh - 69px - var(--AGI-Nav-width))',
}}
/>
)}
{/* Visibility and actions are handled via Context */}
<ScrollToBottomButton />
</ScrollToBottom>
{(_paneChatBeamIsOpen && !!_paneChatBeamStore) && (
<ChatBeamWrapper beamStore={_paneChatBeamStore} isMobile={isMobile} />
)}
</Panel>
{/* Panel Separators & Resizers */}
@@ -675,15 +590,7 @@ export function AppChat() {
onAction={handleComposerAction}
onTextImagine={handleTextImagine}
setIsMulticast={setIsComposerMulticast}
sx={beamOpenStoreInFocusedPane ? {
display: 'none',
} : {
zIndex: 21, // just to allocate a surface, and potentially have a shadow
backgroundColor: themeBgAppChatComposer,
borderTop: `1px solid`,
borderTopColor: 'divider',
p: { xs: 1, md: 2 },
}}
sx={beamOpenStoreInFocusedPane ? composerClosedSx : composerOpenSx}
/>
{/* Diagrams */}
+2 -3
@@ -1,5 +1,4 @@
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import type { ICommandsProvider } from './ICommandsProvider';
@@ -7,11 +6,11 @@ export const CommandsBeam: ICommandsProvider = {
id: 'mode-beam',
rank: 9,
getCommands: () => useUXLabsStore.getState().labsBeam ? [{
getCommands: () => [{
primary: '/beam',
arguments: ['prompt'],
description: 'Combine the smarts of models',
Icon: ChatBeamIcon,
}] : [],
}],
};
+20 -16
@@ -31,7 +31,7 @@ export function ChatBarAltBeam(props: {
requiresConfirmation: store.isScattering || store.isGatheringAny || store.raysReady > 0,
// actions
setIsMaximized: store.setIsMaximized,
terminateBeam: store.terminate,
terminateBeam: store.terminateKeepingSettings,
})));
@@ -63,16 +63,7 @@ export function ChatBarAltBeam(props: {
return (
<Box sx={{ display: 'flex', gap: { xs: 1, md: 3 }, alignItems: 'center' }}>
{/* [desktop] maximize button, or a disabled spacer */}
{props.isMobile ? null : (
<GoodTooltip title='Maximize'>
<IconButton size='sm' onClick={handleMaximizeBeam}>
<FullscreenRoundedIcon />
</IconButton>
</GoodTooltip>
)}
<Box sx={{ display: 'flex', gap: { xs: 1, md: 2 }, alignItems: 'center' }}>
{/* Title & Status */}
<Typography level='title-md'>
@@ -89,11 +80,24 @@ export function ChatBarAltBeam(props: {
</Typography>
{/* Right Close Icon */}
<GoodTooltip usePlain title={<Box sx={{ p: 1, display: 'flex', flexDirection: 'column', gap: 1 }}>Close Beam Mode <KeyStroke combo='Esc' /></Box>}>
<IconButton aria-label='Close' size='sm' onClick={handleCloseBeam}>
<CloseRoundedIcon />
</IconButton>
</GoodTooltip>
<Box sx={{ display: 'flex' }}>
{/* [desktop] maximize button, or a disabled spacer */}
{!props.isMobile && (
<GoodTooltip usePlain title={<Box sx={{ p: 1 }}>Maximize</Box>}>
<IconButton size='sm' onClick={handleMaximizeBeam}>
<FullscreenRoundedIcon />
</IconButton>
</GoodTooltip>
)}
<GoodTooltip usePlain title={<Box sx={{ p: 1, display: 'flex', flexDirection: 'column', gap: 1 }}>Back to Chat <KeyStroke combo='Esc' /></Box>}>
<IconButton aria-label='Close' size='sm' onClick={handleCloseBeam}>
<CloseRoundedIcon />
</IconButton>
</GoodTooltip>
</Box>
{/* Confirmation Modal */}
+14 -7
@@ -1,16 +1,25 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Modal, ModalClose } from '@mui/joy';
import { BeamStoreApi, useBeamStore } from '~/modules/beam/store-beam.hooks';
import { BeamView } from '~/modules/beam/BeamView';
import { themeZIndexBeamView } from '~/common/app.theme';
import { ScrollToBottom } from '~/common/scroll-to-bottom/ScrollToBottom';
/*const overlaySx: SxProps = {
position: 'absolute',
inset: 0,
zIndex: themeZIndexBeamView, // stay on top of Message > Chips (:1), and Overlays (:2) - note: Desktop Drawer (:26)
}*/
export function ChatBeamWrapper(props: {
beamStore: BeamStoreApi,
isMobile: boolean,
inlineSx?: SxProps,
}) {
// state
@@ -36,16 +45,14 @@ export function ChatBeamWrapper(props: {
position: 'absolute',
inset: 0,
}}>
{beamView}
<ScrollToBottom disableAutoStick>
{beamView}
</ScrollToBottom>
<ModalClose sx={{ color: 'white', backgroundColor: 'background.surface', boxShadow: 'xs', mr: 2 }} />
</Box>
</Modal>
) : (
<Box sx={{
position: 'absolute',
inset: 0,
zIndex: themeZIndexBeamView, // stay on top of Message > Chips (:1), and Overlays (:2) - note: Desktop Drawer (:26)
}}>
<Box sx={props.inlineSx}>
{beamView}
</Box>
);
+1 -6
@@ -277,7 +277,6 @@ function ChatDrawer(props: {
<Button
// variant='outlined'
variant={disableNewButton ? undefined : 'soft'}
color='primary'
disabled={disableNewButton}
onClick={handleButtonNew}
sx={{
@@ -285,16 +284,12 @@ function ChatDrawer(props: {
justifyContent: 'flex-start',
padding: '0px 0.75rem',
// text size
fontSize: 'sm',
fontWeight: 'lg',
// style
// backgroundColor: 'background.popup',
border: '1px solid',
borderColor: 'neutral.outlinedBorder',
borderRadius: 'sm',
'--ListItemDecorator-size': 'calc(2.5rem - 1px)', // compensate for the border
// backgroundColor: 'background.popup',
// boxShadow: (disableNewButton || props.isMobile) ? 'none' : 'xs',
// transition: 'box-shadow 0.2s',
}}
+14 -6
@@ -136,6 +136,10 @@ export function ChatMessageList(props: {
}), false);
}, [conversationId, editMessage]);
const handleReplyTo = React.useCallback((_messageId: string, text: string) => {
props.conversationHandler?.getOverlayStore().getState().setReplyToText(text);
}, [props.conversationHandler]);
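
`handleReplyTo` pushes the selected text into the conversation's overlay store, from which the Composer later reads it (see the `useChatOverlayStore` read further down). The overlay store itself is not shown in this diff; a minimal vanilla-store sketch consistent with the `replyToText`/`setReplyToText` names used here, assuming one plain zustand vanilla store per conversation:

```ts
// Sketch only: the real store-chat-overlay-vanilla may hold more state.
import { createStore } from 'zustand/vanilla';

interface ChatOverlayState {
  replyToText: string | null;
  setReplyToText: (text: string | null) => void;
}

export const createChatOverlayVanillaStore = () =>
  createStore<ChatOverlayState>()((set) => ({
    replyToText: null,
    setReplyToText: (replyToText) => set({ replyToText }),
  }));
```
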
const handleTextDiagram = React.useCallback(async (messageId: string, text: string) => {
conversationId && onTextDiagram({ conversationId: conversationId, messageId, text });
}, [conversationId, onTextDiagram]);
@@ -225,12 +229,15 @@ export function ChatMessageList(props: {
return (
<List sx={{
p: 0, ...(props.sx || {}),
// this makes sure that the window is scrolled to the bottom (column-reverse)
display: 'flex',
flexDirection: 'column',
p: 0,
...(props.sx || {}),
// fix for the double-border on the last message (one by the composer, one to the bottom of the message)
// marginBottom: '-1px',
// layout
display: 'flex',
flexDirection: 'column',
}}>
{optionalTranslationWarning}
@@ -276,9 +283,10 @@ export function ChatMessageList(props: {
onMessageEdit={handleMessageEdit}
onMessageToggleUserFlag={handleMessageToggleUserFlag}
onMessageTruncate={handleMessageTruncate}
// onReplyTo={handleReplyTo}
onTextDiagram={handleTextDiagram}
onTextImagine={handleTextImagine}
onTextSpeak={handleTextSpeak}
onTextImagine={capabilityHasT2I ? handleTextImagine : undefined}
onTextSpeak={isSpeakable ? handleTextSpeak : undefined}
/>
);
@@ -7,7 +7,6 @@ import { KeyStroke, platformAwareKeystrokes } from '~/common/components/KeyStrok
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { ChatModeId } from '../../AppChat';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
interface ChatModeDescription {
@@ -63,7 +62,6 @@ export function ChatModeMenu(props: {
}) {
// external state
const labsBeam = useUXLabsStore(state => state.labsBeam);
const enterIsNewline = useUIPreferencesStore(state => state.enterIsNewline);
return (
@@ -81,7 +79,6 @@ export function ChatModeMenu(props: {
{/* ChatMode items */}
{Object.entries(ChatModeItems)
.filter(([key, _data]) => key !== 'generate-text-beam' || labsBeam)
.filter(([_key, data]) => !data.hideOnDesktop || props.isMobile)
.map(([key, data]) =>
<MenuItem key={'chat-mode-' + key} onClick={() => props.onSetChatModeId(key as ChatModeId)}>
+69 -37
@@ -1,5 +1,5 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
import { useShallow } from 'zustand/react/shallow';
import { fileOpen, FileWithHandle } from 'browser-fs-access';
import { Box, Button, ButtonGroup, Card, Dropdown, Grid, IconButton, Menu, MenuButton, MenuItem, Textarea, Tooltip, Typography } from '@mui/joy';
@@ -23,10 +23,11 @@ import type { LLMOptionsOpenAI } from '~/modules/llms/vendors/openai/openai.vend
import { useBrowseCapability } from '~/modules/browse/store-module-browsing';
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { ConversationsManager } from '~/common/chats/ConversationsManager';
import { PreferencesTab, useOptimaLayout } from '~/common/layout/optima/useOptimaLayout';
import { SpeechResult, useSpeechRecognition } from '~/common/components/useSpeechRecognition';
import { animationEnterBelow } from '~/common/util/animUtils';
import { conversationTitle, DConversationId, getConversation, useChatStore } from '~/common/state/store-chats';
import { conversationTitle, DConversationId, DMessageMetadata, getConversation, useChatStore } from '~/common/state/store-chats';
import { countModelTokens } from '~/common/util/token-counter';
import { isMacUser } from '~/common/util/pwaUtils';
import { launchAppCall } from '~/common/app.routes';
@@ -36,6 +37,7 @@ import { playSoundUrl } from '~/common/util/audioUtils';
import { supportsClipboardRead } from '~/common/util/clipboardUtils';
import { supportsScreenCapture } from '~/common/util/screenCaptureUtils';
import { useAppStateStore } from '~/common/state/store-appstate';
import { useChatOverlayStore } from '~/common/chats/store-chat-overlay-vanilla';
import { useDebouncer } from '~/common/components/useDebouncer';
import { useGlobalShortcut } from '~/common/components/useGlobalShortcut';
import { useUICounter, useUIPreferencesStore } from '~/common/state/store-ui';
@@ -48,7 +50,7 @@ import { useActileManager } from './actile/useActileManager';
import type { AttachmentId } from './attachments/store-attachments';
import { Attachments } from './attachments/Attachments';
import { getTextBlockText, useLLMAttachments } from './attachments/useLLMAttachments';
import { getSingleTextBlockText, useLLMAttachments } from './attachments/useLLMAttachments';
import { useAttachments } from './attachments/useAttachments';
import type { ComposerOutputMultiPart } from './composer.types';
@@ -63,6 +65,7 @@ import { ButtonMicMemo } from './buttons/ButtonMic';
import { ButtonMultiChatMemo } from './buttons/ButtonMultiChat';
import { ButtonOptionsDraw } from './buttons/ButtonOptionsDraw';
import { ChatModeMenu } from './ChatModeMenu';
import { ReplyToBubble } from '../message/ReplyToBubble';
import { TokenBadgeMemo } from './TokenBadge';
import { TokenProgressbarMemo } from './TokenProgressbar';
import { useComposerStartupText } from './store-composer';
@@ -98,7 +101,7 @@ export function Composer(props: {
capabilityHasT2I: boolean;
isMulticast: boolean | null;
isDeveloperMode: boolean;
onAction: (chatModeId: ChatModeId, conversationId: DConversationId, multiPartMessage: ComposerOutputMultiPart) => boolean;
onAction: (conversationId: DConversationId, chatModeId: ChatModeId, multiPartMessage: ComposerOutputMultiPart, metadata?: DMessageMetadata) => boolean;
onTextImagine: (conversationId: DConversationId, text: string) => void;
setIsMulticast: (on: boolean) => void;
sx?: SxProps;
@@ -114,11 +117,11 @@ export function Composer(props: {
// external state
const { openPreferencesTab /*, setIsFocusedMode*/ } = useOptimaLayout();
const { labsAttachScreenCapture, labsBeam, labsCameraDesktop } = useUXLabsStore(state => ({
const { labsAttachScreenCapture, labsCameraDesktop, labsShowCost } = useUXLabsStore(useShallow(state => ({
labsAttachScreenCapture: state.labsAttachScreenCapture,
labsBeam: state.labsBeam,
labsCameraDesktop: state.labsCameraDesktop,
}), shallow);
labsShowCost: state.labsShowCost,
})));
const timeToShowTips = useAppStateStore(state => state.usageCount > 2);
const { novel: explainShiftEnter, touch: touchShiftEnter } = useUICounter('composer-shift-enter');
const { novel: explainAltEnter, touch: touchAltEnter } = useUICounter('composer-alt-enter');
@@ -126,7 +129,7 @@ export function Composer(props: {
const [startupText, setStartupText] = useComposerStartupText();
const enterIsNewline = useUIPreferencesStore(state => state.enterIsNewline);
const chatMicTimeoutMs = useChatMicTimeoutMsValue();
const { assistantAbortible, systemPurposeId, tokenCount: _historyTokenCount, stopTyping } = useChatStore(state => {
const { assistantAbortible, systemPurposeId, tokenCount: _historyTokenCount, stopTyping } = useChatStore(useShallow(state => {
const conversation = state.conversations.find(_c => _c.id === props.conversationId);
return {
assistantAbortible: conversation ? !!conversation.abortController : false,
@@ -134,11 +137,18 @@ export function Composer(props: {
tokenCount: conversation ? conversation.tokenCount : 0,
stopTyping: state.stopTyping,
};
}, shallow);
}));
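
Note the recurring store-selector migration in this file (and in ChatMessage further down): the second-argument `shallow` equality form is replaced by wrapping the selector in `useShallow`. In generic terms, with a hypothetical `useSomeStore` hook:

```ts
import { shallow } from 'zustand/shallow';
import { useShallow } from 'zustand/react/shallow';

// before: equality function passed as a second argument
const sliceBefore = useSomeStore(state => ({ a: state.a, b: state.b }), shallow);

// after: selector wrapped in useShallow, no extra argument
const sliceAfter = useSomeStore(useShallow(state => ({ a: state.a, b: state.b })));
```
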
const { inComposer: browsingInComposer } = useBrowseCapability();
const { attachAppendClipboardItems, attachAppendDataTransfer, attachAppendEgoMessage, attachAppendFile, attachments: _attachments, clearAttachments, removeAttachment } =
useAttachments(browsingInComposer && !composeText.startsWith('/'));
// external overlay state (extra conversationId-dependent state)
const conversationHandler = props.conversationId ? ConversationsManager.getHandler(props.conversationId) : null;
const conversationOverlayStore = conversationHandler?.getOverlayStore() ?? null;
const { replyToGenerateText } = useChatOverlayStore(conversationOverlayStore, useShallow(store => ({
replyToGenerateText: chatModeId === 'generate-text' ? store.replyToText?.trim() || null : null,
})));
// derived state
@@ -163,6 +173,8 @@ export function Composer(props: {
const tokensHistory = _historyTokenCount;
const tokensReponseMax = (props.chatLLM?.options as LLMOptionsOpenAI /* FIXME: BIG ASSUMPTION */)?.llmResponseTokens || 0;
const tokenLimit = props.chatLLM?.contextTokens || 0;
const tokenPriceIn = props.chatLLM?.pricing?.chatIn;
const tokenPriceOut = props.chatLLM?.pricing?.chatOut;
// Effect: load initial text if queued up (e.g. by /link/share_targe)
@@ -174,6 +186,18 @@ export function Composer(props: {
}, [setComposeText, setStartupText, startupText]);
// Overlay actions
const handleReplyToCleared = React.useCallback(() => {
conversationOverlayStore?.getState().setReplyToText(null);
}, [conversationOverlayStore]);
React.useEffect(() => {
if (replyToGenerateText)
setTimeout(() => props.composerTextAreaRef.current?.focus(), 1 /* prevent focus theft */);
}, [replyToGenerateText, props.composerTextAreaRef]);
// Primary button
const { conversationId, onAction } = props;
@@ -182,28 +206,32 @@ export function Composer(props: {
if (!conversationId)
return false;
// get attachments
const multiPartMessage = llmAttachments.getAttachmentsOutputs(composerText || null);
// get the multipart output including all attachments
const multiPartMessage = llmAttachments.collapseWithAttachments(composerText || null);
if (!multiPartMessage.length)
return false;
// metadata
const metadata = replyToGenerateText ? { inReplyToText: replyToGenerateText } : undefined;
// send the message
const enqueued = onAction(_chatModeId, conversationId, multiPartMessage);
const enqueued = onAction(conversationId, _chatModeId, multiPartMessage, metadata);
if (enqueued) {
clearAttachments();
handleReplyToCleared();
setComposeText('');
}
return enqueued;
}, [clearAttachments, conversationId, llmAttachments, onAction, setComposeText]);
}, [clearAttachments, conversationId, handleReplyToCleared, llmAttachments, onAction, replyToGenerateText, setComposeText]);
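
When a reply-to quote is active, `handleSendAction` now attaches it as message metadata before dispatching. Only the `inReplyToText` field is visible in this diff, so the following is a guess at the minimal shape of `DMessageMetadata`:

```ts
// Assumed shape; the real DMessageMetadata in store-chats may carry more fields.
export interface DMessageMetadata {
  inReplyToText?: string; // quoted text the user is replying to
}
```
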
const handleSendClicked = React.useCallback(() => {
handleSendAction(chatModeId, composeText);
}, [chatModeId, composeText, handleSendAction]);
const handleSendTextBeamClicked = React.useCallback(() => {
labsBeam && handleSendAction('generate-text-beam', composeText);
}, [composeText, handleSendAction, labsBeam]);
handleSendAction('generate-text-beam', composeText);
}, [composeText, handleSendAction]);
const handleStopClicked = React.useCallback(() => {
!!props.conversationId && stopTyping(props.conversationId);
@@ -304,15 +332,15 @@ export function Composer(props: {
// Alt (Windows) or Option (Mac) + Enter: append the message instead of sending it
if (e.altKey) {
touchAltEnter();
handleSendAction('append-user', composeText);
if (handleSendAction('append-user', composeText))
touchAltEnter();
return e.preventDefault();
}
// Ctrl (Windows) or Command (Mac) + Enter: send for beaming
if (labsBeam && ((isMacUser && e.metaKey && !e.ctrlKey) || (!isMacUser && e.ctrlKey && !e.metaKey))) {
touchCtrlEnter();
handleSendAction('generate-text-beam', composeText);
if ((isMacUser && e.metaKey && !e.ctrlKey) || (!isMacUser && e.ctrlKey && !e.metaKey)) {
if (handleSendAction('generate-text-beam', composeText))
touchCtrlEnter();
return e.preventDefault();
}
@@ -326,7 +354,7 @@ export function Composer(props: {
}
}
}, [actileInterceptKeydown, assistantAbortible, chatModeId, composeText, enterIsNewline, handleSendAction, labsBeam, touchAltEnter, touchCtrlEnter, touchShiftEnter]);
}, [actileInterceptKeydown, assistantAbortible, chatModeId, composeText, enterIsNewline, handleSendAction, touchAltEnter, touchCtrlEnter, touchShiftEnter]);
// Focus mode
@@ -427,8 +455,8 @@ export function Composer(props: {
const handleAttachmentInlineText = React.useCallback((attachmentId: AttachmentId) => {
setComposeText(currentText => {
const attachmentOutputs = llmAttachments.getAttachmentOutputs(currentText, attachmentId);
const inlinedText = getTextBlockText(attachmentOutputs) || '';
const inlinedMultiPart = llmAttachments.collapseWithAttachment(currentText, attachmentId);
const inlinedText = getSingleTextBlockText(inlinedMultiPart) || '';
removeAttachment(attachmentId);
return inlinedText;
});
@@ -436,8 +464,8 @@ export function Composer(props: {
const handleAttachmentsInlineText = React.useCallback(() => {
setComposeText(currentText => {
const attachmentsOutputs = llmAttachments.getAttachmentsOutputs(currentText);
const inlinedText = getTextBlockText(attachmentsOutputs) || '';
const inlinedMultiPart = llmAttachments.collapseWithAttachments(currentText);
const inlinedText = getSingleTextBlockText(inlinedMultiPart) || '';
clearAttachments();
return inlinedText;
});
@@ -495,7 +523,8 @@ export function Composer(props: {
const isReAct = chatModeId === 'generate-react';
const isDraw = chatModeId === 'generate-image';
const showChatExtras = isText;
const showChatReplyTo = !!replyToGenerateText;
const showChatExtras = isText && !showChatReplyTo;
const buttonVariant: VariantProp = (isAppend || (isMobile && isTextBeam)) ? 'outlined' : 'solid';
@@ -525,15 +554,16 @@ export function Composer(props: {
isDraw ? 'Describe an idea or a drawing...'
: isReAct ? 'Multi-step reasoning question...'
: isTextBeam ? 'Beam: combine the smarts of models...'
: props.isDeveloperMode ? 'Chat with me' + (isDesktop ? ' · drop source' : '') + ' · attach code...'
: props.capabilityHasT2I ? 'Chat · /beam · /draw · drop files...'
: 'Chat · /react · drop files...';
: showChatReplyTo ? 'Chat about this'
: props.isDeveloperMode ? 'Chat with me' + (isDesktop ? ' · drop source' : '') + ' · attach code...'
: props.capabilityHasT2I ? 'Chat · /beam · /draw · drop files...'
: 'Chat · /react · drop files...';
if (isDesktop && timeToShowTips) {
if (explainShiftEnter)
textPlaceholder += !enterIsNewline ? '\n\n💡 Shift + Enter to add a new line' : '\n\n💡 Shift + Enter to send';
else if (explainAltEnter)
textPlaceholder += platformAwareKeystrokes('\n\n💡 Tip: Alt + Enter to just append the message');
else if (labsBeam && explainCtrlEnter)
else if (explainCtrlEnter)
textPlaceholder += platformAwareKeystrokes('\n\n💡 Tip: Ctrl + Enter to beam');
}
@@ -618,7 +648,7 @@ export function Composer(props: {
variant='outlined'
color={isDraw ? 'warning' : isReAct ? 'success' : undefined}
autoFocus
minRows={isMobile ? 4 : 5}
minRows={isMobile ? 4 : showChatReplyTo ? 4 : 5}
maxRows={isMobile ? 8 : 10}
placeholder={textPlaceholder}
value={composeText}
@@ -629,6 +659,7 @@ export function Composer(props: {
onPasteCapture={handleAttachCtrlV}
// onFocusCapture={handleFocusModeOn}
// onBlurCapture={handleFocusModeOff}
endDecorator={showChatReplyTo && <ReplyToBubble replyToText={replyToGenerateText} onClear={handleReplyToCleared} className='reply-to-bubble' />}
slotProps={{
textarea: {
enterKeyHint: enterIsNewline ? 'enter' : 'send',
@@ -641,16 +672,16 @@ export function Composer(props: {
}}
sx={{
backgroundColor: 'background.level1',
'&:focus-within': { backgroundColor: 'background.popup' },
'&:focus-within': { backgroundColor: 'background.popup', '.reply-to-bubble': { backgroundColor: 'background.popup' } },
lineHeight: lineHeightTextareaMd,
}} />
{tokenLimit > 0 && (tokensComposer > 0 || (tokensHistory + tokensReponseMax) > 0) && (
<TokenProgressbarMemo direct={tokensComposer} history={tokensHistory} responseMax={tokensReponseMax} limit={tokenLimit} />
{!showChatReplyTo && tokenLimit > 0 && (tokensComposer > 0 || (tokensHistory + tokensReponseMax) > 0) && (
<TokenProgressbarMemo direct={tokensComposer} history={tokensHistory} responseMax={tokensReponseMax} limit={tokenLimit} tokenPriceIn={tokenPriceIn} tokenPriceOut={tokenPriceOut} />
)}
{!!tokenLimit && (
<TokenBadgeMemo direct={tokensComposer} history={tokensHistory} responseMax={tokensReponseMax} limit={tokenLimit} showExcess absoluteBottomRight />
{!showChatReplyTo && tokenLimit > 0 && (
<TokenBadgeMemo direct={tokensComposer} history={tokensHistory} responseMax={tokensReponseMax} limit={tokenLimit} tokenPriceIn={tokenPriceIn} tokenPriceOut={tokenPriceOut} showCost={labsShowCost} showExcess absoluteBottomRight />
)}
</Box>
@@ -811,9 +842,10 @@ export function Composer(props: {
</ButtonGroup>
{/* [desktop] secondary-top buttons */}
{labsBeam && isDesktop && showChatExtras && !assistantAbortible && (
{isDesktop && showChatExtras && !assistantAbortible && (
<ButtonBeamMemo
disabled={!props.conversationId || !chatLLMId || !llmAttachments.isOutputAttacheable}
hasContent={!!composeText}
onClick={handleSendTextBeamClicked}
/>
)}
+110 -45
@@ -3,41 +3,81 @@ import * as React from 'react';
import { Badge, Box, ColorPaletteProp, Tooltip } from '@mui/joy';
function alignRight(value: number, columnSize: number = 7) {
function alignRight(value: number, columnSize: number = 8) {
const str = value.toLocaleString();
return str.padStart(columnSize);
}
function formatCost(cost: number) {
return cost < 1
? (cost * 100).toFixed(cost < 0.010 ? 2 : 1) + ' ¢'
: '$ ' + cost.toFixed(2);
}
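
For reference, the two helpers above behave like this (values follow directly from the code, assuming an en-US locale for `toLocaleString`):

```ts
formatCost(0.0042); // "0.42 ¢"   -> below $0.01: cents with two decimals
formatCost(0.05);   // "5.0 ¢"    -> below $1: cents with one decimal
formatCost(1.5);    // "$ 1.50"   -> $1 and above: dollars with two decimals
alignRight(8192);   // "   8,192" -> padded to the new 8-character column
```
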
export function tokensPrettyMath(tokenLimit: number | 0, directTokens: number, historyTokens?: number, responseMaxTokens?: number): {
color: ColorPaletteProp, message: string, remainingTokens: number
export function tokensPrettyMath(tokenLimit: number | 0, directTokens: number, historyTokens?: number, responseMaxTokens?: number, tokenPriceIn?: number, tokenPriceOut?: number): {
color: ColorPaletteProp,
message: string,
remainingTokens: number,
costMax?: number,
costMin?: number,
} {
const usedTokens = directTokens + (historyTokens || 0) + (responseMaxTokens || 0);
const remainingTokens = tokenLimit - usedTokens;
const usedInputTokens = directTokens + (historyTokens || 0);
const usedMaxTokens = usedInputTokens + (responseMaxTokens || 0);
const remainingTokens = tokenLimit - usedMaxTokens;
const gteLimit = (remainingTokens <= 0 && tokenLimit > 0);
// message
let message: string = gteLimit ? '⚠️ ' : '';
// costs
let costMax: number | undefined = undefined;
let costMin: number | undefined = undefined;
// no limit: show used tokens only
if (!tokenLimit) {
message += `Requested: ${usedTokens.toLocaleString()} tokens`;
message += `Requested: ${usedMaxTokens.toLocaleString()} tokens`;
}
// has full information (d + i < l)
else if (historyTokens || responseMaxTokens) {
message +=
`${Math.abs(remainingTokens).toLocaleString()} ${remainingTokens >= 0 ? 'available' : 'excess'} message tokens\n\n` +
`${Math.abs(remainingTokens).toLocaleString()} ${remainingTokens >= 0 ? 'available' : 'excess'} message tokens\n\n` +
` = Model max tokens: ${alignRight(tokenLimit)}\n` +
` - This message: ${alignRight(directTokens)}\n` +
` - History: ${alignRight(historyTokens || 0)}\n` +
` - Max response: ${alignRight(responseMaxTokens || 0)}`;
// add the price, if available
if (tokenPriceIn || tokenPriceOut) {
costMin = tokenPriceIn ? usedInputTokens * tokenPriceIn / 1E6 : undefined;
const costOutMax = (tokenPriceOut && responseMaxTokens) ? responseMaxTokens * tokenPriceOut / 1E6 : undefined;
if (costMin || costOutMax) {
message += `\n\n\n▶ Chat Turn Cost (max, approximate)\n`;
if (costMin) message += '\n' +
` Input tokens: ${alignRight(usedInputTokens)}\n` +
` Input Price $/M: ${tokenPriceIn!.toFixed(2).padStart(8)}\n` +
` Input cost: ${('$' + costMin!.toFixed(4)).padStart(8)}\n`;
if (costOutMax) message += '\n' +
` Max output tokens: ${alignRight(responseMaxTokens!)}\n` +
` Output Price $/M: ${tokenPriceOut!.toFixed(2).padStart(8)}\n` +
` Max output cost: ${('$' + costOutMax!.toFixed(4)).padStart(8)}\n`;
if (costMin) message += '\n' +
` > Min turn cost: ${formatCost(costMin).padStart(8)}`;
costMax = (costMin && costOutMax) ? costMin + costOutMax : undefined;
if (costMax) message += '\n' +
` < Max turn cost: ${formatCost(costMax).padStart(8)}`;
}
}
}
// Cleaner mode: d + ? < R (total is the remaining in this case)
else {
message +=
`${(tokenLimit + usedTokens).toLocaleString()} available tokens after deleting this\n\n` +
`${(tokenLimit + usedMaxTokens).toLocaleString()} available tokens after deleting this\n\n` +
` = Currently free: ${alignRight(tokenLimit)}\n` +
` + This message: ${alignRight(usedTokens)}`;
` + This message: ${alignRight(usedMaxTokens)}`;
}
const color: ColorPaletteProp =
@@ -47,23 +87,21 @@ export function tokensPrettyMath(tokenLimit: number | 0, directTokens: number, h
? 'warning'
: 'primary';
return { color, message, remainingTokens };
return { color, message, remainingTokens, costMax, costMin };
}
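
A worked example of the new cost math, using hypothetical pricing of $5 per million input tokens and $15 per million output tokens, a 3,000-token prompt-plus-history, and a 1,024-token max response:

```ts
const costMin = 3_000 * 5 / 1e6;      // 0.015    -> input-only ("min turn") cost
const costOutMax = 1_024 * 15 / 1e6;  // 0.01536  -> worst-case output cost
const costMax = costMin + costOutMax; // 0.03036
// With showCost enabled, TokenBadge renders '< ' + formatCost(costMax), i.e. "< 3.0 ¢".
```
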
export const TokenTooltip = (props: { message: string | null, color: ColorPaletteProp, placement?: 'top' | 'top-end', children: React.JSX.Element }) =>
export const TokenTooltip = (props: { message: string | null, color: ColorPaletteProp, placement?: 'top' | 'top-end', children: React.ReactElement }) =>
<Tooltip
placement={props.placement}
variant={props.color !== 'primary' ? 'solid' : 'soft'} color={props.color}
title={props.message
? <Box sx={{ p: 2, whiteSpace: 'pre' }}>
{props.message}
</Box>
: null
}
title={props.message ? <Box sx={{ p: 2, whiteSpace: 'pre' }}>{props.message}</Box> : null}
sx={{
fontFamily: 'code',
boxShadow: 'xl',
// fontSize: '0.8125rem',
border: '1px solid',
borderColor: `${props.color}.outlinedColor`,
boxShadow: 'md',
}}
>
{props.children}
@@ -76,38 +114,65 @@ export const TokenTooltip = (props: { message: string | null, color: ColorPalett
export const TokenBadgeMemo = React.memo(TokenBadge);
function TokenBadge(props: {
direct: number, history?: number, responseMax?: number, limit: number,
showExcess?: boolean, absoluteBottomRight?: boolean, inline?: boolean,
direct: number,
history?: number,
responseMax?: number,
limit: number,
tokenPriceIn?: number,
tokenPriceOut?: number,
showCost?: boolean
showExcess?: boolean,
absoluteBottomRight?: boolean,
inline?: boolean,
}) {
const { message, color, remainingTokens } = tokensPrettyMath(props.limit, props.direct, props.history, props.responseMax);
const { message, color, remainingTokens, costMax, costMin } =
tokensPrettyMath(props.limit, props.direct, props.history, props.responseMax, props.tokenPriceIn, props.tokenPriceOut);
// show the direct tokens, unless we exceed the limit and 'showExcess' is enabled
const value = (props.showExcess && (props.limit && remainingTokens <= 0))
? Math.abs(remainingTokens)
: props.direct;
let badgeValue: string;
const showAltCosts = !!props.showCost && !!costMax && costMin !== undefined;
if (showAltCosts) {
badgeValue = '< ' + formatCost(costMax);
} else {
// show the direct tokens, unless we exceed the limit and 'showExcess' is enabled
const value = (props.showExcess && (props.limit && remainingTokens <= 0))
? Math.abs(remainingTokens)
: props.direct;
badgeValue = value.toLocaleString();
}
const shallHide = !props.direct && remainingTokens >= 0 && !showAltCosts;
if (shallHide) return null;
return (
<Badge
variant='solid' color={color} max={100000}
invisible={!props.direct && remainingTokens >= 0}
badgeContent={
<TokenTooltip color={color} message={message}>
<span>{value.toLocaleString()}</span>
</TokenTooltip>
}
sx={{
...((props.absoluteBottomRight) && { position: 'absolute', bottom: 8, right: 8 }),
cursor: 'help',
}}
slotProps={{
badge: {
sx: {
fontFamily: 'code',
...((props.absoluteBottomRight || props.inline) && { position: 'static', transform: 'none' }),
<TokenTooltip color={color} message={message} placement='top-end'>
<Badge
variant='soft' color={color} max={1000000}
// invisible={shallHide}
badgeContent={badgeValue}
slotProps={{
root: {
sx: {
...((props.absoluteBottomRight) && { position: 'absolute', bottom: 8, right: 8 }),
cursor: 'help',
},
},
},
}}
/>
badge: {
sx: {
// the badge (not the tooltip)
// boxShadow: 'sm',
fontFamily: 'code',
fontSize: 'xs',
...((props.absoluteBottomRight || props.inline) && { position: 'static', transform: 'none' }),
},
},
}}
/>
</TokenTooltip>
);
}
@@ -12,7 +12,15 @@ import { tokensPrettyMath, TokenTooltip } from './TokenBadge';
*/
export const TokenProgressbarMemo = React.memo(TokenProgressbar);
function TokenProgressbar(props: { direct: number, history: number, responseMax: number, limit: number }) {
function TokenProgressbar(props: {
direct: number,
history: number,
responseMax: number,
limit: number,
tokenPriceIn?: number,
tokenPriceOut?: number,
}) {
// external state
const theme = useTheme();
@@ -40,7 +48,7 @@ function TokenProgressbar(props: { direct: number, history: number, responseMax:
const overflowColor = theme.palette.danger.softColor;
// tooltip message/color
const { message, color } = tokensPrettyMath(props.limit, props.direct, props.history, props.responseMax);
const { message, color } = tokensPrettyMath(props.limit, props.direct, props.history, props.responseMax, props.tokenPriceIn, props.tokenPriceOut);
// sizes
const containerHeight = 8;
@@ -153,7 +153,7 @@ export function Attachments(props: {
</MenuItem>
<MenuItem onClick={handleClearAttachments}>
<ListItemDecorator><ClearIcon /></ListItemDecorator>
Clear
Clear{attachments.length > 5 ? <span style={{ opacity: 0.5 }}> {attachments.length} attachments</span> : null}
</MenuItem>
</CloseableMenu>
)}
@@ -10,8 +10,8 @@ import type { ComposerOutputMultiPart, ComposerOutputPartType } from '../compose
export interface LLMAttachments {
attachments: LLMAttachment[];
getAttachmentOutputs: (initialTextBlockText: string | null, attachmentId: AttachmentId) => ComposerOutputMultiPart;
getAttachmentsOutputs: (initialTextBlockText: string | null) => ComposerOutputMultiPart;
collapseWithAttachment: (initialTextBlockText: string | null, attachmentId: AttachmentId) => ComposerOutputMultiPart;
collapseWithAttachments: (initialTextBlockText: string | null) => ComposerOutputMultiPart;
isOutputAttacheable: boolean;
isOutputTextInlineable: boolean;
tokenCountApprox: number;
@@ -37,13 +37,13 @@ export function useLLMAttachments(attachments: Attachment[], chatLLMId: DLLMId |
const llmAttachments = attachments.map(attachment => toLLMAttachment(attachment, supportedOutputPartTypes, chatLLMId));
const getAttachmentOutputs = (initialTextBlockText: string | null, attachmentId: AttachmentId): ComposerOutputMultiPart => {
const collapseWithAttachment = (initialTextBlockText: string | null, attachmentId: AttachmentId): ComposerOutputMultiPart => {
// get outputs of a specific attachment
const outputs = attachments.find(a => a.id === attachmentId)?.outputs || [];
return attachmentCollapseOutputs(initialTextBlockText, outputs);
};
const getAttachmentsOutputs = (initialTextBlockText: string | null): ComposerOutputMultiPart => {
const collapseWithAttachments = (initialTextBlockText: string | null): ComposerOutputMultiPart => {
// accumulate all outputs of all attachments
const allOutputs = llmAttachments.reduce((acc, a) => acc.concat(a.attachment.outputs), [] as ComposerOutputMultiPart);
return attachmentCollapseOutputs(initialTextBlockText, allOutputs);
@@ -51,8 +51,8 @@ export function useLLMAttachments(attachments: Attachment[], chatLLMId: DLLMId |
return {
attachments: llmAttachments,
getAttachmentOutputs,
getAttachmentsOutputs,
collapseWithAttachment,
collapseWithAttachments,
isOutputAttacheable: llmAttachments.every(a => a.isOutputAttachable),
isOutputTextInlineable: llmAttachments.every(a => a.isOutputTextInlineable),
tokenCountApprox: llmAttachments.reduce((acc, a) => acc + (a.tokenCountApprox || 0), 0),
@@ -60,7 +60,7 @@ export function useLLMAttachments(attachments: Attachment[], chatLLMId: DLLMId |
}, [attachments, chatLLMId]);
}
export function getTextBlockText(outputs: ComposerOutputMultiPart): string | null {
export function getSingleTextBlockText(outputs: ComposerOutputMultiPart): string | null {
const textOutputs = outputs.filter(part => part.type === 'text-block');
return (textOutputs.length === 1 && textOutputs[0].type === 'text-block') ? textOutputs[0].text : null;
}
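
The renamed helper keeps its behavior: it returns text only when the multipart output contains exactly one text block. Illustrative calls follow; the part literals are simplified to the fields the function reads, and the casts only keep the sketch compiling without the full ComposerOutputMultiPart part type:

```ts
getSingleTextBlockText([{ type: 'text-block', text: 'hello' }] as any); // 'hello'
getSingleTextBlockText([] as any);                                      // null
getSingleTextBlockText([
  { type: 'text-block', text: 'a' },
  { type: 'text-block', text: 'b' },
] as any);                                                              // null -> more than one text block
```
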
@@ -11,10 +11,14 @@ import { animationEnterBelow } from '~/common/util/animUtils';
const desktopLegend =
<Box sx={{ px: 1, py: 0.75, lineHeight: '1.5rem' }}>
Combine the answers from multiple models<br />
{/*{platformAwareKeystrokes('Ctrl + Enter')}*/}
<KeyStroke combo='Ctrl + Enter' sx={{ mt: 0.5, mb: 0.25 }} />
</Box>;
const desktopLegendNoContent =
<Box sx={{ px: 1, py: 0.75, lineHeight: '1.5rem' }}>
Enter the text to Beam, then press this
</Box>;
const mobileSx: SxProps = {
mr: { xs: 1, md: 2 },
};
@@ -31,13 +35,13 @@ const desktopSx: SxProps = {
export const ButtonBeamMemo = React.memo(ButtonBeam);
function ButtonBeam(props: { isMobile?: boolean, disabled?: boolean, onClick: () => void }) {
function ButtonBeam(props: { isMobile?: boolean, disabled?: boolean, hasContent?: boolean, onClick: () => void }) {
return props.isMobile ? (
<IconButton variant='soft' color='primary' disabled={props.disabled} onClick={props.onClick} sx={mobileSx}>
<ChatBeamIcon />
</IconButton>
) : (
<Tooltip disableInteractive variant='solid' arrow placement='right' title={desktopLegend}>
<Tooltip disableInteractive variant='solid' arrow placement='right' title={props.hasContent ? desktopLegend : desktopLegendNoContent}>
<Button variant='soft' color='primary' disabled={props.disabled} onClick={props.onClick} endDecorator={<ChatBeamIcon />} sx={desktopSx}>
Beam
</Button>
@@ -36,8 +36,9 @@ export function FolderListItem(props: {
// Menu
const handleMenuOpen = (event: React.MouseEvent<HTMLAnchorElement>) => {
setMenuAnchorEl(event.currentTarget);
const handleMenuToggle = (event: React.MouseEvent<HTMLAnchorElement>) => {
event.preventDefault(); // added for the Right mouse click (to prevent the menu)
setMenuAnchorEl(anchor => anchor ? null : event.currentTarget);
setDeleteArmed(false); // Reset delete armed state
};
@@ -188,9 +189,11 @@ export function FolderListItem(props: {
{/* Icon to show the Popup menu */}
<IconButton
size='sm'
variant='outlined'
className='menu-icon'
onClick={handleMenuOpen}
onClick={handleMenuToggle}
onContextMenu={handleMenuToggle}
sx={{
visibility: 'hidden',
my: '-0.25rem', /* absorb the button padding */
+282 -127
@@ -1,19 +1,21 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
import { useShallow } from 'zustand/react/shallow';
import type { SxProps } from '@mui/joy/styles/types';
import { Avatar, Box, CircularProgress, IconButton, ListDivider, ListItem, ListItemDecorator, MenuItem, Switch, Tooltip, Typography } from '@mui/joy';
import AccountTreeTwoToneIcon from '@mui/icons-material/AccountTreeTwoTone';
import { Avatar, Box, ButtonGroup, CircularProgress, IconButton, ListDivider, ListItem, ListItemDecorator, MenuItem, Switch, Tooltip, Typography } from '@mui/joy';
import { ClickAwayListener, Popper } from '@mui/base';
import AccountTreeOutlinedIcon from '@mui/icons-material/AccountTreeOutlined';
import ClearIcon from '@mui/icons-material/Clear';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
import DifferenceIcon from '@mui/icons-material/Difference';
import EditRoundedIcon from '@mui/icons-material/EditRounded';
import Face6Icon from '@mui/icons-material/Face6';
import ForkRightIcon from '@mui/icons-material/ForkRight';
import FormatPaintTwoToneIcon from '@mui/icons-material/FormatPaintTwoTone';
import FormatPaintOutlinedIcon from '@mui/icons-material/FormatPaintOutlined';
import MoreVertIcon from '@mui/icons-material/MoreVert';
import RecordVoiceOverTwoToneIcon from '@mui/icons-material/RecordVoiceOverTwoTone';
import RecordVoiceOverOutlinedIcon from '@mui/icons-material/RecordVoiceOverOutlined';
import ReplayIcon from '@mui/icons-material/Replay';
import ReplyRoundedIcon from '@mui/icons-material/ReplyRounded';
import SettingsSuggestIcon from '@mui/icons-material/SettingsSuggest';
import SmartToyOutlinedIcon from '@mui/icons-material/SmartToyOutlined';
import StarOutlineRoundedIcon from '@mui/icons-material/StarOutlineRounded';
@@ -32,29 +34,31 @@ import { DMessage, DMessageUserFlag, messageHasUserFlag } from '~/common/state/s
import { InlineTextarea } from '~/common/components/InlineTextarea';
import { KeyStroke } from '~/common/components/KeyStroke';
import { Link } from '~/common/components/Link';
import { adjustContentScaling, themeScalingMap } from '~/common/app.theme';
import { adjustContentScaling, themeScalingMap, themeZIndexPageBar } from '~/common/app.theme';
import { animationColorRainbow } from '~/common/util/animUtils';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import { prettyBaseModel } from '~/common/util/modelUtils';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { ReplyToBubble } from './ReplyToBubble';
import { useChatShowTextDiff } from '../../store-app-chat';
// Enable the menu on text selection
const ENABLE_SELECTION_RIGHT_CLICK_MENU: boolean = true;
const ENABLE_SELECTION_RIGHT_CLICK_MENU = false;
const ENABLE_SELECTION_TOOLBAR = true;
const SELECTION_TOOLBAR_MIN_LENGTH = 3;
// Enable the hover button to copy the whole message. The Copy button is also available in Blocks, or in the Avatar Menu.
const ENABLE_COPY_MESSAGE_OVERLAY: boolean = false;
export function messageBackground(messageRole: DMessage['role'] | string, wasEdited: boolean, unknownAssistantIssue: boolean): string {
export function messageBackground(messageRole: DMessage['role'] | string, wasEdited: boolean, isAssistantIssue: boolean): string {
switch (messageRole) {
case 'user':
return 'primary.plainHoverBg'; // was .background.level1
case 'assistant':
return unknownAssistantIssue ? 'danger.softBg' : 'background.surface';
return isAssistantIssue ? 'danger.softBg' : 'background.surface';
case 'system':
return wasEdited ? 'warning.softHoverBg' : 'neutral.softBg';
default:
@@ -114,7 +118,7 @@ export function makeAvatar(messageAvatar: string | null, messageRole: DMessage['
// icon: text-to-image
if (isTextToImage)
return <FormatPaintTwoToneIcon sx={{
return <FormatPaintOutlinedIcon sx={{
...avatarIconSx,
animation: `${animationColorRainbow} 1s linear 2.66`,
}} />;
@@ -228,6 +232,7 @@ export function ChatMessage(props: {
onMessageEdit?: (messageId: string, text: string) => void,
onMessageToggleUserFlag?: (messageId: string, flag: DMessageUserFlag) => void,
onMessageTruncate?: (messageId: string) => void,
onReplyTo?: (messageId: string, selectedText: string) => void,
onTextDiagram?: (messageId: string, text: string) => Promise<void>
onTextImagine?: (text: string) => Promise<void>
onTextSpeak?: (text: string) => Promise<void>
@@ -235,20 +240,21 @@ export function ChatMessage(props: {
}) {
// state
const blocksRendererRef = React.useRef<HTMLDivElement>(null);
const [isHovering, setIsHovering] = React.useState(false);
const [opsMenuAnchor, setOpsMenuAnchor] = React.useState<HTMLElement | null>(null);
const [selMenuAnchor, setSelMenuAnchor] = React.useState<HTMLElement | null>(null);
const [selMenuText, setSelMenuText] = React.useState<string | null>(null);
const [selToolbarAnchor, setSelToolbarAnchor] = React.useState<HTMLElement | null>(null);
const [selText, setSelText] = React.useState<string | null>(null);
const [isEditing, setIsEditing] = React.useState(false);
// external state
const labsBeam = useUXLabsStore(state => state.labsBeam);
const { showAvatar, contentScaling, doubleClickToEdit, renderMarkdown } = useUIPreferencesStore(state => ({
const { showAvatar, contentScaling, doubleClickToEdit, renderMarkdown } = useUIPreferencesStore(useShallow(state => ({
showAvatar: props.showAvatar !== undefined ? props.showAvatar : state.zenMode !== 'cleaner',
contentScaling: adjustContentScaling(state.contentScaling, props.adjustContentScaling),
doubleClickToEdit: state.doubleClickToEdit,
renderMarkdown: state.renderMarkdown,
}), shallow);
})));
const [showDiff, setShowDiff] = useChatShowTextDiff();
const textDiffs = useSanityTextDiffs(props.message.text, props.diffPreviousText, showDiff);
@@ -262,6 +268,7 @@ export function ChatMessage(props: {
role: messageRole,
purposeId: messagePurposeId,
originLLM: messageOriginLLM,
metadata: messageMetadata,
created: messageCreated,
updated: messageUpdated,
} = props.message;
@@ -272,10 +279,10 @@ export function ChatMessage(props: {
const fromSystem = messageRole === 'system';
const wasEdited = !!messageUpdated;
const textSel = selMenuText ? selMenuText : messageText;
const textSel = selText ? selText : messageText;
const isSpecialT2I = textSel.startsWith('https://images.prodia.xyz/') || textSel.startsWith('/draw ') || textSel.startsWith('/imagine ') || textSel.startsWith('/img ');
const couldDiagram = textSel?.length >= 100 && !isSpecialT2I;
const couldImagine = textSel?.length >= 2 && !isSpecialT2I;
const couldDiagram = textSel.length >= 100 && !isSpecialT2I;
const couldImagine = textSel.length >= 3 && !isSpecialT2I;
const couldSpeak = couldImagine;
@@ -290,21 +297,27 @@ export function ChatMessage(props: {
const { onMessageToggleUserFlag } = props;
const closeOpsMenu = () => setOpsMenuAnchor(null);
const handleOpsMenuToggle = React.useCallback((event: React.MouseEvent<HTMLElement>) => {
event.preventDefault(); // added for the Right mouse click (to prevent the menu)
setOpsMenuAnchor(anchor => anchor ? null : event.currentTarget);
}, []);
const handleCloseOpsMenu = React.useCallback(() => setOpsMenuAnchor(null), []);
const handleOpsCopy = (e: React.MouseEvent) => {
copyToClipboard(textSel, 'Text');
e.preventDefault();
closeOpsMenu();
handleCloseOpsMenu();
closeSelectionMenu();
closeToolbar();
};
const handleOpsEdit = React.useCallback((e: React.MouseEvent) => {
if (messageTyping && !isEditing) return; // don't allow editing while typing
setIsEditing(!isEditing);
e.preventDefault();
closeOpsMenu();
}, [isEditing, messageTyping]);
handleCloseOpsMenu();
}, [handleCloseOpsMenu, isEditing, messageTyping]);
const handleOpsToggleStarred = React.useCallback(() => {
onMessageToggleUserFlag?.(messageId, 'starred');
@@ -312,21 +325,21 @@ export function ChatMessage(props: {
const handleOpsAssistantFrom = async (e: React.MouseEvent) => {
e.preventDefault();
closeOpsMenu();
handleCloseOpsMenu();
await props.onMessageAssistantFrom?.(messageId, fromAssistant ? -1 : 0);
};
const handleOpsBeamFrom = async (e: React.MouseEvent) => {
e.stopPropagation();
closeOpsMenu();
labsBeam && await props.onMessageBeam?.(messageId);
handleCloseOpsMenu();
await props.onMessageBeam?.(messageId);
};
const handleOpsBranch = (e: React.MouseEvent) => {
e.preventDefault();
e.stopPropagation(); // try not to steal focus from the branched conversation
props.onMessageBranch?.(messageId);
closeOpsMenu();
handleCloseOpsMenu();
};
const handleOpsToggleShowDiff = () => setShowDiff(!showDiff);
@@ -335,8 +348,9 @@ export function ChatMessage(props: {
e.preventDefault();
if (props.onTextDiagram) {
await props.onTextDiagram(messageId, textSel);
closeOpsMenu();
handleCloseOpsMenu();
closeSelectionMenu();
closeToolbar();
}
};
@@ -344,8 +358,19 @@ export function ChatMessage(props: {
e.preventDefault();
if (props.onTextImagine) {
await props.onTextImagine(textSel);
closeOpsMenu();
handleCloseOpsMenu();
closeSelectionMenu();
closeToolbar();
}
};
const handleOpsReplyTo = (e: React.MouseEvent) => {
e.preventDefault();
if (props.onReplyTo && textSel.trim().length >= SELECTION_TOOLBAR_MIN_LENGTH) {
props.onReplyTo(messageId, textSel.trim());
handleCloseOpsMenu();
closeSelectionMenu();
closeToolbar();
}
};
@@ -353,14 +378,15 @@ export function ChatMessage(props: {
e.preventDefault();
if (props.onTextSpeak) {
await props.onTextSpeak(textSel);
closeOpsMenu();
handleCloseOpsMenu();
closeSelectionMenu();
closeToolbar();
}
};
const handleOpsTruncate = (_e: React.MouseEvent) => {
props.onMessageTruncate?.(messageId);
closeOpsMenu();
handleCloseOpsMenu();
};
const handleOpsDelete = (_e: React.MouseEvent) => {
@@ -395,17 +421,17 @@ export function ChatMessage(props: {
document.body.appendChild(anchorEl);
setSelMenuAnchor(anchorEl);
setSelMenuText(selectedText);
setSelText(selectedText);
}, [removeSelectionAnchor]);
const closeSelectionMenu = React.useCallback(() => {
// window.getSelection()?.removeAllRanges?.();
removeSelectionAnchor();
setSelMenuAnchor(null);
setSelMenuText(null);
setSelText(null);
}, [removeSelectionAnchor]);
const handleMouseUp = React.useCallback((event: MouseEvent) => {
const handleContextMenu = React.useCallback((event: MouseEvent) => {
const selection = window.getSelection();
if (selection && selection.rangeCount > 0) {
const range = selection.getRangeAt(0);
@@ -416,16 +442,74 @@ export function ChatMessage(props: {
}, [openSelectionMenu]);
// Selection Toolbar
const closeToolbar = React.useCallback((anchorEl?: HTMLElement) => {
window.getSelection()?.removeAllRanges?.();
try {
const anchor = anchorEl || selToolbarAnchor;
anchor && document.body.removeChild(anchor);
} catch (e) {
// ignore...
}
setSelToolbarAnchor(null);
setSelText(null);
}, [selToolbarAnchor]);
const handleOpenToolbar = React.useCallback((_event: MouseEvent) => {
// check for selection
const selection = window.getSelection();
if (!selection || selection.rangeCount <= 0) return;
// check for enough selected text
const selectionText = selection.toString().trim();
if (selectionText.length < SELECTION_TOOLBAR_MIN_LENGTH) return;
// check for the selection being inside the blocks renderer (core of the message)
const selectionRange = selection.getRangeAt(0);
const blocksElement = blocksRendererRef.current;
if (!blocksElement || !blocksElement.contains(selectionRange.commonAncestorContainer)) return;
const rangeRects = selectionRange.getClientRects();
if (rangeRects.length <= 0) return;
const firstRect = rangeRects[0];
const anchorEl = document.createElement('div');
anchorEl.style.position = 'fixed';
anchorEl.style.left = `${firstRect.left + window.scrollX}px`;
anchorEl.style.top = `${firstRect.top + window.scrollY}px`;
document.body.appendChild(anchorEl);
anchorEl.setAttribute('role', 'dialog');
// auto-close logic on unselect
const closeOnUnselect = () => {
const selection = window.getSelection();
if (!selection || selection.toString().trim() === '') {
closeToolbar(anchorEl);
document.removeEventListener('selectionchange', closeOnUnselect);
}
};
document.addEventListener('selectionchange', closeOnUnselect);
setSelToolbarAnchor(anchorEl);
setSelText(selectionText);
}, [closeToolbar]);
// Blocks renderer
const handleBlocksContextMenu = React.useCallback((event: React.MouseEvent) => {
handleMouseUp(event.nativeEvent);
}, [handleMouseUp]);
handleContextMenu(event.nativeEvent);
}, [handleContextMenu]);
const handleBlocksDoubleClick = React.useCallback((event: React.MouseEvent) => {
doubleClickToEdit && props.onMessageEdit && handleOpsEdit(event);
}, [doubleClickToEdit, handleOpsEdit, props.onMessageEdit]);
const handleBlocksMouseUp = React.useCallback((event: React.MouseEvent) => {
handleOpenToolbar(event.nativeEvent);
}, [handleOpenToolbar]);
// prettify upstream errors
const { isAssistantError, errorMessage } = React.useMemo(
@@ -446,6 +530,7 @@ export function ChatMessage(props: {
return (
<ListItem
role='chat-message'
onMouseUp={(ENABLE_SELECTION_TOOLBAR && !fromSystem && !isAssistantError) ? handleBlocksMouseUp : undefined}
sx={{
// style
backgroundColor: backgroundColor,
@@ -468,92 +553,97 @@ export function ChatMessage(props: {
}),
// style: make room for a top decorator if set
...(!!props.topDecorator && {
pt: '2.5rem',
}),
'&:hover > button': { opacity: 1 },
// layout
display: 'flex',
flexDirection: !fromAssistant ? 'row-reverse' : 'row',
alignItems: 'flex-start',
gap: { xs: 0, md: 1 },
display: 'block', // this is needed; otherwise there would be a horizontal overflow
...props.sx,
}}
>
{/* (Optional) underlayed top decorator */}
{props.topDecorator && (
<Box sx={{ position: 'absolute', left: 0, right: 0, top: 0, textAlign: 'center' }}>
{props.topDecorator}
</Box>
)}
{props.topDecorator}
{/* Avatar (Persona) */}
{showAvatar && (
<Box sx={personaSx}>
{/* Message Row: Avatar, Blocks (1 text -> blocksRenderer) */}
<Box sx={{
display: 'flex',
flexDirection: !fromAssistant ? 'row-reverse' : 'row',
alignItems: 'flex-start',
gap: { xs: 0, md: 1 },
}}>
{/* Persona Avatar or Menu Button */}
<Box
onClick={event => setOpsMenuAnchor(event.currentTarget)}
onMouseEnter={() => setIsHovering(true)}
onMouseLeave={() => setIsHovering(false)}
sx={{ display: 'flex' }}
>
{(isHovering || opsMenuAnchor) ? (
<IconButton variant={opsMenuAnchor ? 'solid' : 'soft'} color={(fromAssistant || fromSystem) ? 'neutral' : 'primary'} sx={avatarIconSx}>
<MoreVertIcon />
</IconButton>
) : (
avatarEl
{/* Avatar (Persona) */}
{showAvatar && (
<Box sx={personaSx}>
{/* Persona Avatar or Menu Button */}
<Box
onClick={handleOpsMenuToggle}
onContextMenu={handleOpsMenuToggle}
onMouseEnter={() => setIsHovering(true)}
onMouseLeave={() => setIsHovering(false)}
sx={{ display: 'flex' }}
>
{(isHovering || opsMenuAnchor) ? (
<IconButton variant={opsMenuAnchor ? 'solid' : 'soft'} color={(fromAssistant || fromSystem) ? 'neutral' : 'primary'} sx={avatarIconSx}>
<MoreVertIcon />
</IconButton>
) : (
avatarEl
)}
</Box>
{/* Assistant model name */}
{fromAssistant && (
<Tooltip arrow title={messageTyping ? null : (messageOriginLLM || 'unk-model')} variant='solid'>
<Typography level='body-xs' sx={{
overflowWrap: 'anywhere',
...(messageTyping ? { animation: `${animationColorRainbow} 5s linear infinite` } : {}),
}}>
{prettyBaseModel(messageOriginLLM)}
</Typography>
</Tooltip>
)}
</Box>
{/* Assistant model name */}
{fromAssistant && (
<Tooltip arrow title={messageTyping ? null : (messageOriginLLM || 'unk-model')} variant='solid'>
<Typography level='body-xs' sx={{
overflowWrap: 'anywhere',
...(messageTyping ? { animation: `${animationColorRainbow} 5s linear infinite` } : {}),
}}>
{prettyBaseModel(messageOriginLLM)}
</Typography>
</Tooltip>
)}
</Box>
)}
)}
{/* Edit / Blocks */}
{isEditing ? (
{/* Edit / Blocks */}
{isEditing ? (
<InlineTextarea
initialText={messageText} onEdit={handleTextEdited}
sx={editBlocksSx}
/>
<InlineTextarea
initialText={messageText} onEdit={handleTextEdited}
sx={editBlocksSx}
/>
) : (
) : (
<BlocksRenderer
text={messageText}
fromRole={messageRole}
contentScaling={contentScaling}
errorMessage={errorMessage}
fitScreen={props.fitScreen}
isBottom={props.isBottom}
renderTextAsMarkdown={renderMarkdown}
renderTextDiff={textDiffs || undefined}
showDate={props.showBlocksDate === true ? messageUpdated || messageCreated || undefined : undefined}
showUnsafeHtml={props.showUnsafeHtml}
wasUserEdited={wasEdited}
onContextMenu={(props.onMessageEdit && ENABLE_SELECTION_RIGHT_CLICK_MENU) ? handleBlocksContextMenu : undefined}
onDoubleClick={(props.onMessageEdit && doubleClickToEdit) ? handleBlocksDoubleClick : undefined}
optiAllowMemo={messageTyping}
/>
<BlocksRenderer
ref={blocksRendererRef}
text={messageText}
fromRole={messageRole}
contentScaling={contentScaling}
errorMessage={errorMessage}
fitScreen={props.fitScreen}
isBottom={props.isBottom}
renderTextAsMarkdown={renderMarkdown}
renderTextDiff={textDiffs || undefined}
showDate={props.showBlocksDate === true ? messageUpdated || messageCreated || undefined : undefined}
showUnsafeHtml={props.showUnsafeHtml}
wasUserEdited={wasEdited}
onContextMenu={(props.onMessageEdit && ENABLE_SELECTION_RIGHT_CLICK_MENU) ? handleBlocksContextMenu : undefined}
onDoubleClick={(props.onMessageEdit && doubleClickToEdit) ? handleBlocksDoubleClick : undefined}
optiAllowMemo={messageTyping}
/>
)}
)}
</Box>
{/* Reply-To Bubble */}
{!!messageMetadata?.inReplyToText && <ReplyToBubble inlineMessage replyToText={messageMetadata.inReplyToText} className='reply-to-bubble' />}
{/* Overlay copy icon */}
@@ -575,7 +665,7 @@ export function ChatMessage(props: {
{!!opsMenuAnchor && (
<CloseableMenu
dense placement='bottom-end'
open anchorEl={opsMenuAnchor} onClose={closeOpsMenu}
open anchorEl={opsMenuAnchor} onClose={handleCloseOpsMenu}
sx={{ minWidth: 280 }}
>
@@ -637,6 +727,26 @@ export function ChatMessage(props: {
<span style={{ opacity: 0.5 }}>after this</span>
</MenuItem>
)}
{/* Diagram / Draw / Speak */}
{!!props.onTextDiagram && <ListDivider />}
{!!props.onTextDiagram && (
<MenuItem onClick={handleOpsDiagram} disabled={!couldDiagram}>
<ListItemDecorator><AccountTreeOutlinedIcon /></ListItemDecorator>
Auto-Diagram ...
</MenuItem>
)}
{!!props.onTextImagine && (
<MenuItem onClick={handleOpsImagine} disabled={!couldImagine || props.isImagining}>
<ListItemDecorator>{props.isImagining ? <CircularProgress size='sm' /> : <FormatPaintOutlinedIcon />}</ListItemDecorator>
Auto-Draw
</MenuItem>
)}
{!!props.onTextSpeak && (
<MenuItem onClick={handleOpsSpeak} disabled={!couldSpeak || props.isSpeaking}>
<ListItemDecorator>{props.isSpeaking ? <CircularProgress size='sm' /> : <RecordVoiceOverOutlinedIcon />}</ListItemDecorator>
Speak
</MenuItem>
)}
{/* Diff Viewer */}
{!!props.diffPreviousText && <ListDivider />}
{!!props.diffPreviousText && (
@@ -646,26 +756,6 @@ export function ChatMessage(props: {
<Switch checked={showDiff} onChange={handleOpsToggleShowDiff} sx={{ ml: 'auto' }} />
</MenuItem>
)}
{/* Diagram / Draw / Speak */}
{!!props.onTextDiagram && <ListDivider />}
{!!props.onTextDiagram && (
<MenuItem onClick={handleOpsDiagram} disabled={!couldDiagram}>
<ListItemDecorator><AccountTreeTwoToneIcon /></ListItemDecorator>
Auto-Diagram ...
</MenuItem>
)}
{!!props.onTextImagine && (
<MenuItem onClick={handleOpsImagine} disabled={!couldImagine || props.isImagining}>
<ListItemDecorator>{props.isImagining ? <CircularProgress size='sm' /> : <FormatPaintTwoToneIcon />}</ListItemDecorator>
Auto-Draw
</MenuItem>
)}
{!!props.onTextSpeak && (
<MenuItem onClick={handleOpsSpeak} disabled={!couldSpeak || props.isSpeaking}>
<ListItemDecorator>{props.isSpeaking ? <CircularProgress size='sm' /> : <RecordVoiceOverTwoToneIcon />}</ListItemDecorator>
Speak
</MenuItem>
)}
{/* Beam/Restart */}
{(!!props.onMessageAssistantFrom || !!props.onMessageBeam) && <ListDivider />}
{!!props.onMessageAssistantFrom && (
@@ -678,7 +768,7 @@ export function ChatMessage(props: {
: <Box sx={{ flexGrow: 1, display: 'flex', justifyContent: 'space-between', gap: 1 }}>Retry<KeyStroke combo='Ctrl + Shift + R' /></Box>}
</MenuItem>
)}
{!!props.onMessageBeam && labsBeam && (
{!!props.onMessageBeam && (
<MenuItem disabled={fromSystem} onClick={handleOpsBeamFrom}>
<ListItemDecorator>
<ChatBeamIcon color={fromSystem ? undefined : 'primary'} />
@@ -693,6 +783,71 @@ export function ChatMessage(props: {
</CloseableMenu>
)}
{/* Selection Toolbar */}
{ENABLE_SELECTION_TOOLBAR && !!selToolbarAnchor && (
<Popper placement='top-start' open anchorEl={selToolbarAnchor} slotProps={{
root: { style: { zIndex: themeZIndexPageBar + 1 } },
}}>
<ClickAwayListener onClickAway={() => closeToolbar()}>
<ButtonGroup
variant='plain'
sx={{
'--ButtonGroup-separatorColor': 'none !important',
'--ButtonGroup-separatorSize': 0,
borderRadius: '0',
backgroundColor: 'background.popup',
border: '1px solid',
borderColor: 'primary.outlinedBorder',
boxShadow: '0px 4px 12px -4px rgb(var(--joy-palette-neutral-darkChannel) / 50%)',
mb: 1,
ml: -1,
alignItems: 'center',
'& > button': {
'--Icon-fontSize': '1rem',
minHeight: '2.5rem',
minWidth: '2.75rem',
},
}}
>
{!!props.onReplyTo && fromAssistant && <Tooltip disableInteractive arrow placement='top' title='Reply'>
<IconButton color='primary' onClick={handleOpsReplyTo}>
<ReplyRoundedIcon sx={{ fontSize: 'xl' }} />
</IconButton>
</Tooltip>}
{/*{!!props.onMessageBeam && fromAssistant && <Tooltip disableInteractive arrow placement='top' title='Beam'>*/}
{/* <IconButton color='primary'>*/}
{/* <ChatBeamIcon sx={{ fontSize: 'xl' }} />*/}
{/* </IconButton>*/}
{/*</Tooltip>}*/}
{!!props.onReplyTo && fromAssistant && <MoreVertIcon sx={{ color: 'neutral.outlinedBorder', fontSize: 'md' }} />}
<Tooltip disableInteractive arrow placement='top' title='Copy'>
<IconButton onClick={handleOpsCopy}>
<ContentCopyIcon />
</IconButton>
</Tooltip>
{(!!props.onTextDiagram || !!props.onTextSpeak) && <MoreVertIcon sx={{ color: 'neutral.outlinedBorder', fontSize: 'md' }} />}
{!!props.onTextDiagram && <Tooltip disableInteractive arrow placement='top' title={couldDiagram ? 'Auto-Diagram...' : 'Too short to Auto-Diagram'}>
<IconButton onClick={couldDiagram ? handleOpsDiagram : undefined}>
<AccountTreeOutlinedIcon sx={{ color: couldDiagram ? 'primary' : 'neutral.plainDisabledColor' }} />
</IconButton>
</Tooltip>}
{/*{!!props.onTextImagine && <Tooltip disableInteractive arrow placement='top' title='Auto-Draw'>*/}
{/* <IconButton onClick={handleOpsImagine} disabled={!couldImagine || props.isImagining}>*/}
{/* {!props.isImagining ? <FormatPaintOutlinedIcon /> : <CircularProgress sx={{ '--CircularProgress-size': '16px' }} />}*/}
{/* </IconButton>*/}
{/*</Tooltip>}*/}
{!!props.onTextSpeak && <Tooltip disableInteractive arrow placement='top' title='Speak'>
<IconButton onClick={handleOpsSpeak} disabled={!couldSpeak || props.isSpeaking}>
{!props.isSpeaking ? <RecordVoiceOverOutlinedIcon /> : <CircularProgress sx={{ '--CircularProgress-size': '16px' }} />}
</IconButton>
</Tooltip>}
</ButtonGroup>
</ClickAwayListener>
</Popper>
)}
{/* Selection (Contextual) Menu */}
{!!selMenuAnchor && (
<CloseableMenu
@@ -706,15 +861,15 @@ export function ChatMessage(props: {
</MenuItem>
{!!props.onTextDiagram && <ListDivider />}
{!!props.onTextDiagram && <MenuItem onClick={handleOpsDiagram} disabled={!couldDiagram || props.isImagining}>
<ListItemDecorator><AccountTreeTwoToneIcon /></ListItemDecorator>
<ListItemDecorator><AccountTreeOutlinedIcon /></ListItemDecorator>
Auto-Diagram ...
</MenuItem>}
{!!props.onTextImagine && <MenuItem onClick={handleOpsImagine} disabled={!couldImagine || props.isImagining}>
<ListItemDecorator>{props.isImagining ? <CircularProgress size='sm' /> : <FormatPaintTwoToneIcon />}</ListItemDecorator>
<ListItemDecorator>{props.isImagining ? <CircularProgress size='sm' /> : <FormatPaintOutlinedIcon />}</ListItemDecorator>
Auto-Draw
</MenuItem>}
{!!props.onTextSpeak && <MenuItem onClick={handleOpsSpeak} disabled={!couldSpeak || props.isSpeaking}>
<ListItemDecorator>{props.isSpeaking ? <CircularProgress size='sm' /> : <RecordVoiceOverTwoToneIcon />}</ListItemDecorator>
<ListItemDecorator>{props.isSpeaking ? <CircularProgress size='sm' /> : <RecordVoiceOverOutlinedIcon />}</ListItemDecorator>
Speak
</MenuItem>}
</CloseableMenu>
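The selection toolbar added above hinges on one DOM trick: a throw-away, fixed-position anchor element is created at the first client rect of the current selection and used as the Popper anchor until a 'selectionchange' event closes it. A minimal standalone sketch of that technique (browser DOM only; the function name and parameters below are illustrative, not part of the diff):

// Sketch: create a fixed-position anchor <div> at the start of the current text
// selection, to be used as a Popper/tooltip anchor. Returns null if the selection
// is empty, too short, or outside the given container.
export function createSelectionAnchor(container: HTMLElement, minLength: number): HTMLDivElement | null {
  const selection = window.getSelection();
  if (!selection || selection.rangeCount <= 0) return null;

  const selectedText = selection.toString().trim();
  if (selectedText.length < minLength) return null;

  const range = selection.getRangeAt(0);
  if (!container.contains(range.commonAncestorContainer)) return null;

  const rects = range.getClientRects();
  if (rects.length <= 0) return null;

  // 'fixed' positioning is viewport-relative, so the client rect can be used directly
  const anchorEl = document.createElement('div');
  anchorEl.style.position = 'fixed';
  anchorEl.style.left = `${rects[0].left}px`;
  anchorEl.style.top = `${rects[0].top}px`;
  document.body.appendChild(anchorEl);
  return anchorEl; // the caller removes it from document.body when the toolbar closes
}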
@@ -0,0 +1,85 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, IconButton, Tooltip, Typography } from '@mui/joy';
import CloseRoundedIcon from '@mui/icons-material/CloseRounded';
import ReplyRoundedIcon from '@mui/icons-material/ReplyRounded';
// configuration
const INLINE_COLOR = 'primary';
const bubbleComposerSx: SxProps = {
// contained
width: '100%',
zIndex: 2, // stays on top of the 'tokens' bubble in the composer
// style
backgroundColor: 'background.surface',
border: '1px solid',
borderColor: 'neutral.outlinedBorder',
borderRadius: 'sm',
boxShadow: 'xs',
padding: '0.5rem 0.25rem 0.5rem 0.5rem',
// layout
display: 'flex',
alignItems: 'start',
};
const inlineMessageSx: SxProps = {
...bubbleComposerSx,
// redefine
// border: 'none',
mt: 1,
borderColor: `${INLINE_COLOR}.outlinedColor`,
borderRadius: 'sm',
boxShadow: 'xs',
width: undefined,
padding: '0.375rem 0.25rem 0.375rem 0.5rem',
// self-layout (parent is 'block'; 'grid' was not working and made the app scroll on the x-axis on mobile)
// ml: 'auto',
float: 'inline-end',
mr: { xs: 7.75, md: 10.5 }, // personaSx.minWidth + gap (md: 1) + 1.5 (text margin)
};
export function ReplyToBubble(props: {
replyToText: string | null,
inlineMessage?: boolean
onClear?: () => void,
className?: string,
}) {
return (
<Box className={props.className} sx={!props.inlineMessage ? bubbleComposerSx : inlineMessageSx}>
<Tooltip disableInteractive arrow title='Referring to this assistant text' placement='top'>
<ReplyRoundedIcon sx={{
color: props.inlineMessage ? `${INLINE_COLOR}.outlinedColor` : 'primary.solidBg',
fontSize: 'xl',
mt: 0.125,
}} />
</Tooltip>
<Typography level='body-sm' sx={{
flex: 1,
ml: 1,
mr: 0.5,
overflow: 'auto',
maxHeight: '5.75rem',
lineHeight: 'xl',
color: /*props.inlineMessage ? 'text.tertiary' :*/ 'text.secondary',
whiteSpace: 'break-spaces', // 'balance'
}}>
{props.replyToText}
</Typography>
{!!props.onClear && (
<IconButton size='sm' onClick={props.onClear} sx={{ my: -0.5, background: 'none' }}>
<CloseRoundedIcon />
</IconButton>
)}
</Box>
);
}
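A usage sketch for the new ReplyToBubble (illustrative only; the wrapper component and prop names below are hypothetical): the composer variant takes the full width and exposes onClear, while the inline variant floats at the end of a chat message.

import * as React from 'react';
import { ReplyToBubble } from './ReplyToBubble';

export function ReplyToBubbleDemo(props: { replyToText: string | null, onClearReplyTo: () => void }) {
  if (!props.replyToText) return null;
  return (
    <>
      {/* Composer variant: full width, dismissible */}
      <ReplyToBubble replyToText={props.replyToText} onClear={props.onClearReplyTo} />
      {/* Inline variant: floated inside a ChatMessage, not dismissible */}
      <ReplyToBubble inlineMessage replyToText={props.replyToText} className='reply-to-bubble' />
    </>
  );
}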
@@ -1,5 +1,6 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
import { v4 as uuidv4 } from 'uuid';
import type { SxProps } from '@mui/joy/styles/types';
import { Alert, Avatar, Box, Button, Card, CardContent, Checkbox, IconButton, Input, List, ListItem, ListItemButton, Textarea, Tooltip, Typography } from '@mui/joy';
@@ -10,17 +11,19 @@ import EditNoteIcon from '@mui/icons-material/EditNote';
import SearchIcon from '@mui/icons-material/Search';
import TelegramIcon from '@mui/icons-material/Telegram';
import { SystemPurposeData, SystemPurposeId, SystemPurposes } from '../../../../data';
import { bareBonesPromptMixer } from '~/modules/persona/pmix/pmix';
import { useChatLLM } from '~/modules/llms/store-llms';
import { DConversationId, useChatStore } from '~/common/state/store-chats';
import { DConversationId, DMessage, useChatStore } from '~/common/state/store-chats';
import { ExpanderControlledBox } from '~/common/components/ExpanderControlledBox';
import { lineHeightTextareaMd } from '~/common/app.theme';
import { navigateToPersonas } from '~/common/app.routes';
import { useChipBoolean } from '~/common/components/useChipBoolean';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { SystemPurposeData, SystemPurposeId, SystemPurposes } from '../../../../data';
import { YouTubeURLInput } from './YouTubeURLInput';
import { usePurposeStore } from './store-purposes';
@@ -116,6 +119,8 @@ export function PersonaSelector(props: { conversationId: DConversationId, runExa
const [searchQuery, setSearchQuery] = React.useState('');
const [filteredIDs, setFilteredIDs] = React.useState<SystemPurposeId[] | null>(null);
const [editMode, setEditMode] = React.useState(false);
const [isYouTubeTranscriberActive, setIsYouTubeTranscriberActive] = React.useState(false);
// external state
const showFinder = useUIPreferencesStore(state => state.showPersonaFinder);
@@ -153,11 +158,52 @@ export function PersonaSelector(props: { conversationId: DConversationId, runExa
// Handlers
// handlePurposeChanged also checks for the YouTube Transcriber persona
const handlePurposeChanged = React.useCallback((purposeId: SystemPurposeId | null) => {
if (purposeId && setSystemPurposeId)
setSystemPurposeId(props.conversationId, purposeId);
if (purposeId) {
if (purposeId === 'YouTubeTranscriber') {
// If the YouTube Transcriber tile is clicked, set the state accordingly
setIsYouTubeTranscriberActive(true);
} else {
setIsYouTubeTranscriberActive(false);
}
if (setSystemPurposeId) {
setSystemPurposeId(props.conversationId, purposeId);
}
}
}, [props.conversationId, setSystemPurposeId]);
React.useEffect(() => {
const isTranscriberActive = systemPurposeId === 'YouTubeTranscriber';
setIsYouTubeTranscriberActive(isTranscriberActive);
}, [systemPurposeId]);
// Appends a new assistant message to the current conversation (used by the YouTube Transcriber)
const handleAddMessage = (messageText: string) => {
// Retrieve the appendMessage action from the useChatStore
const { appendMessage } = useChatStore.getState();
const conversationId = props.conversationId;
// Create a new message object
const newMessage: DMessage = {
id: uuidv4(),
text: messageText,
sender: 'Bot',
avatar: null,
typing: false,
role: 'assistant' as 'assistant',
tokenCount: 0,
created: Date.now(),
updated: null,
};
// Append the new message to the conversation
appendMessage(conversationId, newMessage);
};
const handleCustomSystemMessageChange = React.useCallback((v: React.ChangeEvent<HTMLTextAreaElement>): void => {
// TODO: persist this change? Right now it's reset every time.
// maybe we should add a "save" button that stores it in state, persisting it between sessions
@@ -418,6 +464,17 @@ export function PersonaSelector(props: { conversationId: DConversationId, runExa
/>
)}
{/* [row -1] YouTube URL */}
{isYouTubeTranscriberActive && (
<YouTubeURLInput
onSubmit={(url) => handleAddMessage(url)}
isFetching={false}
sx={{
gridColumn: '1 / -1',
}}
/>
)}
</Box>
</Box>
@@ -0,0 +1,74 @@
import * as React from 'react';
import { Box, Button, Input } from '@mui/joy';
import YouTubeIcon from '@mui/icons-material/YouTube';
import type { SxProps } from '@mui/joy/styles/types';
import { useYouTubeTranscript, YTVideoTranscript } from '~/modules/youtube/useYouTubeTranscript';
interface YouTubeURLInputProps {
onSubmit: (transcript: string) => void;
isFetching: boolean;
sx?: SxProps;
}
export const YouTubeURLInput: React.FC<YouTubeURLInputProps> = ({ onSubmit, isFetching, sx }) => {
const [url, setUrl] = React.useState('');
const [submitFlag, setSubmitFlag] = React.useState(false);
// Function to extract video ID from URL
function extractVideoID(videoURL: string): string | null {
const regExp = /^(?:https?:\/\/)?(?:www\.)?(?:youtube\.com\/(?:watch\?v=|embed\/)|youtu\.be\/)([^#&?]*).*/;
const match = videoURL.match(regExp);
return (match && match[1]?.length == 11) ? match[1] : null;
}
const videoID = extractVideoID(url);
// Callback function to handle new transcript
const handleNewTranscript = (newTranscript: YTVideoTranscript) => {
onSubmit(newTranscript.transcript); // Pass the transcript text to the onSubmit handler
setSubmitFlag(false); // Reset submit flag after handling
};
const { transcript, isFetching: isTranscriptFetching, isError, error } = useYouTubeTranscript(videoID && submitFlag ? videoID : null, handleNewTranscript);
const handleChange = (event: React.ChangeEvent<HTMLInputElement>) => {
setUrl(event.target.value);
};
const handleSubmit = (event: React.FormEvent<HTMLFormElement>) => {
event.preventDefault(); // Prevent form from causing a page reload
setSubmitFlag(true); // Set flag to indicate a submit action
};
return (
<Box sx={{ mb: 1, ...sx }}>
<form onSubmit={handleSubmit}>
<Input
required
type='url'
fullWidth
disabled={isFetching || isTranscriptFetching}
variant='outlined'
placeholder='Enter YouTube Video URL'
value={url}
onChange={handleChange}
startDecorator={<YouTubeIcon sx={{ color: '#f00' }} />}
sx={{ mb: 1.5, backgroundColor: 'background.popup' }}
/>
<Button
type='submit'
variant='solid'
disabled={isFetching || isTranscriptFetching || !url}
loading={isFetching || isTranscriptFetching}
sx={{ minWidth: 140 }}
>
Get Transcript
</Button>
{isError && <div>Error fetching transcript. Please try again.</div>}
</form>
</Box>
);
};
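For reference, the extractVideoID regular expression above accepts the common YouTube URL shapes and returns the 11-character video ID (example values are illustrative):

// extractVideoID('https://www.youtube.com/watch?v=dQw4w9WgXcQ') -> 'dQw4w9WgXcQ'
// extractVideoID('https://youtu.be/dQw4w9WgXcQ')                -> 'dQw4w9WgXcQ'
// extractVideoID('https://www.youtube.com/embed/dQw4w9WgXcQ')   -> 'dQw4w9WgXcQ'
// extractVideoID('https://example.com/watch?v=123')             -> null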
@@ -18,7 +18,7 @@ export const usePurposeStore = create<PurposeStore>()(
(set) => ({
// default state
hiddenPurposeIDs: ['Developer', 'Designer'],
hiddenPurposeIDs: ['Developer', 'Designer', 'YouTubeTranscriber'],
toggleHiddenPurposeId: (purposeId: string) => {
set(state => {
@@ -37,14 +37,19 @@ export const usePurposeStore = create<PurposeStore>()(
/* versioning:
* 1: hide 'Developer' as 'DeveloperPreview' is best
* 2: add a hidden 'YouTubeTranscriber' purpose
*/
version: 1,
version: 2,
migrate: (state: any, fromVersion: number): PurposeStore => {
// 0 -> 1: rename 'enterToSend' to 'enterIsNewline' (flip the meaning)
if (state && fromVersion === 0)
if (!state.hiddenPurposeIDs.includes('Developer'))
state.hiddenPurposeIDs.push('Developer');
// 1 -> 2: add a hidden 'YouTubeTranscriber' purpose
if (state && fromVersion === 1)
if (!state.hiddenPurposeIDs.includes('YouTubeTranscriber'))
state.hiddenPurposeIDs.push('YouTubeTranscriber');
return state;
},
}),
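The hide-by-default change above follows the usual zustand persist versioning pattern: bump `version` and patch already-persisted state in `migrate`. A generic sketch of that pattern (store name, storage key, and shape are illustrative, not the project's):

import { create } from 'zustand';
import { persist } from 'zustand/middleware';

interface HiddenIdsStore {
  hiddenIDs: string[];
}

export const useHiddenIdsStore = create<HiddenIdsStore>()(
  persist(
    () => ({
      hiddenIDs: ['Developer', 'Designer', 'YouTubeTranscriber'],
    }),
    {
      name: 'app-hidden-ids', // hypothetical storage key
      version: 2,             // bump whenever the defaults or shape change
      migrate: (state: any, fromVersion: number) => {
        // 1 -> 2: patch already-persisted state so the new purpose is hidden there too
        if (state && fromVersion < 2 && !state.hiddenIDs.includes('YouTubeTranscriber'))
          state.hiddenIDs.push('YouTubeTranscriber');
        return state;
      },
    },
  ),
);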
+151
View File
@@ -0,0 +1,151 @@
import { getChatLLMId } from '~/modules/llms/store-llms';
import { updateHistoryForReplyTo } from '~/modules/aifn/replyto/replyTo';
import { ConversationsManager } from '~/common/chats/ConversationsManager';
import { createDMessage, DConversationId, DMessage, getConversationSystemPurposeId } from '~/common/state/store-chats';
import { getUXLabsHighPerformance } from '~/common/state/store-ux-labs';
import { extractChatCommand, findAllChatCommands } from '../commands/commands.registry';
import { getInstantAppChatPanesCount } from '../components/panes/usePanesManager';
import { runAssistantUpdatingState } from './chat-stream';
import { runBrowseGetPageUpdatingState } from './browse-load';
import { runImageGenerationUpdatingState } from './image-generate';
import { runReActUpdatingState } from './react-tangent';
import type { ChatModeId } from '../AppChat';
export async function _handleExecute(chatModeId: ChatModeId, conversationId: DConversationId, history: DMessage[]) {
// Handle missing conversation
if (!conversationId)
return 'err-no-conversation';
const chatLLMId = getChatLLMId();
// Update the system message from the active persona to the history
// NOTE: this does NOT call setMessages anymore (optimization). make sure to:
// 1. all the callers need to pass a new array
// 2. all the exit points need to call setMessages
const cHandler = ConversationsManager.getHandler(conversationId);
cHandler.inlineUpdatePurposeInHistory(history, chatLLMId || undefined);
// FIXME: shouldn't do this for all the code paths. The advantage of having it here (vs. Composer output only) is that re-executed history also gets updated
// TODO: move this to the server side after transferring metadata?
updateHistoryForReplyTo(history);
// Handle unconfigured
if (!chatLLMId || !chatModeId) {
// set the history (e.g. the updated system prompt and the user prompt) at least, see #523
cHandler.messagesReplace(history);
return !chatLLMId ? 'err-no-chatllm' : 'err-no-chatmode';
}
// Valid /commands are intercepted here, and override chat modes, generally for mechanics or sidebars
const lastMessage = history.length > 0 ? history[history.length - 1] : null;
if (lastMessage?.role === 'user') {
const chatCommand = extractChatCommand(lastMessage.text)[0];
if (chatCommand && chatCommand.type === 'cmd') {
switch (chatCommand.providerId) {
case 'ass-browse':
cHandler.messagesReplace(history); // show command
return await runBrowseGetPageUpdatingState(cHandler, chatCommand.params);
case 'ass-t2i':
cHandler.messagesReplace(history); // show command
return await runImageGenerationUpdatingState(cHandler, chatCommand.params);
case 'ass-react':
cHandler.messagesReplace(history); // show command
return await runReActUpdatingState(cHandler, chatCommand.params, chatLLMId);
case 'chat-alter':
// /clear
if (chatCommand.command === '/clear') {
if (chatCommand.params === 'all') {
cHandler.messagesReplace([]);
} else {
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Issue: this command requires the \'all\' parameter to confirm the operation.', undefined, 'issue', false);
}
return true;
}
// /assistant, /system
Object.assign(lastMessage, {
role: chatCommand.command.startsWith('/s') ? 'system' : chatCommand.command.startsWith('/a') ? 'assistant' : 'user',
sender: 'Bot',
text: chatCommand.params || '',
} satisfies Partial<DMessage>);
cHandler.messagesReplace(history);
return true;
case 'cmd-help':
const chatCommandsText = findAllChatCommands()
.map(cmd => ` - ${cmd.primary}` + (cmd.alternatives?.length ? ` (${cmd.alternatives.join(', ')})` : '') + `: ${cmd.description}`)
.join('\n');
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Available Chat Commands:\n' + chatCommandsText, undefined, 'help', false);
return true;
case 'mode-beam':
if (chatCommand.isError) {
cHandler.messagesReplace(history);
return false;
}
// remove '/beam ', as we want this to remain a plain user chat message
Object.assign(lastMessage, { text: chatCommand.params || '' });
cHandler.messagesReplace(history);
ConversationsManager.getHandler(conversationId).beamInvoke(history, [], null);
return true;
default:
cHandler.messagesReplace([...history, createDMessage('assistant', 'This command is not supported.')]);
return false;
}
}
}
// get the system purpose (note: we don't subscribe to it, or it would invalidate half the UI components)
if (!getConversationSystemPurposeId(conversationId)) {
cHandler.messagesReplace(history);
cHandler.messageAppendAssistant('Issue: no Persona selected.', undefined, 'issue', false);
return 'err-no-persona';
}
// synchronous long-duration tasks, which update the state as they go
switch (chatModeId) {
case 'generate-text':
cHandler.messagesReplace(history);
return await runAssistantUpdatingState(conversationId, history, chatLLMId, getUXLabsHighPerformance() ? 0 : getInstantAppChatPanesCount());
case 'generate-text-beam':
cHandler.messagesReplace(history);
cHandler.beamInvoke(history, [], null);
return true;
case 'append-user':
cHandler.messagesReplace(history);
return true;
case 'generate-image':
if (!lastMessage?.text) break;
// also add a 'fake' user message with the '/draw' command
cHandler.messagesReplace(history.map(message => (message.id !== lastMessage.id) ? message : {
...message,
text: `/draw ${lastMessage.text}`,
}));
return await runImageGenerationUpdatingState(cHandler, lastMessage.text);
case 'generate-react':
if (!lastMessage?.text) break;
cHandler.messagesReplace(history);
return await runReActUpdatingState(cHandler, lastMessage.text, chatLLMId);
}
// ISSUE: if we're here, we couldn't perform the job; at the very least, sync the history
console.log('Chat execute: issue running', chatModeId, conversationId, lastMessage);
cHandler.messagesReplace(history);
return false;
}
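A sketch of how a caller might interpret _handleExecute's widened return type, which now mixes 'err-*' string codes with booleans (the wrapper below is illustrative and not part of the diff):

// Illustrative wrapper: run an execution, report error codes, and return true only on success.
async function executeAndReport(chatModeId: ChatModeId, conversationId: DConversationId, history: DMessage[]): Promise<boolean> {
  const outcome = await _handleExecute(chatModeId, conversationId, history);
  if (typeof outcome === 'string')
    console.warn('Chat execute: not performed, reason:', outcome); // e.g. 'err-no-chatllm', 'err-no-persona'
  return outcome === true;
}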
+3 -1
@@ -6,7 +6,7 @@ import type { ConversationHandler } from '~/common/chats/ConversationHandler';
export const runBrowseGetPageUpdatingState = async (cHandler: ConversationHandler, url?: string) => {
if (!url) {
cHandler.messageAppendAssistant('Issue: no URL provided.', undefined, 'issue', false);
return;
return false;
}
// noinspection HttpUrlsUsage
@@ -16,8 +16,10 @@ export const runBrowseGetPageUpdatingState = async (cHandler: ConversationHandle
try {
const page = await callBrowseFetchPage(url);
cHandler.messageEdit(assistantMessageId, { text: page.content || 'Issue: page load did not produce an answer: no text found', typing: false }, true);
return true;
} catch (error: any) {
console.error(error);
cHandler.messageEdit(assistantMessageId, { text: 'Issue: browse did not produce an answer (error: ' + (error?.message || error?.toString() || 'unknown') + ').', typing: false }, true);
return false;
}
};
+4 -1
@@ -31,7 +31,7 @@ export async function runAssistantUpdatingState(conversationId: string, history:
cHandler.setAbortController(abortController);
// stream the assistant's messages
await streamAssistantMessage(
const messageStatus = await streamAssistantMessage(
assistantLlmId,
history.map((m): VChatMessageIn => ({ role: m.role, content: m.text })),
parallelViewCount,
@@ -41,6 +41,7 @@ export async function runAssistantUpdatingState(conversationId: string, history:
);
// clear to send, again
// FIXME: race condition?
cHandler.setAbortController(null);
if (autoTitleChat) {
@@ -50,6 +51,8 @@ export async function runAssistantUpdatingState(conversationId: string, history:
if (autoSuggestDiagrams || autoSuggestQuestions)
autoSuggestions(conversationId, assistantMessageId, autoSuggestDiagrams, autoSuggestQuestions);
return messageStatus.outcome === 'success';
}
type StreamMessageOutcome = 'success' | 'aborted' | 'errored';
+4 -2
@@ -10,7 +10,7 @@ import type { TextToImageProvider } from '~/common/components/useCapabilities';
export async function runImageGenerationUpdatingState(cHandler: ConversationHandler, imageText?: string) {
if (!imageText) {
cHandler.messageAppendAssistant('Issue: no image description provided.', undefined, 'issue', false);
return;
return false;
}
// Acquire the active TextToImageProvider
@@ -19,7 +19,7 @@ export async function runImageGenerationUpdatingState(cHandler: ConversationHand
t2iProvider = getActiveTextToImageProviderOrThrow();
} catch (error: any) {
cHandler.messageAppendAssistant(`[Issue] Sorry, I can't generate images right now. ${error?.message || error?.toString() || 'Unknown error'}.`, undefined, 'issue', false);
return;
return 'err-t2i-unconfigured';
}
// if the imageText ends with " xN" or " [N]" (where N is a number), then we'll generate N images
@@ -36,8 +36,10 @@ export async function runImageGenerationUpdatingState(cHandler: ConversationHand
try {
const imageUrls = await t2iGenerateImageOrThrow(t2iProvider, imageText, repeat);
cHandler.messageEdit(assistantMessageId, { text: imageUrls.join('\n'), typing: false }, true);
return true;
} catch (error: any) {
const errorMessage = error?.message || error?.toString() || 'Unknown error';
cHandler.messageEdit(assistantMessageId, { text: `[Issue] Sorry, I couldn't create an image for you. ${errorMessage}`, typing: false }, false);
return false;
}
}
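The comment about " xN" / " [N]" suffixes refers to a repeat-count parse that is not shown in this hunk; one possible implementation sketch (an assumption, not the project's parser):

// Sketch: extract a trailing repeat count of the form " x3" or " [3]" from an
// image prompt, clamped to a small maximum; defaults to a single image.
function parseRepeatSuffix(imageText: string, maxRepeat: number = 4): { prompt: string, repeat: number } {
  const match = imageText.match(/^(.*?)\s+(?:x(\d+)|\[(\d+)\])$/);
  if (!match) return { prompt: imageText, repeat: 1 };
  const count = parseInt(match[2] || match[3], 10);
  return { prompt: match[1], repeat: Math.min(Math.max(count, 1), maxRepeat) };
}

// parseRepeatSuffix('a red fox x3')  -> { prompt: 'a red fox', repeat: 3 }
// parseRepeatSuffix('a red fox [2]') -> { prompt: 'a red fox', repeat: 2 }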
+4 -2
@@ -15,11 +15,11 @@ const EPHEMERAL_DELETION_DELAY = 5 * 1000;
export async function runReActUpdatingState(cHandler: ConversationHandler, question: string | undefined, assistantLlmId: DLLMId) {
if (!question) {
cHandler.messageAppendAssistant('Issue: no question provided.', undefined, 'issue', false);
return;
return false;
}
// create a blank and 'typing' message for the assistant - to be filled when we're done
const assistantModelLabel = 'react-' + assistantLlmId.slice(4, 7); // HACK: this is used to change the Avatar animation
const assistantModelLabel = 'react-' + assistantLlmId; //.slice(4, 7); // HACK: this is used to change the Avatar animation
const assistantMessageId = cHandler.messageAppendAssistant(STREAM_TEXT_INDICATOR, undefined, assistantModelLabel, true);
const { enableReactTool: enableBrowse } = useBrowseStore.getState();
@@ -42,9 +42,11 @@ export async function runReActUpdatingState(cHandler: ConversationHandler, quest
cHandler.messageEdit(assistantMessageId, { text: reactResult, typing: false }, false);
setTimeout(() => eHandler.delete(), EPHEMERAL_DELETION_DELAY);
return true;
} catch (error: any) {
console.error(error);
logToEphemeral(ephemeralText + `\nIssue: ${error || 'unknown'}`);
cHandler.messageEdit(assistantMessageId, { text: 'Issue: ReAct did not produce an answer.', typing: false }, false);
return false;
}
}
+22 -19
@@ -2,7 +2,7 @@ import * as React from 'react';
import NextImage from 'next/image';
import TimeAgo from 'react-timeago';
import { AspectRatio, Box, Button, Card, CardContent, CardOverflow, Container, Grid, IconButton, Typography } from '@mui/joy';
import { AspectRatio, Box, Button, Card, CardContent, CardOverflow, Container, Grid, Typography } from '@mui/joy';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import LaunchIcon from '@mui/icons-material/Launch';
@@ -17,7 +17,8 @@ import { beamNewsCallout } from './beam.data';
// number of news items to show by default, before the expander
const DEFAULT_NEWS_COUNT = 4;
const NEWS_INITIAL_COUNT = 3;
const NEWS_LOAD_STEP = 2;
export const newsRoadmapCallout =
@@ -54,12 +55,15 @@ export const newsRoadmapCallout =
export function AppNews() {
// state
const [lastNewsIdx, setLastNewsIdx] = React.useState<number>(DEFAULT_NEWS_COUNT - 1);
const [lastNewsIdx, setLastNewsIdx] = React.useState<number>(NEWS_INITIAL_COUNT - 1);
// news selection
const news = NewsItems.filter((_, idx) => idx <= lastNewsIdx);
const firstNews = news[0] ?? null;
// show expander
const canExpand = news.length < NewsItems.length;
return (
<Box sx={{
@@ -103,13 +107,11 @@ export function AppNews() {
<Container disableGutters maxWidth='sm'>
{news?.map((ni, idx) => {
// const firstCard = idx === 0;
const hasCardAfter = news.length < NewsItems.length;
const showExpander = hasCardAfter && (idx === news.length - 1);
const addPadding = false; //!firstCard; // || showExpander;
return <React.Fragment key={idx}>
{/* Inject the Beam item here*/}
{idx === 0 && (
{idx === 2 && (
<Box sx={{ mb: 3 }}>
{beamNewsCallout}
</Box>
@@ -150,19 +152,6 @@ export function AppNews() {
</ul>
)}
{showExpander && (
<IconButton
variant='solid'
onClick={() => setLastNewsIdx(idx + 1)}
sx={{
position: 'absolute', right: 0, bottom: 0, mr: -1, mb: -1,
// backgroundColor: 'background.surface',
borderRadius: '50%',
}}
>
<ExpandMoreIcon />
</IconButton>
)}
</CardContent>
{!!ni.versionCoverImage && (
@@ -181,6 +170,7 @@ export function AppNews() {
</AspectRatio>
</CardOverflow>
)}
</Card>
{/* Inject the roadmap item here*/}
@@ -192,6 +182,19 @@ export function AppNews() {
</React.Fragment>;
})}
{canExpand && (
<Button
fullWidth
variant='soft'
color='neutral'
onClick={() => setLastNewsIdx(index => index + NEWS_LOAD_STEP)}
endDecorator={<ExpandMoreIcon />}
>
Previous News
</Button>
)}
</Container>
{/*<Typography sx={{ textAlign: 'center' }}>*/}
+1 -1
@@ -14,7 +14,7 @@ export const beamNewsCallout =
<Card variant='solid' invertedColors>
<CardContent sx={{ gap: 2 }}>
<Typography level='title-lg'>
Beam - just launched in 1.15
Beam - launched in 1.15
</Typography>
<Typography level='body-sm'>
Beam is a world-first, multi-model AI chat modality that accelerates the discovery of superior solutions by leveraging the collective strengths of diverse LLMs.
+33 -13
@@ -17,8 +17,12 @@ import { Link } from '~/common/components/Link';
import { clientUtmSource } from '~/common/util/pwaUtils';
import { platformAwareKeystrokes } from '~/common/components/KeyStroke';
import { beamBlogUrl } from './beam.data';
// Cover Images
// A landscape image of a capybara made entirely of clear, translucent crystal, wearing oversized black sunglasses, sitting at a sleek, minimalist desk. The desk is bathed in a soft, ethereal light emanating from within the capybara, symbolizing clarity and transparency. The capybara is typing on a futuristic, holographic keyboard, with floating code snippets and diagrams surrounding it, illustrating an improved developer experience and Auto-Diagrams feature. The background is a clean, white space with subtle, geometric patterns. Close-up photography style with a bokeh effect.
import coverV116 from '../../../public/images/covers/release-cover-v1.16.0.png';
// (not exactly) Imagine a futuristic, holographically bounded space. Inside this space, four capybaras stand. Three of them are in various stages of materialization, their forms made up of thousands of tiny, vibrant particles of electric blues, purples, and greens. These particles represent the merging of different intelligent inputs, symbolizing the concept of 'Beaming'. Positioned slightly towards the center and ahead of the others, the fourth capybara is fully materialized and composed of shimmering golden cotton candy, representing the optimal solution the 'Beam' feature seeks to achieve. The golden capybara gazes forward confidently, embodying a target achieved. Illuminated grid lines softly glow on the floor and walls of the setting, amplifying the futuristic aspect. In front of the golden capybara, floating, holographic interfaces depict complex networks of points and lines symbolizing the solution space 'Beaming' explores. The capybara interacts with these interfaces, implying the user's ability to control and navigate towards the best outcomes.
import coverV115 from '../../../public/images/covers/release-cover-v1.15.0.png';
// An image of a capybara sculpted entirely from iridescent blue cotton candy, gazing into a holographic galaxy of floating AI model icons (representing various AI models like Perplexity, Groq, etc.). The capybara is wearing a lightweight, futuristic headset, and its paws are gesturing as if orchestrating the movement of the models in the galaxy. The backdrop is minimalist, with occasional bursts of neon light beams, creating a sense of depth and wonder. Close-up photography, bokeh effect, with a dark but vibrant background to make the colors pop.
@@ -27,7 +31,6 @@ import coverV114 from '../../../public/images/covers/release-cover-v1.14.0.png';
import coverV113 from '../../../public/images/covers/release-cover-v1.13.0.png';
// An image of a capybara sculpted entirely from black cotton candy, set against a minimalist backdrop with splashes of bright, contrasting sparkles. The capybara is calling on a 3D origami old-school pink telephone and the camera is zooming on the telephone. Close up photography, bokeh, white background.
import coverV112 from '../../../public/images/covers/release-cover-v1.12.0.png';
import { beamBlogUrl } from './beam.data';
interface NewsItem {
@@ -57,7 +60,24 @@ export const NewsItems: NewsItem[] = [
]
}*/
{
versionCode: '1.15.1',
versionCode: '1.16',
versionName: 'Crystal Clear',
versionDate: new Date('2024-05-09T00:00:00Z'),
versionCoverImage: coverV116,
items: [
{ text: <><B href={beamBlogUrl} wow>Beam</B> core and UX improvements based on user feedback</>, issue: 470, icon: ChatBeamIcon },
{ text: <>Chat <B>Cost estimation</B> with supported models* 💰</> },
{ text: <>Major <B>Auto-Diagrams</B> enhancements</> },
{ text: <>Save/load chat files with Ctrl+S / O</>, issue: 466 },
{ text: <><B issue={500}>YouTube Transcriber</B> persona: chat with videos</>, issue: 500 },
{ text: <>Improved <B issue={508}>formula render</B>, dark-mode diagrams</>, issue: 508 },
{ text: <>More: <B issue={517}>code soft-wrap</B>, selection toolbar, <B issue={507}>3x faster</B> on Apple silicon</>, issue: 507 },
{ text: <>Updated <B>Anthropic</B>*, <B>Groq</B>, <B>Ollama</B>, <B>OpenAI</B>*, <B>OpenRouter</B>*, and <B>Perplexity</B></> },
{ text: <>Developers: update LLMs data structures</>, dev: true },
],
},
{
versionCode: '1.15',
versionName: 'Beam',
versionDate: new Date('2024-04-10T08:00:00Z'),
versionCoverImage: coverV115,
@@ -73,7 +93,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.14.1',
versionCode: '1.14',
versionName: 'Modelmorphic',
versionCoverImage: coverV114,
versionDate: new Date('2024-03-07T08:00:00Z'),
@@ -92,7 +112,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.13.0',
versionCode: '1.13',
versionName: 'Multi + Mind',
versionMoji: '🧠🔀',
versionDate: new Date('2024-02-08T07:47:00Z'),
@@ -108,7 +128,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.12.0',
versionCode: '1.12',
versionName: 'AGI Hotline',
versionMoji: '✨🗣️',
versionDate: new Date('2024-01-26T12:30:00Z'),
@@ -127,7 +147,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.11.0',
versionCode: '1.11',
versionName: 'Singularity',
versionMoji: '🌌🌠',
versionDate: new Date('2024-01-16T06:30:00Z'),
@@ -141,7 +161,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.10.0',
versionCode: '1.10',
versionName: 'The Year of AGI',
// versionMoji: '🎊✨',
versionDate: new Date('2024-01-06T08:00:00Z'),
@@ -155,7 +175,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.9.0',
versionCode: '1.9',
versionName: 'Creative Horizons',
// versionMoji: '🎨🌌',
versionDate: new Date('2023-12-28T22:30:00Z'),
@@ -170,7 +190,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.8.0',
versionCode: '1.8',
versionName: 'To The Moon And Back',
// versionMoji: '🚀🌕🔙❤️',
versionDate: new Date('2023-12-20T09:30:00Z'),
@@ -187,7 +207,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.7.0',
versionCode: '1.7',
versionName: 'Attachment Theory',
// versionDate: new Date('2023-12-11T06:00:00Z'), // 1.7.3
versionDate: new Date('2023-12-10T12:00:00Z'), // 1.7.0
@@ -203,7 +223,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.6.0',
versionCode: '1.6',
versionName: 'Surf\'s Up',
versionDate: new Date('2023-11-28T21:00:00Z'),
items: [
@@ -218,7 +238,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.5.0',
versionCode: '1.5',
versionName: 'Loaded!',
versionDate: new Date('2023-11-19T21:00:00Z'),
items: [
@@ -234,7 +254,7 @@ export const NewsItems: NewsItem[] = [
],
},
{
versionCode: '1.4.0',
versionCode: '1.4',
items: [
{ text: <><B>Share and clone</B> conversations, with public links</> },
{ text: <><B code='/docs/config-azure-openai.md'>Azure</B> models, incl. gpt-4-32k</> },
+20 -9
@@ -1,6 +1,6 @@
import * as React from 'react';
import { Box, Button, IconButton, ListItemButton, ListItemDecorator, Sheet, Tooltip, Typography } from '@mui/joy';
import { Box, Button, IconButton, ListItemDecorator, Sheet, Tooltip } from '@mui/joy';
import CheckBoxIcon from '@mui/icons-material/CheckBox';
import CheckBoxOutlineBlankIcon from '@mui/icons-material/CheckBoxOutlineBlank';
import DeleteOutlineIcon from '@mui/icons-material/DeleteOutline';
@@ -136,17 +136,28 @@ export function CreatorDrawer(props: {
</Sheet>
) : (
// Create Button
<ListItemButton
<Button
variant={props.selectedSimplePersonaId ? 'plain' : 'soft'}
onClick={handleSimplePersonaUnselect}
sx={{
m: 2,
// ...PageDrawerTallItemSx,
justifyContent: 'flex-start',
padding: '0px 0.75rem',
// style
border: '1px solid',
borderColor: 'neutral.outlinedBorder',
borderRadius: 'sm',
'--ListItemDecorator-size': 'calc(2.5rem - 1px)', // compensate for the border
}}
>
<ListItemDecorator>
<Diversity2Icon />
</ListItemDecorator>
<Typography level='title-sm' sx={!props.selectedSimplePersonaId ? { fontWeight: 'lg' } : undefined}>
Create
</Typography>
</ListItemButton>
<ListItemDecorator><Diversity2Icon /></ListItemDecorator>
{/*<Typography level='title-sm' sx={!props.selectedSimplePersonaId ? { fontWeight: 'lg' } : undefined}>*/}
Create
{/*</Typography>*/}
</Button>
)}
{/* Personas [] */}
+2 -2
@@ -19,8 +19,8 @@ const shortcutsMd = platformAwareKeystrokes(`
| Ctrl + Shift + V | Attach clipboard (better than Ctrl + V) |
| Ctrl + M | Microphone (voice typing) |
| **Chats** | |
| Ctrl + O | Open Chat ... |
| Ctrl + S | Save Chat ... |
| Ctrl + O | Open Chat File ... |
| Ctrl + S | Save Chat File ... |
| Ctrl + Alt + N | **New** chat |
| Ctrl + Alt + X | **Reset** chat |
| Ctrl + Alt + D | **Delete** chat |
+22 -13
@@ -2,11 +2,11 @@ import * as React from 'react';
import { FormControl, Typography } from '@mui/joy';
import AddAPhotoIcon from '@mui/icons-material/AddAPhoto';
import LocalAtmOutlinedIcon from '@mui/icons-material/LocalAtmOutlined';
import ScreenshotMonitorIcon from '@mui/icons-material/ScreenshotMonitor';
import SpeedIcon from '@mui/icons-material/Speed';
import TitleIcon from '@mui/icons-material/Title';
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { FormSwitchControl } from '~/common/components/forms/FormSwitchControl';
import { Link } from '~/common/components/Link';
@@ -24,48 +24,57 @@ export function UxLabsSettings() {
const isMobile = useIsMobile();
const {
labsAttachScreenCapture, setLabsAttachScreenCapture,
labsBeam, setLabsBeam,
labsCameraDesktop, setLabsCameraDesktop,
labsChatBarAlt, setLabsChatBarAlt,
labsHighPerformance, setLabsHighPerformance,
labsShowCost, setLabsShowCost,
} = useUXLabsStore();
return <>
<FormSwitchControl
title={<><ChatBeamIcon color={labsBeam ? 'primary' : undefined} sx={{ mr: 0.25 }} />Chat Beam</>} description={'v1.15 · ' + (labsBeam ? 'Active' : 'Off')}
checked={labsBeam} onChange={setLabsBeam}
/>
{/* 'v1.15 · ' + .. */}
<FormSwitchControl
title={<><SpeedIcon color={labsHighPerformance ? 'primary' : undefined} sx={{ mr: 0.25 }} />Performance</>} description={'v1.14 · ' + (labsHighPerformance ? 'Unlocked' : 'Default')}
title={<><SpeedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Performance</>} description={labsHighPerformance ? 'Unlocked' : 'Default'}
checked={labsHighPerformance} onChange={setLabsHighPerformance}
/>
{DEV_MODE_SETTINGS && <FormSwitchControl
title={<><TitleIcon color={labsChatBarAlt ? 'primary' : undefined} sx={{ mr: 0.25 }} />Chat Title</>} description={'v1.14 · ' + (labsChatBarAlt === 'title' ? 'Show Title' : 'Show Models')}
title={<><TitleIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Chat Title</>} description={labsChatBarAlt === 'title' ? 'Show Title' : 'Show Models'}
checked={labsChatBarAlt === 'title'} onChange={(on) => setLabsChatBarAlt(on ? 'title' : false)}
/>}
{!isMobile && <FormSwitchControl
title={<><ScreenshotMonitorIcon color={labsAttachScreenCapture ? 'primary' : undefined} sx={{ mr: 0.25 }} /> Screen Capture</>} description={'v1.13 · ' + (labsAttachScreenCapture ? 'Enabled' : 'Disabled')}
title={<><ScreenshotMonitorIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} /> Screen Capture</>} description={labsAttachScreenCapture ? 'Enabled' : 'Disabled'}
checked={labsAttachScreenCapture} onChange={setLabsAttachScreenCapture}
/>}
{!isMobile && <FormSwitchControl
title={<><AddAPhotoIcon color={labsCameraDesktop ? 'primary' : undefined} sx={{ mr: 0.25 }} /> Webcam</>} description={/*'v1.8 · ' +*/ (labsCameraDesktop ? 'Enabled' : 'Disabled')}
title={<><AddAPhotoIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} /> Webcam Capture</>} description={/*'v1.8 · ' +*/ (labsCameraDesktop ? 'Enabled' : 'Disabled')}
checked={labsCameraDesktop} onChange={setLabsCameraDesktop}
/>}
<FormSwitchControl
title={<><LocalAtmOutlinedIcon sx={{ fontSize: 'lg', mr: 0.5, mb: 0.25 }} />Cost of messages</>} description={labsShowCost ? 'Show when available' : 'Disabled'}
checked={labsShowCost} onChange={setLabsShowCost}
/>
{/*
Other Graduated (removed or backlog):
- <Link href='https://github.com/enricoros/big-AGI/issues/359' target='_blank'>Draw App</Link>
- Text Tools: dynamically shown where applicable (e.g. Diff)
- Chat Mode: follow-ups; moved to Chat Advanced UI
*/}
<FormControl orientation='horizontal' sx={{ justifyContent: 'space-between', alignItems: 'center' }}>
<FormLabelStart title='Graduated' description='Ex-labs' />
<Typography level='body-xs'>
<Link href='https://github.com/enricoros/big-AGI/issues/208' target='_blank'>Split Chats</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/359' target='_blank'>Draw App</Link>
<Link href='https://big-agi.com/blog/beam-multi-model-ai-reasoning' target='_blank'>Beam</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/208' target='_blank'>Split Chats</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/354' target='_blank'>Call AGI</Link>
{' · '}<Link href='https://github.com/enricoros/big-AGI/issues/282' target='_blank'>Persona Creator</Link>
{' · '}<Link href='https://github.com/enricoros/big-agi/issues/192' target='_blank'>Auto Diagrams</Link>
{' · '}Imagine · Relative chat size · Text Tools · LLM Overheat
{' · '}Imagine · Chat Search · Text Tools · LLM Overheat
</Typography>
</FormControl>
+4
@@ -169,6 +169,7 @@ export function adjustContentScaling(scaling: ContentScaling, offset?: number) {
interface ContentScalingOptions {
// BlocksRenderer
blockCodeFontSize: string;
blockCodeMarginY: number;
blockFontSize: string;
blockImageGap: number;
blockLineHeight: string | number;
@@ -182,6 +183,7 @@ interface ContentScalingOptions {
export const themeScalingMap: Record<ContentScaling, ContentScalingOptions> = {
xs: {
blockCodeFontSize: '0.75rem',
blockCodeMarginY: 0.5,
blockFontSize: 'xs',
blockImageGap: 1,
blockLineHeight: 1.666667,
@@ -191,6 +193,7 @@ export const themeScalingMap: Record<ContentScaling, ContentScalingOptions> = {
},
sm: {
blockCodeFontSize: '0.75rem',
blockCodeMarginY: 1,
blockFontSize: 'sm',
blockImageGap: 1.5,
blockLineHeight: 1.714286,
@@ -200,6 +203,7 @@ export const themeScalingMap: Record<ContentScaling, ContentScalingOptions> = {
},
md: {
blockCodeFontSize: '0.875rem',
blockCodeMarginY: 1.5,
blockFontSize: 'md',
blockImageGap: 2,
blockLineHeight: 1.75,
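The new blockCodeMarginY presumably feeds the code-block spacing for the current ContentScaling; an illustrative fragment of such a lookup (assumes themeScalingMap and ContentScaling are in scope from the module above):

// Illustrative lookup: derive code-block sx values from the current content scaling
const contentScaling: ContentScaling = 'md';
const codeBlockSx = {
  fontSize: themeScalingMap[contentScaling].blockCodeFontSize, // '0.875rem'
  my: themeScalingMap[contentScaling].blockCodeMarginY,        // 1.5
};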
+11 -4
@@ -8,6 +8,7 @@ import { ChatActions, createDMessage, DConversationId, DMessage, getConversation
import { createBeamVanillaStore } from '~/modules/beam/store-beam-vanilla';
import { EphemeralHandler, EphemeralsStore } from './EphemeralsStore';
import { createChatOverlayVanillaStore } from './store-chat-overlay-vanilla';
/**
@@ -21,6 +22,7 @@ export class ConversationHandler {
private readonly conversationId: DConversationId;
private readonly beamStore = createBeamVanillaStore();
private readonly overlayStore = createChatOverlayVanillaStore();
readonly ephemeralsStore: EphemeralsStore = new EphemeralsStore();
@@ -84,7 +86,7 @@ export class ConversationHandler {
// if zeroing the messages, also terminate an active beam
if (!messages.length)
this.beamStore.getState().terminate();
this.beamStore.getState().terminateKeepingSettings();
}
@@ -100,7 +102,7 @@ export class ConversationHandler {
* @param destReplaceMessageId If set, the output will replace the message with this id, otherwise it will append to the history
*/
beamInvoke(viewHistory: Readonly<DMessage[]>, importMessages: DMessage[], destReplaceMessageId: DMessage['id'] | null): void {
const { open: beamOpen, importRays: beamImportRays, terminate: beamTerminate } = this.beamStore.getState();
const { open: beamOpen, importRays: beamImportRays, terminateKeepingSettings } = this.beamStore.getState();
const onBeamSuccess = (messageText: string, llmId: DLLMId) => {
// set output when going back to the chat
@@ -116,11 +118,11 @@ export class ConversationHandler {
}
// close beam
this.beamStore.getState().terminate();
terminateKeepingSettings();
};
beamOpen(viewHistory, useModelsStore.getState().chatLLMId, onBeamSuccess);
importMessages.length && beamImportRays(importMessages);
importMessages.length && beamImportRays(importMessages, useModelsStore.getState().chatLLMId);
}
@@ -130,4 +132,9 @@ export class ConversationHandler {
return new EphemeralHandler(title, initialText, this.ephemeralsStore);
}
// Overlay Store
getOverlayStore = () => this.overlayStore;
}
@@ -0,0 +1,54 @@
import { StoreApi, useStore } from 'zustand';
import { createStore, StateCreator } from 'zustand/vanilla';
/// Composer Slice: per-chat composer overlay state ///
interface ComposerOverlayState {
// if set, this is the 'reply to' mode text
replyToText: string | null;
}
const initComposerOverlayStateSlice = (): ComposerOverlayState => ({
replyToText: null,
});
interface ComposerOverlayStore extends ComposerOverlayState {
setReplyToText: (text: string | null) => void;
}
const createComposerOverlayStoreSlice: StateCreator<ComposerOverlayStore, [], [], ComposerOverlayStore> = (_set, _get) => ({
// init state
...initComposerOverlayStateSlice(),
// actions
setReplyToText: (text: string | null) => _set({ replyToText: text }),
});
/// Chat Overlay Store: per-chat overlay state ///
// Note: at this time there are numerous overlay stores, including beam (vanilla), ephemerals (EventTarget), and this one.
export type OverlayStore = ComposerOverlayStore;
export type OverlayStoreApi = Readonly<StoreApi<OverlayStore>>;
export const createChatOverlayVanillaStore = () => createStore<OverlayStore>()((...a) => ({
...createComposerOverlayStoreSlice(...a),
}));
const fallbackOverlayStore = createChatOverlayVanillaStore();
export const useChatOverlayStore = <T, >(vanillaStore: OverlayStoreApi | null, selector: (store: OverlayStore) => T): T =>
useStore(vanillaStore || fallbackOverlayStore, selector);
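A usage sketch tying the new per-chat overlay store to the ConversationHandler.getOverlayStore() accessor added above (the component name is hypothetical and the relative import path is assumed):

import * as React from 'react';
import { ConversationsManager } from '~/common/chats/ConversationsManager';
import { useChatOverlayStore } from './store-chat-overlay-vanilla';

export function ComposerReplyToPreview(props: { conversationId: string | null }) {
  // per-chat overlay store, or null to fall back to the module-level default store
  const overlayStore = props.conversationId
    ? ConversationsManager.getHandler(props.conversationId).getOverlayStore()
    : null;
  const replyToText = useChatOverlayStore(overlayStore, state => state.replyToText);
  return replyToText ? <div>Replying to: {replyToText}</div> : null;
}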
+82 -55
@@ -3,18 +3,24 @@ import { sendGAEvent } from '@next/third-parties/google';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Button, Step, stepClasses, StepIndicator, stepIndicatorClasses, Stepper, Typography } from '@mui/joy';
import ArrowBackRoundedIcon from '@mui/icons-material/ArrowBackRounded';
import ArrowForwardRoundedIcon from '@mui/icons-material/ArrowForwardRounded';
import CheckRoundedIcon from '@mui/icons-material/CheckRounded';
import KeyboardArrowDownRoundedIcon from '@mui/icons-material/KeyboardArrowDownRounded';
import { ChatMessageMemo } from '../../apps/chat/components/message/ChatMessage';
import { BlocksRenderer } from '~/modules/blocks/BlocksRenderer';
import { AgiSquircleIcon } from '~/common/components/icons/AgiSquircleIcon';
import { ChatBeamIcon } from '~/common/components/icons/ChatBeamIcon';
import { GlobalShortcutItem, ShortcutKeyName, useGlobalShortcuts } from '~/common/components/useGlobalShortcut';
import { createDMessage } from '~/common/state/store-chats';
import { hasGoogleAnalytics } from '~/common/components/GoogleAnalytics';
import { useIsMobile } from '~/common/components/useMatchMedia';
import { animationTextShadowLimey } from '~/common/util/animUtils';
// configuration
const colorButtons = 'neutral' as const;
const colorStepper = 'neutral' as const;
// Steps - the top stepper
@@ -27,13 +33,13 @@ interface ExplainerStep {
const stepSequenceSx: SxProps = {
// width: '100%',
[`& .${stepClasses.completed}::after`]: {
bgcolor: 'primary.500',
bgcolor: `${colorStepper}.500`,
},
[`& .${stepClasses.active} .${stepIndicatorClasses.root}`]: {
borderColor: 'primary.500',
borderColor: `${colorStepper}.500`,
},
[`& .${stepClasses.root}:has(+ .${stepClasses.active})::after`]: {
color: 'primary.500',
color: `${colorStepper}.500`,
backgroundColor: 'transparent',
backgroundImage: 'radial-gradient(currentColor 2px, transparent 2px)',
backgroundSize: '7px 7px',
@@ -41,6 +47,18 @@ const stepSequenceSx: SxProps = {
},
};
const buttonBaseSx: SxProps = {
justifyContent: 'space-between',
minHeight: '2.5rem',
minWidth: 120,
};
const buttonNextSx: SxProps = {
...buttonBaseSx,
boxShadow: `0 8px 24px -4px rgb(var(--joy-palette-${colorButtons}-mainChannel) / 20%)`,
minWidth: 180,
};
function AllStepsStepper(props: {
steps: ExplainerStep[],
@@ -59,10 +77,14 @@ function AllStepsStepper(props: {
orientation='vertical'
completed={completed}
active={active}
onClick={() => props.onStepClicked(stepIndex)}
indicator={
<StepIndicator variant={(completed || active) ? 'solid' : 'outlined'} color='primary'>
{completed ? <CheckRoundedIcon /> : active ? <KeyboardArrowDownRoundedIcon /> : undefined}
<StepIndicator
variant={(completed || active) ? 'solid' : 'outlined'}
color={colorStepper}
onClick={() => props.onStepClicked(stepIndex)}
sx={{ cursor: 'pointer' }}
>
{completed ? <CheckRoundedIcon sx={{ fontSize: 'md' }} /> : active ? <KeyboardArrowDownRoundedIcon sx={{ fontSize: 'lg' }} /> : undefined}
</StepIndicator>
}
>
@@ -95,6 +117,7 @@ export function ExplainerCarousel(props: {
explainerId: string,
steps: ExplainerPage[],
footer?: React.ReactNode,
noStepper?: boolean,
onFinished: () => any,
}) {
@@ -106,15 +129,13 @@ export function ExplainerCarousel(props: {
// derived state
const { onFinished } = props;
const isFirstPage = stepIndex === 0;
const isLastPage = stepIndex === props.steps.length - 1;
const activeStep = props.steps[stepIndex] ?? null;
// handlers
const mdText = activeStep?.mdContent ?? null;
const mdMessage = React.useMemo(() => {
return mdText ? createDMessage('assistant', mdText) : null;
}, [mdText]);
const handlePrevPage = React.useCallback(() => {
setStepIndex(step => step > 0 ? step - 1 : step);
@@ -161,7 +182,7 @@ export function ExplainerCarousel(props: {
// content
display: 'flex',
flexDirection: 'column',
justifyContent: 'space-around',
justifyContent: 'space-evenly',
gap: 2,
}}>
@@ -171,85 +192,91 @@ export function ExplainerCarousel(props: {
level='h1'
component='h1'
sx={{
fontSize: isMobile ? '2rem' : '2.75rem',
fontSize: isMobile ? '2rem' : '2.5rem',
fontWeight: 'md',
textAlign: 'center',
whiteSpace: 'balance',
}}>
{activeStep?.titlePrefix}{' '}
{!!activeStep?.titleSquircle && <AgiSquircleIcon inverted sx={{ color: 'white', fontSize: isMobile ? '1.55rem' : '2.04rem', borderRadius: 'md' }} />}
{!!activeStep?.titleSquircle && '-'}
{!!activeStep?.titleSpark && <Box component='span' sx={{ fontWeight: 'lg', /*animation: `${animationTextShadowLimey} 15s linear infinite`*/ color: 'primary.softColor' }}>
{!!activeStep?.titleSpark && <Box component='span' sx={{
fontWeight: 'lg',
color: 'neutral.softColor',
animation: `${animationTextShadowLimey} 5s infinite`,
/*, animation: `${animationTextShadowLimey} 15s linear infinite`*/
}}>
{activeStep.titleSpark}
</Box>}{activeStep?.titleSuffix}
</Typography>
{/* All Steps */}
<Box>
<AllStepsStepper
steps={props.steps}
activeIndex={stepIndex}
isMobile={isMobile}
onStepClicked={setStepIndex}
/>
</Box>
{/* Page Message */}
{!!mdMessage && (
<ChatMessageMemo
message={mdMessage}
fitScreen={isMobile}
showAvatar={false}
adjustContentScaling={isMobile ? 0 : undefined}
sx={{
minHeight: '19rem', // 256px
py: 2,
border: 'none',
bordreRadius: 0,
borderRadius: 'xl',
// boxShadow: '0 8px 24px -4px rgb(var(--joy-palette-primary-darkChannel) / 0.12)',
<Box sx={{ display: 'flex', flexDirection: 'column', alignItems: 'center', gap: 1 }}>
{/* Main Card with the markdown body */}
{!!mdText && (
<Box sx={{
minHeight: '24rem',
backgroundColor: 'background.popup',
borderRadius: 'lg',
boxShadow: '0 60px 32px -60px rgb(var(--joy-palette-primary-darkChannel) / 0.14)',
mb: 2,
px: { xs: 1, md: 2 },
py: 2,
// customize the embedded GitHub Markdown for transparent images
['.markdown-body img']: {
'--color-canvas-default': 'transparent!important',
},
}}
/>
)}
}}>
<BlocksRenderer
text={mdText}
fromRole='assistant'
contentScaling='md'
fitScreen={isMobile}
renderTextAsMarkdown
/>
</Box>
)}
{/* Buttons */}
<Box sx={{ display: 'flex', flexDirection: 'column', alignItems: 'center', gap: 1 }}>
{/* Advance Button */}
<Button
variant='solid'
size='lg'
endDecorator={isLastPage ? <ChatBeamIcon /> : <ArrowForwardRoundedIcon />}
color={colorButtons}
onClick={handleNextPage}
sx={{
boxShadow: '0 8px 24px -4px rgb(var(--joy-palette-primary-mainChannel) / 20%)',
minWidth: 180,
}}
endDecorator={isLastPage ? <ChatBeamIcon /> : <ArrowForwardRoundedIcon />}
sx={buttonNextSx}
>
{isLastPage ? 'Start' : 'Next'}
</Button>
{/* Back Button */}
<Button
variant='outlined'
color='neutral'
variant='plain'
color={colorButtons}
disabled={isFirstPage}
onClick={handlePrevPage}
sx={{
minWidth: 140,
}}
startDecorator={<ArrowBackRoundedIcon />}
sx={buttonBaseSx}
>
Previous
</Button>
</Box>
{/* All Steps */}
{props.noStepper ? null : (
<AllStepsStepper
steps={props.steps}
activeIndex={stepIndex}
isMobile={isMobile}
onStepClicked={setStepIndex}
/>
)}
{/* Final words of wisdom (also perfect for centering the other components) */}
{props.footer}
+2
View File
@@ -11,6 +11,7 @@ export const GoodTooltip = (props: {
title: React.ReactNode,
placement?: 'top' | 'bottom' | 'top-start',
isError?: boolean, isWarning?: boolean,
arrow?: boolean,
usePlain?: boolean,
children: React.JSX.Element,
sx?: SxProps
@@ -19,6 +20,7 @@ export const GoodTooltip = (props: {
title={props.title}
placement={props.placement}
disableInteractive
arrow={props.arrow}
variant={(props.isError || props.isWarning) ? 'soft' : props.usePlain ? 'plain' : undefined}
color={props.isError ? 'danger' : props.isWarning ? 'warning' : undefined}
sx={{
@@ -39,6 +39,7 @@ const FormLabelStartBase = (props: {
{!!props.description && (
<FormHelperText
sx={{
fontSize: 'xs',
display: 'block',
}}
>
+2 -2
View File
@@ -8,7 +8,7 @@ import { FormRadioOption } from './FormRadioControl';
/**
* Warning: this must be a constant to avoid re-rendering the radio group
*/
export function useFormRadio<T extends string>(initialValue: T, options: FormRadioOption<T>[], label?: string, hidden?: boolean): [T | null, React.JSX.Element | null] {
export function useFormRadio<T extends string>(initialValue: T, options: FormRadioOption<T>[], label?: string, hidden?: boolean): [T | null, React.JSX.Element | null, React.Dispatch<React.SetStateAction<T | null>>] {
// state
const [value, setValue] = React.useState<T | null>(initialValue);
@@ -33,5 +33,5 @@ export function useFormRadio<T extends string>(initialValue: T, options: FormRad
[handleChange, hidden, label, options, value],
);
return [value, component];
return [value, component, setValue];
}
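A short usage sketch of the widened return tuple: callers can now drive the radio value programmatically, as the diagrams modal below does with `setDiagramLanguage`. The option labels, values, and the `FormRadioControl` import path are assumptions for illustration.

```tsx
// Sketch: option values mirror the diagrams code; other names and the import path are illustrative.
import * as React from 'react';
import { useFormRadio } from '~/common/components/forms/useFormRadio';
import type { FormRadioOption } from '~/common/components/forms/FormRadioControl';

const syntaxOptions: FormRadioOption<'mermaid' | 'plantuml'>[] = [
  { label: 'PlantUML', value: 'plantuml' },
  { label: 'Mermaid (mindmaps)', value: 'mermaid' },
];

function SyntaxPicker() {
  // third element is the new setter, so effects or parent logic can override the selection
  const [syntax, syntaxComponent, setSyntax] = useFormRadio<'mermaid' | 'plantuml'>('mermaid', syntaxOptions, 'Syntax');

  React.useEffect(() => {
    if (syntax === null) setSyntax('mermaid');
  }, [syntax, setSyntax]);

  return syntaxComponent;
}
```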
+1 -1
View File
@@ -39,7 +39,7 @@ const DesktopDrawerTranslatingSheet = styled(Sheet)(({ theme }) => ({
// borderBottomRightRadius: 'var(--AGI-Optima-Radius)',
// contain: 'strict',
// boxShadow: theme.shadow.md, // too thin and complex; also tried 40px blurs
boxShadow: `1px 2px 6px 0 rgba(${theme.palette.neutral.darkChannel} / 0.12)`,
boxShadow: `0px 0px 6px 0 rgba(${theme.palette.neutral.darkChannel} / 0.12)`,
// content layout
display: 'flex',
+2 -2
View File
@@ -4,7 +4,7 @@ import Router from 'next/router';
import type { SxProps } from '@mui/joy/styles/types';
import { Divider, Dropdown, ListItemDecorator, Menu, MenuButton, MenuItem, Tooltip } from '@mui/joy';
import MenuIcon from '@mui/icons-material/Menu';
import MoreHorizRoundedIcon from '@mui/icons-material/MoreHorizRounded';
import MoreHorizIcon from '@mui/icons-material/MoreHoriz';
import { useModelsStore } from '~/modules/llms/store-llms';
@@ -93,7 +93,7 @@ export function DesktopNav(props: { component: React.ElementType, currentApp?: N
<Dropdown key='n-app-overflow'>
<Tooltip disableInteractive enterDelay={600} title='More Apps'>
<MenuButton slots={{ root: DesktopNavIcon }} slotProps={{ root: { className: navItemClasses.typeApp } }}>
<MoreHorizRoundedIcon />
<MoreHorizIcon />
</MenuButton>
</Tooltip>
<Menu variant='solid' invertedColors placement='right-start'>
+11 -1
View File
@@ -108,6 +108,11 @@ export function PageBar(props: { component: React.ElementType, currentApp?: NavI
return <CommonPageMenuItems onClose={closePageMenu} />;
}, [closePageMenu]);
const handlePageContextMenu = React.useCallback((event: React.MouseEvent) => {
event.preventDefault(); // added for the Right mouse click (to prevent the menu)
openPageMenu();
}, [openPageMenu]);
// [Desktop] hide the app bar if the current app doesn't use it
const desktopHide = !!props.currentApp?.hideBar && !props.isMobile;
if (desktopHide)
@@ -165,7 +170,12 @@ export function PageBar(props: { component: React.ElementType, currentApp?: NavI
{/* Page Menu Anchor */}
<InvertedBarCornerItem>
<IconButton disabled={!pageMenuAnchor /*|| (!appMenuItems && !props.isMobile)*/} onClick={openPageMenu} ref={pageMenuAnchor}>
<IconButton
ref={pageMenuAnchor}
disabled={!pageMenuAnchor /*|| (!appMenuItems && !props.isMobile)*/}
onClick={openPageMenu}
onContextMenu={handlePageContextMenu}
>
<MoreVertIcon />
</IconButton>
</InvertedBarCornerItem>
+2 -2
View File
@@ -57,8 +57,8 @@ export function PageWrapper(props: { component: React.ElementType, currentApp?:
sx={{
boxShadow: {
xs: 'none',
md: amplitude === 'narrow' ? 'md' : 'none',
xl: amplitude !== 'full' ? 'lg' : 'none',
md: amplitude === 'narrow' ? '0px 0px 4px 0 rgba(50 56 62 / 0.12)' : 'none',
xl: amplitude !== 'full' ? '0px 0px 4px 0 rgba(50 56 62 / 0.12)' : 'none',
},
}}
>
@@ -1,7 +1,7 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { IconButton, Sheet, Typography } from '@mui/joy';
import { Box, IconButton, Typography } from '@mui/joy';
import CloseRoundedIcon from '@mui/icons-material/CloseRounded';
@@ -11,23 +11,24 @@ export const PageDrawerHeader = (props: {
sx?: SxProps,
children?: React.ReactNode,
}) =>
<Sheet
variant='outlined'
<Box
// variant='soft'
// invertedColors
sx={{
minHeight: 'var(--AGI-Nav-width)',
// content
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
px: 1,
// style
borderTop: 'none',
borderLeft: 'none',
borderRight: 'none',
backgroundColor: 'background.popup',
// borderLeft: 'none',
// borderRight: 'none',
// borderTop: 'none',
// borderTopRightRadius: 'var(--AGI-Optima-Radius)',
// layout
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
}}
>
@@ -41,4 +42,4 @@ export const PageDrawerHeader = (props: {
<CloseRoundedIcon />
</IconButton>
</Sheet>;
</Box>;
+4 -8
View File
@@ -2,9 +2,9 @@ import { createStore } from 'zustand/vanilla';
import { persist } from 'zustand/middleware';
import { DModelSource, useModelsStore } from '~/modules/llms/store-llms';
import { createModelSourceForVendor, findAccessForSourceOrThrow, findAllVendors } from '~/modules/llms/vendors/vendors.registry';
import { createModelSourceForVendor, findAllVendors } from '~/modules/llms/vendors/vendors.registry';
import { getBackendCapabilities } from '~/modules/backend/store-backend-capabilities';
import { updateModelsForSource } from '~/modules/llms/vendors/useLlmUpdateModels';
import { llmsUpdateModelsForSourceOrThrow } from '~/modules/llms/llm.client';
interface AutoConfStore {
@@ -65,12 +65,8 @@ const autoConfVanillaStore = createStore<AutoConfStore>()(persist((_set, _get) =
source = useModelsStore.getState().sources.find(_s => _s.id === source.id)!;
}
// get the access, assuming there's no client config and the server will do all
const transportAcess = findAccessForSourceOrThrow(source.id);
// fetch models
const data = await vendor.rpcUpdateModelsOrThrow(transportAcess);
return updateModelsForSource(data, source, true);
// auto-configure this source
await llmsUpdateModelsForSourceOrThrow(source.id, true);
})
.catch(error => {
// catches errors and logs them, but does not stop the chain
@@ -4,6 +4,8 @@ import type { SxProps } from '@mui/joy/styles/types';
import { IconButton } from '@mui/joy';
import KeyboardDoubleArrowDownIcon from '@mui/icons-material/KeyboardDoubleArrowDown';
import { themeZIndexBeamView } from '~/common/app.theme';
import { useScrollToBottom } from './useScrollToBottom';
@@ -11,6 +13,9 @@ const inlineButtonSx: SxProps = {
// style it
// NOTE: just an IconButton when inline
// for usage inside BeamGatherPane, to not enlarge the row
my: -0.25,
// fade it in when hovering
// transition: 'all 0.15s',
// '&:hover': {
@@ -27,7 +32,7 @@ const absoluteButtonSx: SxProps = {
borderColor: 'neutral.500',
borderRadius: '50%',
boxShadow: 'sm',
zIndex: 3, // stay on top of the Chat Message buttons (e.g. copy)
zIndex: themeZIndexBeamView + 1, // stay on top of the Chat Message buttons (e.g. copy)
// place this on the bottom-right corner (FAB-like)
position: 'absolute',
@@ -57,6 +62,7 @@ export function ScrollToBottomButton(props: { inline?: boolean }) {
aria-label='Scroll To Bottom'
variant='plain'
onClick={handleStickToBottom}
size={props.inline ? 'sm' : undefined}
sx={props.inline ? inlineButtonSx : absoluteButtonSx}
>
<KeyboardDoubleArrowDownIcon sx={{ fontSize: 'xl' }} />
+29 -2
View File
@@ -65,7 +65,8 @@ export interface DMessage {
purposeId?: SystemPurposeId; // only assistant/system
originLLM?: string; // only assistant - model that generated this message, goes beyond known models
userFlags?: DMessageUserFlag[]; // user-set per-message flags
metadata?: DMessageMetadata; // metadata, mainly at creation and for UI
userFlags?: DMessageUserFlag[]; // (UI) user-set per-message flags
tokenCount: number; // cache for token count, using the current Conversation model (0 = not yet calculated)
@@ -76,6 +77,10 @@ export interface DMessage {
export type DMessageUserFlag =
| 'starred'; // user starred this
export interface DMessageMetadata {
inReplyToText?: string; // text this was in reply to
}
export function createDMessage(role: DMessage['role'], text: string): DMessage {
return {
id: uuidv4(),
@@ -130,6 +135,7 @@ export interface ChatActions {
appendMessage: (conversationId: string, message: DMessage) => void;
deleteMessage: (conversationId: string, messageId: string) => void;
editMessage: (conversationId: string, messageId: string, update: Partial<DMessage> | ((message: DMessage) => Partial<DMessage>), touchUpdated: boolean) => void;
updateMetadata: (conversationId: string, messageId: string, metadataDelta: Partial<DMessageMetadata>, touchUpdated?: boolean) => void;
setSystemPurposeId: (conversationId: string, systemPurposeId: SystemPurposeId) => void;
setAutoTitle: (conversationId: string, autoTitle: string) => void;
setUserTitle: (conversationId: string, userTitle: string) => void;
@@ -345,10 +351,31 @@ export const useChatStore = create<ConversationsStore>()(devtools(
return {
messages,
tokenCount: messages.reduce((sum, message) => sum + 4 + message.tokenCount || 0, 3),
...(touchUpdated && { updated: Date.now() }),
updated: touchUpdated ? Date.now() : conversation.updated,
};
}),
updateMetadata: (conversationId: string, messageId: string, metadataDelta: Partial<DMessageMetadata>, touchUpdated: boolean = true) => {
_get()._editConversation(conversationId, conversation => {
const messages = conversation.messages.map(message =>
message.id !== messageId ? message
: {
...message,
metadata: {
...message.metadata,
...metadataDelta,
},
updated: touchUpdated ? Date.now() : message.updated,
},
);
return {
messages,
updated: touchUpdated ? Date.now() : conversation.updated,
};
});
},
setSystemPurposeId: (conversationId: string, systemPurposeId: SystemPurposeId) =>
_get()._editConversation(conversationId,
{
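A hedged sketch of how a caller could attach the new reply-to metadata through the `updateMetadata` action added above; the ids and the wrapper function are placeholders, only the action signature and `DMessageMetadata.inReplyToText` come from the diff.

```ts
// Sketch: ids are placeholders; only updateMetadata and the metadata shape are from the store above.
import { useChatStore } from '~/common/state/store-chats';

function markMessageAsReplyTo(conversationId: string, messageId: string, quotedText: string) {
  // merges { inReplyToText } into the message metadata and touches the 'updated' timestamps by default
  useChatStore.getState().updateMetadata(conversationId, messageId, { inReplyToText: quotedText });
}
```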
+9 -12
View File
@@ -4,20 +4,14 @@ import { persist } from 'zustand/middleware';
// UX Labs Experiments
/**
* Graduated:
* - see `UxLabsSettings.tsx`, and also:
 * - Text Tools: dynamically shown where applicable
* - Chat Mode: follow-ups; moved to Chat Advanced UI
*/
// UxLabsSettings.tsx contains the graduated settings, but the following are not stated:
// - Text Tools: dynamically shown where applicable
// - Chat Mode: Follow-Ups; moved to Chat Advanced UI
interface UXLabsStore {
labsAttachScreenCapture: boolean;
setLabsAttachScreenCapture: (labsAttachScreenCapture: boolean) => void;
labsBeam: boolean;
setLabsBeam: (labsBeam: boolean) => void;
labsCameraDesktop: boolean;
setLabsCameraDesktop: (labsCameraDesktop: boolean) => void;
@@ -27,6 +21,9 @@ interface UXLabsStore {
labsHighPerformance: boolean;
setLabsHighPerformance: (labsHighPerformance: boolean) => void;
labsShowCost: boolean;
setLabsShowCost: (labsShowCost: boolean) => void;
}
export const useUXLabsStore = create<UXLabsStore>()(
@@ -36,9 +33,6 @@ export const useUXLabsStore = create<UXLabsStore>()(
labsAttachScreenCapture: false,
setLabsAttachScreenCapture: (labsAttachScreenCapture: boolean) => set({ labsAttachScreenCapture }),
labsBeam: true,
setLabsBeam: (labsBeam: boolean) => set({ labsBeam }),
labsCameraDesktop: false,
setLabsCameraDesktop: (labsCameraDesktop: boolean) => set({ labsCameraDesktop }),
@@ -48,6 +42,9 @@ export const useUXLabsStore = create<UXLabsStore>()(
labsHighPerformance: false,
setLabsHighPerformance: (labsHighPerformance: boolean) => set({ labsHighPerformance }),
labsShowCost: true, // release 1.16.0 with this enabled by default
setLabsShowCost: (labsShowCost: boolean) => set({ labsShowCost }),
}),
{
name: 'app-ux-labs',
+6 -6
View File
@@ -250,20 +250,20 @@ export const animationShadowLimey = keyframes`
box-shadow: 2px 2px 12px -6px rgb(255, 153, 0);
}`;
/*export const animationTextShadowLimey = keyframes`
export const animationTextShadowLimey = keyframes`
100%, 0% {
text-shadow: 2px 2px 0 white, 4px 4px 0 rgb(183, 255, 0);
text-shadow: 2px 2px 0 rgba(183, 255, 0, 0.5);
}
25% {
text-shadow: 2px 2px 0 white, 4px 4px 0 rgb(255, 251, 0);
text-shadow: 2px 2px 0 rgba(255, 251, 0, 0.5);
}
50% {
text-shadow: 2px 2px 0 white, 4px 4px 0 rgba(0, 255, 81);
text-shadow: 2px 2px 0 rgba(0, 255, 81, 0.5);
}
75% {
text-shadow: 2px 2px 0 white, 4px 4px 0 rgb(255, 153, 0);
text-shadow: 2px 2px 0 rgba(255, 153, 0, 0.5);
}`;
*/
// export const animationShadowBlueDarker = keyframes`
// 0%, 100% {
// box-shadow: 3px 3px 0 rgb(135, 206, 235), /* Sky Blue */ 6px 6px 0 rgb(70, 130, 180), /* Steel Blue */ 9px 9px 0 rgb(0, 128, 128); /* Teal */
+15 -2
View File
@@ -1,6 +1,6 @@
import * as React from 'react';
export type SystemPurposeId = 'Catalyst' | 'Custom' | 'Designer' | 'Developer' | 'DeveloperPreview' | 'Executive' | 'Generic' | 'Scientist';
export type SystemPurposeId = 'Catalyst' | 'Custom' | 'Designer' | 'Developer' | 'DeveloperPreview' | 'Executive' | 'Generic' | 'Scientist' | 'YouTubeTranscriber';
export const defaultSystemPurposeId: SystemPurposeId = 'Generic';
@@ -96,7 +96,10 @@ Current date: {{LocaleNow}}
Designer: {
title: 'Designer',
description: 'Helps you design',
systemMessage: 'You are an AI visual design assistant. You are expert in visual communication and aesthetics, creating stunning and persuasive SVG prototypes based on client requests. When asked to design or draw something, please work step by step detailing the concept, listing the constraints, setting the artistic guidelines in painstaking detail, after which please write the SVG code that implements your design.',
systemMessage: `
You are an AI visual design assistant. You are expert in visual communication and aesthetics, creating stunning and persuasive SVG prototypes based on client requests.
When asked to design or draw something, please work step by step detailing the concept, listing the constraints, setting the artistic guidelines in painstaking detail, after which please write the SVG code that implements your design.
{{RenderSVG}}`.trim(),
symbol: '🖌️',
examples: ['minimalist logo for a tech startup', 'infographic on climate change', 'suggest color schemes for a website'],
call: { starters: ['Hey! What\'s the vision?', 'Designer on call. What\'s the project?', 'Ready for design talk.', 'Hey.'] },
@@ -110,4 +113,14 @@ Current date: {{LocaleNow}}
call: { starters: ['What\'s the task?', 'What can I do?', 'Ready for your task.', 'Yes?'] },
voices: { elevenLabs: { voiceId: 'flq6f7yk4E4fJM5XTYuZ' } },
},
YouTubeTranscriber: {
title: 'YouTube Transcriber',
description: 'Enter a YouTube URL to get the transcript and chat about the content.',
systemMessage: 'You are an expert in understanding video transcripts and answering questions about video content.',
symbol: '📺',
examples: ['Analyze the sentiment of this video', 'Summarize the key points of the lecture'],
call: { starters: ['Enter a YouTube URL to begin.', 'Ready to transcribe YouTube content.', 'Paste the YouTube link here.'] },
voices: { elevenLabs: { voiceId: 'z9fAnlkpzviPz146aGWa' } },
},
};
+136 -75
View File
@@ -2,6 +2,7 @@ import * as React from 'react';
import { Box, Button, ButtonGroup, CircularProgress, Divider, FormControl, FormLabel, Grid, IconButton, Input } from '@mui/joy';
import AccountTreeTwoToneIcon from '@mui/icons-material/AccountTreeTwoTone';
import AutoFixHighIcon from '@mui/icons-material/AutoFixHigh';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import ReplayIcon from '@mui/icons-material/Replay';
@@ -13,6 +14,7 @@ import { llmStreamingChatGenerate } from '~/modules/llms/llm.client';
import { GoodModal } from '~/common/components/GoodModal';
import { InlineError } from '~/common/components/InlineError';
import { adjustContentScaling } from '~/common/app.theme';
import { createDMessage, useChatStore } from '~/common/state/store-chats';
import { useFormRadio } from '~/common/components/forms/useFormRadio';
import { useFormRadioLlmType } from '~/common/components/forms/useFormRadioLlmType';
@@ -22,6 +24,10 @@ import { useUIPreferencesStore } from '~/common/state/store-ui';
import { bigDiagramPrompt, DiagramLanguage, diagramLanguages, DiagramType, diagramTypes } from './diagrams.data';
// configuration
const DIAGRAM_ACTOR_PREFIX = 'diagram';
// Used by the callers to set up the diagram session
export interface DiagramConfig {
conversationId: string;
@@ -37,7 +43,10 @@ function hotFixDiagramCode(llmCode: string): string {
llmCode = '```\n' + llmCode + '\n```';
// fix generation mistakes
return llmCode
.replaceAll('@endmindmap\n@enduml', '@endmindmap')
.replaceAll('@startumd', '@startuml') // haiku
.replaceAll('@endutml', '@enduml') // haiku
.replaceAll('@endmindmap\n@enduml', '@endmindmap') // gpt-3.5
.replaceAll('@endmindmap\n@end', '@endmindmap') // gpt-3.5
.replaceAll('```\n```', '```');
}
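As a rough illustration of the model mistakes these string fixes catch (the input string is invented; the exact fencing of the output depends on the guard elided above this hunk):

```ts
// Invented example: Haiku sometimes emits '@startumd' / '@endutml' instead of valid PlantUML markers.
const raw = '@startumd\nA --> B\n@endutml';
// hotFixDiagramCode(raw) normalizes the markers back to '@startuml' / '@enduml'
// and wraps bare code in a ``` fence before rendering, per the elided guard above.
const fixed = hotFixDiagramCode(raw);
```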
@@ -47,8 +56,8 @@ export function DiagramsModal(props: { config: DiagramConfig, onClose: () => voi
// state
const [showOptions, setShowOptions] = React.useState(true);
const [diagramCode, setDiagramCode] = React.useState<string | null>(null);
const [diagramType, diagramComponent] = useFormRadio<DiagramType>('auto', diagramTypes, 'Visualize');
const [diagramLanguage, languageComponent] = useFormRadio<DiagramLanguage>('plantuml', diagramLanguages, 'Style');
const [diagramType, diagramComponent] = useFormRadio<DiagramType>('mind', diagramTypes, 'Diagram');
const [diagramLanguage, languageComponent, setDiagramLanguage] = useFormRadio<DiagramLanguage>('mermaid', diagramLanguages, 'Syntax');
const [customInstruction, setCustomInstruction] = React.useState<string>('');
const [errorMessage, setErrorMessage] = React.useState<string | null>(null);
const [abortController, setAbortController] = React.useState<AbortController | null>(null);
@@ -56,10 +65,11 @@ export function DiagramsModal(props: { config: DiagramConfig, onClose: () => voi
// external state
const isMobile = useIsMobile();
const contentScaling = useUIPreferencesStore(state => state.contentScaling);
const [diagramLlm, llmComponent] = useFormRadioLlmType('Generator');
const [diagramLlm, llmComponent] = useFormRadioLlmType('Generator', 'chat');
// derived state
const { conversationId, text: subject } = props.config;
const diagramLlmId = diagramLlm?.id;
/**
@@ -113,95 +123,146 @@ export function DiagramsModal(props: { config: DiagramConfig, onClose: () => voi
}, [abortController]);
const handleInsertAndClose = () => {
// custom instruction
const handleCustomInstructionKeyDown = React.useCallback((event: React.KeyboardEvent<HTMLInputElement>) => {
if (event.key === 'Enter') {
event.preventDefault();
void handleGenerateNew();
}
}, [handleGenerateNew]);
const handleCustomInstructionChange = React.useCallback((event: React.ChangeEvent<HTMLInputElement>) => {
setCustomInstruction(event.target.value);
}, []);
// done
const handleAppendMessageAndClose = React.useCallback(() => {
if (!diagramCode)
return setErrorMessage('Nothing to add to the conversation.');
const diagramMessage = createDMessage('assistant', diagramCode);
// diagramMessage.purposeId = conversation.systemPurposeId;
diagramMessage.originLLM = 'diagram';
diagramMessage.originLLM = DIAGRAM_ACTOR_PREFIX + (diagramLlmId ? `-${diagramLlmId}` : '');
useChatStore.getState().appendMessage(conversationId, diagramMessage);
props.onClose();
};
}, [conversationId, diagramCode, diagramLlmId, props]);
return <GoodModal
title='Generate Diagram' noTitleBar
open onClose={props.onClose}
sx={{ maxWidth: { xs: '100vw', md: '95vw' } }}
startButton={
<Button variant='soft' color='success' disabled={!diagramCode || !!abortController} endDecorator={<TelegramIcon />} onClick={handleInsertAndClose}>
Add To Chat
</Button>
}
>
// [effect] Auto-switch language to match diagram type
React.useEffect(() => {
setDiagramLanguage(diagramType === 'mind' ? 'mermaid' : 'plantuml');
}, [diagramType, setDiagramLanguage]);
{showOptions && (
<Grid container spacing={2}>
<Grid xs={12} md={6}>
{diagramComponent}
</Grid>
{languageComponent && (
return (
<GoodModal
titleStartDecorator={<AutoFixHighIcon sx={{ fontSize: 'md', mr: 1 }} />}
title={<>
Auto-Diagram
<IconButton
aria-label={showOptions ? 'Hide Options' : 'Show Options'}
size='sm'
onClick={() => setShowOptions(options => !options)}
sx={{ ml: 1, my: -0.5 }}
>
{showOptions ? <ExpandMoreIcon /> : <ExpandLessIcon />}
</IconButton>
</>}
hideBottomClose
open onClose={props.onClose}
sx={{ maxWidth: { xs: '100vw', md: '95vw', lg: '88vw' } }}
>
{showOptions && (
<Grid container spacing={2}>
<Grid xs={12} md={6}>
{languageComponent}
{diagramComponent}
</Grid>
{languageComponent && (
<Grid xs={12} md={6}>
{languageComponent}
</Grid>
)}
<Grid xs={12} md={6}>
{llmComponent}
</Grid>
<Grid xs={12} md={6}>
<FormControl>
<FormLabel>Customize</FormLabel>
<Input
title='Custom Instruction'
placeholder='e.g. visualize as state'
value={customInstruction}
onKeyDown={handleCustomInstructionKeyDown}
onChange={handleCustomInstructionChange}
endDecorator={(abortController && customInstruction) ? <CircularProgress size='sm' /> : undefined}
/>
</FormControl>
</Grid>
)}
<Grid xs={12} xl={6}>
{llmComponent}
</Grid>
<Grid xs={12} md={6}>
<FormControl>
<FormLabel>Custom Instruction</FormLabel>
<Input title='Custom Instruction' placeholder='e.g. visualize as state' value={customInstruction} onChange={(e) => setCustomInstruction(e.target.value)} />
</FormControl>
</Grid>
</Grid>
)}
)}
<ButtonGroup color='primary' sx={{ flexGrow: 1 }}>
<Button
fullWidth
variant={abortController ? 'soft' : 'solid'} color='primary'
disabled={!diagramLlm}
onClick={abortController ? () => abortController.abort() : handleGenerateNew}
endDecorator={abortController ? <StopOutlinedIcon /> : diagramCode ? <ReplayIcon /> : <AccountTreeTwoToneIcon />}
sx={{ minWidth: 200 }}
>
{abortController ? 'Stop' : diagramCode ? 'Regenerate' : 'Generate'}
</Button>
<IconButton onClick={() => setShowOptions(options => !options)}>
{showOptions ? <ExpandLessIcon /> : <ExpandMoreIcon />}
</IconButton>
</ButtonGroup>
{errorMessage && <InlineError error={errorMessage} />}
{errorMessage && <InlineError error={errorMessage} />}
{!showOptions && !!abortController && <Box sx={{ display: 'flex', justifyContent: 'center' }}>
<CircularProgress size='lg' />
</Box>}
{!showOptions && !!abortController && <Box sx={{ display: 'flex', justifyContent: 'center' }}>
<CircularProgress size='lg' />
</Box>}
{!!diagramCode && (!abortController || showOptions) && (
<Box sx={{
backgroundColor: 'background.level2',
marginX: 'calc(-1 * var(--Card-padding))',
minHeight: 96,
p: { xs: 1, md: 2 },
overflow: 'hidden',
}}>
<BlocksRenderer
text={diagramCode}
fromRole='assistant'
fitScreen={isMobile}
contentScaling={adjustContentScaling(contentScaling, -1)}
renderTextAsMarkdown={false}
specialDiagramMode
// onMessageEdit={(text) => setMessage({ ...message, text })}
/>
</Box>
)}
{!diagramCode && <Divider />}
{/* End */}
<Box sx={{ mt: 'auto', display: 'flex', flexWrap: 'wrap', justifyContent: 'space-between' }}>
{/* Add Message to Chat (once complete) */}
<Button variant='soft' color='success' disabled={!diagramCode || !!abortController} endDecorator={<TelegramIcon />} onClick={handleAppendMessageAndClose}>
Add To Chat
</Button>
{/* Button Group to toggle controls visibility - NOT enabled at the moment */}
<ButtonGroup variant='solid' color='primary' sx={{ ml: 'auto' }}>
{/*<IconButton*/}
{/* aria-label={showOptions ? 'Hide Options' : 'Show Options'}*/}
{/* onClick={() => setShowOptions(options => !options)}*/}
{/*>*/}
{/* {showOptions ? <ExpandLessIcon /> : <ExpandMoreIcon />}*/}
{/*</IconButton>*/}
<Button
variant={abortController ? 'soft' : 'solid'} color='primary'
disabled={!diagramLlm}
onClick={abortController ? () => abortController.abort() : handleGenerateNew}
endDecorator={abortController ? <StopOutlinedIcon /> : diagramCode ? <ReplayIcon /> : <AccountTreeTwoToneIcon />}
sx={{ minWidth: isMobile ? 160 : 220 }}
>
{abortController ? 'Stop' : diagramCode ? 'Regenerate' : 'Generate'}
</Button>
</ButtonGroup>
{!!diagramCode && (!abortController || showOptions) && (
<Box sx={{
backgroundColor: 'background.level2',
marginX: 'calc(-1 * var(--Card-padding))',
minHeight: 96,
p: { xs: 1, md: 2 },
overflow: 'hidden',
}}>
<BlocksRenderer
text={diagramCode}
fromRole='assistant'
fitScreen={isMobile}
contentScaling={contentScaling}
renderTextAsMarkdown={false}
specialDiagramMode
// onMessageEdit={(text) => setMessage({ ...message, text })}
/>
</Box>
)}
{!diagramCode && <Divider />}
</GoodModal>;
</GoodModal>
);
}
+25 -23
View File
@@ -7,7 +7,7 @@ export type DiagramLanguage = 'mermaid' | 'plantuml';
// NOTE: keep these global, or it will trigger re-renders
export const diagramTypes: FormRadioOption<DiagramType>[] = [
{ label: 'Auto-diagram', value: 'auto' },
{ label: 'Automatic', value: 'auto' },
{ label: 'Mindmap', value: 'mind' },
];
@@ -16,7 +16,8 @@ export const diagramLanguages: FormRadioOption<DiagramLanguage>[] = [
{ label: 'Mermaid (mindmaps)', value: 'mermaid' },
];
const mermaidMindmapExample = `
const mermaidMindmapExample = `For example:
\`\`\`mermaid
mindmap
root((mindmap))
Origins
@@ -32,42 +33,43 @@ mindmap
Tools
Pen and paper
Mermaid
`.trim();
function mermaidDiagramPrompt(diagramType: DiagramType): { sys: string, usr: string } {
let promptDetails = diagramType === 'auto'
? 'You create a valid Mermaid diagram markdown (```mermaid\\n...), ready to be rendered into a diagram. Ensure the code contains no external references, and all names are properly enclosed in double quotes and escaped if necessary. Choose the most suitable diagram type from the following supported types: flowchart, sequence, class, state, erd, gantt, pie, git.'
: 'You create a valid Mermaid mindmap markdown (```mermaid\\n...), ready to be rendered into a mind map. Ensure the code contains no external references, and all names are properly enclosed in double quotes and escaped if necessary. For example:\n' + mermaidMindmapExample + '\n';
return {
sys: `You are an AI that generates correct Mermaid code based on provided text. ${promptDetails}`,
usr: `Generate the Mermaid code for a ${diagramType === 'auto' ? 'suitable diagram' : 'mind map'} that represents the preceding assistant message.`,
};
}
\`\`\`
`;
function plantumlDiagramPrompt(diagramType: DiagramType): { sys: string, usr: string } {
switch (diagramType) {
case 'auto':
return {
sys: 'You are an AI that writes PlantUML code based on provided text. You create a valid PlantUML string, enclosed by "```\n@startuml" and "@enduml\n```", ready to be rendered into a diagram or mindmap, ensuring the code contains no external references and all names are properly escaped without spaces. You choose the most suitable diagram type: sequence, class, use case, activity, component, state, object, deployment, wireframe, mindmap, gantt, or flowchart.',
usr: 'Generate the PlantUML code for the diagram type that best represents the preceding assistant message.',
sys: 'Generate a valid PlantUML diagram markdown (```plantuml\\n@startuml\\n...@enduml\\n```), ready for rendering. No external references allowed and all strings must be escaped correctly (each in a single line). Choose the most suitable PlantUML diagram type: sequence, class, use case, activity, component, state, object, deployment, wireframe, mindmap, gantt, or flowchart.',
usr: 'Generate the PlantUML code for a suitable diagram that best captures the essence of the preceding message.',
};
case 'mind':
return {
sys: 'You are an AI that writes PlantUML code based on provided text. You create a valid PlantUML string, enclosed by "```\n@startmindmap" and "@endmindmap\n```", ready to be rendered into a mind map, ensuring the code contains no external references and all names are properly escaped without spaces.',
usr: 'Generate the PlantUML code for a mind map based on the preceding assistant message.',
sys: 'Generate a valid PlantUML mindmap markdown (```plantuml\\n@startmindmap\\n...@endmindmap\\n\`\`\`), ready for rendering. No external references allowed. Use one or more asterisks to indent and separate with spaces.',
usr: 'Generate a PlantUML mindmap that effectively summarizes the key points from the preceding message.',
};
}
}
function mermaidDiagramPrompt(diagramType: DiagramType): { sys: string, usr: string } {
let promptDetails = diagramType === 'auto'
? 'Generate a valid Mermaid diagram markdown (```mermaid\\n...```), ready for rendering. The code should have no external references and all names must be in double quotes and properly escaped. Select the most appropriate Mermaid diagram type: flowchart, sequence, class, state, erd, gantt, pie, or git.'
: 'Generate a valid Mermaid mindmap markdown (```mermaid\\n...```), ready for rendering. The code should have no external references and all names must be in double quotes and properly escaped. ' + mermaidMindmapExample;
return {
sys: `Your task is to generate accurate and well-structured Mermaid code from the given text. ${promptDetails}`,
usr: `Generate the Mermaid code for a ${diagramType === 'auto' ? 'suitable diagram' : 'mind map'} that ${diagramType === 'auto' ? 'best captures the essence' : 'effectively summarizes the key points'} of the preceding message.`,
};
}
const sysSuffixPM = 'The next three messages will outline: 1. your personality, 2. the data you\'ll work with, and 3. a clear restatement of the instructions.';
const usrSuffixCoT = 'Please think step by step, then generate valid diagram code in a markdown block as instructed, and stop your response.';
export function bigDiagramPrompt(diagramType: DiagramType, diagramLanguage: DiagramLanguage, chatSystemPrompt: string, subject: string, customInstruction: string): VChatMessageIn[] {
const { sys, usr } = diagramLanguage === 'mermaid' ? mermaidDiagramPrompt(diagramType) : plantumlDiagramPrompt(diagramType);
if (customInstruction) {
customInstruction = 'Also consider the following instructions: ' + customInstruction;
}
return [
{ role: 'system', content: sys },
{ role: 'system', content: chatSystemPrompt },
{ role: 'system', content: sys + '\n' + sysSuffixPM },
{ role: 'user', content: chatSystemPrompt },
{ role: 'assistant', content: subject },
{ role: 'user', content: `${usr} ${customInstruction}` },
{ role: 'user', content: (!customInstruction?.trim() ? usr : `${usr} Also consider the following instructions: ${customInstruction.trim()}`) + '\n' + usrSuffixCoT },
];
}
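A hedged sketch of how a caller assembles the four-message sequence from `bigDiagramPrompt`; the subject, system prompt, and instruction strings are placeholders, only the function's argument order and the resulting roles come from the code above.

```ts
// Placeholders for illustration; only bigDiagramPrompt and its signature are from the code above.
const messages = bigDiagramPrompt(
  'mind',                                               // DiagramType
  'plantuml',                                            // DiagramLanguage
  'You are a helpful assistant.',                        // chat system prompt, forwarded as a user message
  'Photosynthesis converts light into chemical energy.', // the subject to diagram
  '',                                                    // no custom instruction
);
// -> [ system: sys + sysSuffixPM, user: chat system prompt, assistant: subject, user: usr + usrSuffixCoT ]
```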
+22
View File
@@ -0,0 +1,22 @@
import { createDMessage, DMessage } from '~/common/state/store-chats';
const replyToSystemPrompt = `The user is referring to this in particular:
{{ReplyToText}}`;
/**
* Adds a system message to the history, explaining the context of the reply
*
* FIXME: HACK - this is a temporary solution to pass the metadata to the execution
*
* Only works with OpenAI and a couple more right now. Fix it by making it vendor-agnostic
*/
export function updateHistoryForReplyTo(history: DMessage[]) {
if (history?.length < 1)
return;
const lastMessage = history[history.length - 1];
if (lastMessage.role === 'user' && lastMessage.metadata?.inReplyToText)
history.push(createDMessage('system', replyToSystemPrompt.replace('{{ReplyToText}}', lastMessage.metadata.inReplyToText)));
}
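A minimal sketch of where this hook into the history would sit before generation; the surrounding function and its arguments are assumed, only `updateHistoryForReplyTo` and `DMessage` come from the code above.

```ts
// Sketch: 'runChatGenerate' is illustrative; only updateHistoryForReplyTo is from the file above.
import type { DMessage } from '~/common/state/store-chats';

async function runChatGenerate(history: DMessage[]) {
  // if the last user message carries inReplyToText metadata, append the explanatory system message
  updateHistoryForReplyTo(history);
  // ...then hand the augmented history to the streaming/generation call as usual
}
```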
+33 -40
View File
@@ -1,7 +1,7 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Typography } from '@mui/joy';
import { Box } from '@mui/joy';
import { ExplainerCarousel, ExplainerPage } from '~/common/components/ExplainerCarousel';
import { animationEnterScaleUp } from '~/common/util/animUtils';
@@ -11,59 +11,52 @@ const beamSteps: ExplainerPage[] = [
{
stepDigits: '',
stepName: 'Welcome',
titlePrefix: 'Welcome to',
// titleSquircle: true,
titleSpark: 'Beam',
// titlePrefix: 'Welcome to Beam.', // Better answers, faster.
titlePrefix: 'Welcome to ', titleSpark: 'Beam',
// titleSpark: 'B E A M',
// titleSuffix: ' azing',
// titleSquircle: true,
mdContent: `
**Hello, we just launched Beam for you.**
Beam is a new Big-AGI chat modality that allows you to engage multiple AI models in parallel.
**Beam** is a chat modality in Big-AGI to engage multiple AI models, [together](https://big-agi.com/blog/beam-multi-model-ai-reasoning).
It's like having a brainstorm session with several smart people,
only they are AI models. And as with people,
each AI model has its own unique perspective.
And Beam lets you make the best of them.
each adding their own unique perspective.
Beam lets you make the best of them all.
![big-AGI BEAM Rays](https://raw.githubusercontent.com/enricoros/big-AGI/main/public/images/explainers/explainer-beam-scatter-1200px-alpha.png)
Let&apos;s get you to **better chat answers, faster**.
`,
`, // Let&apos;s get you to better chat answers, faster.
},
{
stepDigits: '01',
stepName: 'Beam',
titleSpark: 'Beaming',
titleSuffix: ': Exploration',
titlePrefix: 'Explore with ', titleSpark: 'Beam', titleSuffix: '.',
// titleSpark: 'Beaming', titleSuffix: ': Exploration',
mdContent: `
**Beaming is the exploration phase, it's where you get the AI models to generate ideas.**
**Beaming is the exploration phase**, where AI models generate ideas.
To Beam, pick the AI models you want to use (you can also load/save combos), and start them all at once or one by one.
Keep the responses you like and delete the ones that aren't helpful.
Simply pick the AI models you want to use (you can load/save combos) and start them.
You can then select a single response to continue the chat,
or keep the responses you like and do a Merge.
**Important**: 💰 Beware of the token usage of Beaming and Merging.
Being multiple and high-intensity operations,
they can consume more tokens than regular chats.
It is better to _use them in early/shorter chats_.
**Important:** _Best used in earlier / shorter chats_. 💰 Beware of the token usage of Beaming and Merging;
being parallel and lengthy operations, they will use more tokens than regular chats.
Use a mix of different AI models to get a diverse set of ideas and perspectives.
**Once you see a response you love, send it back to the chat**, otherwise move to the Merge step.
`,
Use a mix of different AI models to get a diverse set of ideas and perspectives.
`, // and delete the ones that aren't helpful
},
{
stepDigits: '02',
stepName: 'Merge',
titleSpark: 'Merging',
titleSuffix: ': Convergence', // Synthesis, Convergence
titlePrefix: 'Combine with ', titleSpark: 'Merge', titleSuffix: '.',
// titleSpark: 'Merging', titleSuffix: ': Synthesis', // Synthesis, Convergence
mdContent: `
**Merging is the consolidation phase**, where AI combines the best parts of the responses into a great, coherent answer.
Merging is **combining the best parts of each response** into a great, coherent answer.
You can choose from various merge options, including **Fusion**, **Checklist**, **Compare**, and **Custom**.
Experiment with different options to find the one that works best for your chat.
![big-AGI BEAM Rays](https://raw.githubusercontent.com/enricoros/big-AGI/main/public/images/explainers/explainer-beam-gather-1600px-alpha.png)
You can choose from various merge options, including Fusion, Checklist, Compare, and Custom.
Feel free to experiment with different options to find the one that works best for you.
`, // > Merge until you have a single, high-quality response. Or choose the final response manually, skipping merge.
},
// {
@@ -96,7 +89,7 @@ const beamExplainerSx: SxProps = {
height: '100%',
// style
padding: { xs: '1rem', md: '1.5rem' },
padding: 3, // { xs: 3, md: 3 },
animation: `${animationEnterScaleUp} 0.2s cubic-bezier(.17,.84,.44,1)`,
// layout
@@ -118,14 +111,14 @@ export function BeamExplainer(props: {
<ExplainerCarousel
explainerId='beam-onboard'
steps={beamSteps}
footer={
<Typography level='body-xs' sx={{ textAlign: 'center', maxWidth: '400px', mx: 'auto' }}>
{/*Unlock beaming, combine AI wisdom, achieve clarity.*/}
{/*Discover, Design and Dream.*/}
{/*The journey from exploration to refinement is iterative.*/}
{/*Each cycle sharpens your ideas, bringing you closer to innovation.*/}
</Typography>
}
// footer={
// <Typography level='body-xs' sx={{ textAlign: 'center', maxWidth: '400px', mx: 'auto' }}>
// {/*Unlock beaming, combine AI wisdom, achieve clarity.*/}
// {/*Discover, Design and Dream.*/}
// {/*The journey from exploration to refinement is iterative.*/}
// {/*Each cycle sharpens your ideas, bringing you closer to innovation.*/}
// </Typography>
// }
onFinished={props.onWizardComplete}
/>
+85 -89
View File
@@ -4,7 +4,6 @@ import { useShallow } from 'zustand/react/shallow';
import { Alert, Box, CircularProgress } from '@mui/joy';
import { ConfirmationModal } from '~/common/components/ConfirmationModal';
import { ScrollToBottom } from '~/common/scroll-to-bottom/ScrollToBottom';
import { animationEnterScaleUp } from '~/common/util/animUtils';
import { useUICounter } from '~/common/state/store-ui';
@@ -15,13 +14,13 @@ import { BeamRayGrid } from './scatter/BeamRayGrid';
import { BeamScatterInput } from './scatter/BeamScatterInput';
import { BeamScatterPane } from './scatter/BeamScatterPane';
import { BeamStoreApi, useBeamStore } from './store-beam.hooks';
import { SCATTER_RAY_DEF } from './beam.config';
export function BeamView(props: {
beamStore: BeamStoreApi,
isMobile: boolean,
showExplainer?: boolean,
// sx?: SxProps,
}) {
// state
@@ -90,118 +89,115 @@ export function BeamView(props: {
// running
// [effect] pre-populate a default number of rays
const bootup = raysCount < SCATTER_RAY_DEF;
React.useEffect(() => {
bootup && handleRaySetCount(SCATTER_RAY_DEF);
}, [bootup, handleRaySetCount]);
// const bootup = raysCount < SCATTER_RAY_DEF;
// React.useEffect(() => {
// bootup && handleRaySetCount(SCATTER_RAY_DEF);
// }, [bootup, handleRaySetCount]);
// Explainer, if unseen
if (props.showExplainer && explainerUnseen)
return <BeamExplainer onWizardComplete={explainerCompleted} />;
return (
<ScrollToBottom disableAutoStick>
return <>
{/* Main V-Layout */}
<Box sx={{
// scroller fill
minHeight: '100%',
<Box sx={{
// scroller fill
minHeight: '100%',
// ...props.sx,
// enter animation
animation: `${animationEnterScaleUp} 0.2s cubic-bezier(.17,.84,.44,1)`,
// enter animation
animation: `${animationEnterScaleUp} 0.2s cubic-bezier(.17,.84,.44,1)`,
// config
'--Pad': { xs: '1rem', md: '1.5rem' },
'--Pad_2': 'calc(var(--Pad) / 2)',
// config
'--Pad': { xs: '1rem', md: '1.5rem' },
'--Pad_2': 'calc(var(--Pad) / 2)',
// layout
display: 'flex',
flexDirection: 'column',
gap: 'var(--Pad)',
}}>
// layout
display: 'flex',
flexDirection: 'column',
gap: 'var(--Pad)',
}}>
{/* Config Issues */}
{!!inputIssues && <Alert>{inputIssues}</Alert>}
{/* Config Issues */}
{!!inputIssues && <Alert>{inputIssues}</Alert>}
{/* User Message */}
<BeamScatterInput
isMobile={props.isMobile}
history={inputHistory}
editHistory={editInputHistoryMessage}
/>
{/* User Message */}
<BeamScatterInput
isMobile={props.isMobile}
history={inputHistory}
editHistory={editInputHistoryMessage}
/>
{/* Scatter Controls */}
<BeamScatterPane
beamStore={props.beamStore}
isMobile={props.isMobile}
rayCount={raysCount}
setRayCount={handleRaySetCount}
startEnabled={inputReady}
startBusy={isScattering}
onStart={startScatteringAll}
onStop={stopScatteringAll}
onExplainerShow={explainerShow}
/>
{/* Scatter Controls */}
<BeamScatterPane
beamStore={props.beamStore}
isMobile={props.isMobile}
rayCount={raysCount}
setRayCount={handleRaySetCount}
startEnabled={inputReady}
startBusy={isScattering}
onStart={startScatteringAll}
onStop={stopScatteringAll}
onExplainerShow={explainerShow}
/>
{/* Rays Grid */}
<BeamRayGrid
beamStore={props.beamStore}
isMobile={props.isMobile}
rayIds={rayIds}
onIncreaseRayCount={handleRayIncreaseCount}
// linkedLlmId={currentGatherLlmId}
/>
{/* Rays Grid */}
<BeamRayGrid
beamStore={props.beamStore}
isMobile={props.isMobile}
rayIds={rayIds}
onIncreaseRayCount={handleRayIncreaseCount}
// linkedLlmId={currentGatherLlmId}
/>
{/* Gapper between Rays and Merge, without compromising the auto margin of the Ray Grid */}
<Box />
{/* Gapper between Rays and Merge, without compromising the auto margin of the Ray Grid */}
<Box />
{/* Gather Controls */}
<BeamGatherPane
beamStore={props.beamStore}
canGather={canGather}
isMobile={props.isMobile}
onAddFusion={handleCreateFusion}
raysReady={raysReady}
/>
{/* Gather Controls */}
<BeamGatherPane
beamStore={props.beamStore}
canGather={canGather}
isMobile={props.isMobile}
onAddFusion={handleCreateFusion}
raysReady={raysReady}
/>
{/* Fusion Grid */}
<BeamFusionGrid
beamStore={props.beamStore}
canGather={canGather}
fusionIds={fusionIds}
isMobile={props.isMobile}
onAddFusion={handleCreateFusion}
raysCount={raysCount}
/>
{/* Fusion Grid */}
<BeamFusionGrid
beamStore={props.beamStore}
canGather={canGather}
fusionIds={fusionIds}
isMobile={props.isMobile}
onAddFusion={handleCreateFusion}
raysCount={raysCount}
/>
</Box>
</Box>
{/* Confirm Stop Scattering */}
{warnIsScattering && (
<ConfirmationModal
open
onClose={handleStopScatterDenial}
onPositive={handleStopScatterConfirmation}
// lowStakes
noTitleBar
confirmationText='Some responses are still being generated. Do you want to stop and proceed with merging the available responses now?'
positiveActionText='Proceed with Merge'
negativeActionText='Wait for All Responses'
negativeActionStartDecorator={
<CircularProgress color='neutral' sx={{ '--CircularProgress-size': '24px', '--CircularProgress-trackThickness': '1px' }} />
}
/>
)}
{/* Confirm Stop Scattering */}
{warnIsScattering && (
<ConfirmationModal
open
onClose={handleStopScatterDenial}
onPositive={handleStopScatterConfirmation}
// lowStakes
noTitleBar
confirmationText='Some responses are still being generated. Do you want to stop and proceed with merging the available responses now?'
positiveActionText='Proceed with Merge'
negativeActionText='Wait for All Responses'
negativeActionStartDecorator={
<CircularProgress color='neutral' sx={{ '--CircularProgress-size': '24px', '--CircularProgress-trackThickness': '1px' }} />
}
/>
)}
</ScrollToBottom>
);
</>;
}
+3 -2
View File
@@ -145,8 +145,9 @@ export function BeamFusionGrid(props: {
</Typography>
</Box> : (
<Typography level='body-sm'>
You need two or more replies for a {currentFactory?.shortLabel?.toLocaleLowerCase() ?? ''} merge.
<Typography level='body-sm' sx={{ opacity: 0.8 }}>
{/*You need two or more replies for a {currentFactory?.shortLabel?.toLocaleLowerCase() ?? ''} merge.*/}
Waiting for multiple Beams.
</Typography>
)}
</BeamCard>
+20 -21
View File
@@ -6,11 +6,9 @@ import { Box, Button, ButtonGroup, FormControl, Typography } from '@mui/joy';
import AutoAwesomeIcon from '@mui/icons-material/AutoAwesome';
import AutoAwesomeOutlinedIcon from '@mui/icons-material/AutoAwesomeOutlined';
import { ScrollToBottomButton } from '~/common/scroll-to-bottom/ScrollToBottomButton';
import { animationColorBeamGather } from '~/common/util/animUtils';
import { useLLMSelect } from '~/common/components/forms/useLLMSelect';
import { BeamGatherDropdown } from './BeamGatherPaneDropdown';
import { BeamStoreApi, useBeamStore } from '../store-beam.hooks';
import { FFactoryId, FUSION_FACTORIES } from './instructions/beam.gather.factories';
import { GATHER_COLOR } from '../beam.config';
@@ -67,7 +65,7 @@ export function BeamGatherPane(props: {
// external state
// const { setStickToBottom } = useScrollToBottom();
const {
currentFactoryId, currentGatherLlmId, isGatheringAny,
currentFactoryId, currentGatherLlmId, isGatheringAny, hasFusions,
setCurrentFactoryId, setCurrentGatherLlmId,
} = useBeamStore(props.beamStore, useShallow(state => ({
// state
@@ -75,13 +73,14 @@ export function BeamGatherPane(props: {
currentFactoryId: state.currentFactoryId,
currentGatherLlmId: state.currentGatherLlmId,
isGatheringAny: state.isGatheringAny,
hasFusions: state.fusions.length > 0,
// actions
setCurrentFactoryId: state.setCurrentFactoryId,
setCurrentGatherLlmId: state.setCurrentGatherLlmId,
})));
const [_, gatherLlmComponent, gatherLlmIcon] = useLLMSelect(
currentGatherLlmId, setCurrentGatherLlmId, props.isMobile ? '' : 'Merge Model', true,
const [_, gatherLlmComponent/*, gatherLlmIcon*/] = useLLMSelect(
currentGatherLlmId, setCurrentGatherLlmId, props.isMobile ? '' : 'Merge Model', true, !props.canGather,
);
// derived state
@@ -96,7 +95,7 @@ export function BeamGatherPane(props: {
}, [currentFactoryId, setCurrentFactoryId]);
const MainLlmIcon = gatherLlmIcon || (isGatheringAny ? AutoAwesomeIcon : AutoAwesomeOutlinedIcon);
const MainLlmIcon = /*gatherLlmIcon ||*/ (isGatheringAny ? AutoAwesomeIcon : AutoAwesomeOutlinedIcon);
return (
<Box
@@ -105,21 +104,20 @@ export function BeamGatherPane(props: {
>
{/* Title */}
<Box sx={{ display: 'flex', alignItems: 'center', gap: 0.25, minWidth: 184 }}>
<div>
<Typography
level='h4' component='h2'
endDecorator={<BeamGatherDropdown />}
// sx={{ my: 0.25 }}
>
<MainLlmIcon sx={{ fontSize: '1rem', animation: isGatheringAny ? `${animationColorBeamGather} 2s linear infinite` : undefined }} />&nbsp;Merge
</Typography>
<Typography level='body-sm' sx={{ whiteSpace: 'nowrap' }}>
{/* may merge or not (hasInputs) N replies.. put this in pretty messages */}
{props.canGather ? `Combine the ${props.raysReady} replies` : 'Two replies or more'}
</Typography>
</div>
<ScrollToBottomButton inline />
<Box>
<Typography
level='h4' component='h3'
// endDecorator={<ScrollToBottomButton inline />}
// sx={{ my: 0.25 }}
sx={(props.canGather || hasFusions || isGatheringAny) ? undefined : { color: 'primary.solidDisabledColor', ['& > svg']: { color: 'primary.solidDisabledColor' } }}
>
<MainLlmIcon sx={{ fontSize: '1rem', mr: 0.625, animation: isGatheringAny ? `${animationColorBeamGather} 2s linear infinite` : undefined }} />
Merge
</Typography>
<Typography level='body-sm' sx={{ whiteSpace: 'nowrap' }}>
{/* may merge or not (hasInputs) N replies.. put this in pretty messages */}
{props.canGather ? `Combine the ${props.raysReady} replies` : /*'Fuse all replies'*/ ''}
</Typography>
</Box>
{/* Method */}
@@ -128,6 +126,7 @@ export function BeamGatherPane(props: {
<ButtonGroup
variant='outlined'
size='md'
disabled={!props.canGather}
// sx={{ boxShadow: isNoFactorySelected ? 'xs' : undefined }}
>
{FUSION_FACTORIES.map(factory => {
@@ -1,42 +0,0 @@
import * as React from 'react';
import { useShallow } from 'zustand/react/shallow';
import { Dropdown, IconButton, ListItem, ListItemDecorator, Menu, MenuButton, MenuItem, Typography } from '@mui/joy';
import CheckRoundedIcon from '@mui/icons-material/CheckRounded';
import MoreHorizRoundedIcon from '@mui/icons-material/MoreHorizRounded';
import { useModuleBeamStore } from '../store-module-beam';
export function BeamGatherDropdown() {
// external (persisted) state
const {
gatherShowPrompts,
toggleGatherShowPrompts,
} = useModuleBeamStore(useShallow(state => ({
gatherShowPrompts: state.gatherShowPrompts,
toggleGatherShowPrompts: state.toggleGatherShowPrompts,
})));
return (
<Dropdown>
<MenuButton
aria-label='Merge Options'
slots={{ root: IconButton }}
slotProps={{ root: { size: 'sm', sx: { my: -0.5 /* to not disrupt the layouting */ } } }}
>
<MoreHorizRoundedIcon />
</MenuButton>
<Menu placement='right-end' sx={{ minWidth: 250, zIndex: 'var(--joy-zIndex-modal)' /* on top of its own modal in FS */ }}>
<ListItem>
<Typography level='body-sm'>Advanced</Typography>
</ListItem>
<MenuItem onClick={toggleGatherShowPrompts}>
<ListItemDecorator>{gatherShowPrompts && <CheckRoundedIcon />}</ListItemDecorator>
Show All Prompts
</MenuItem>
</Menu>
</Dropdown>
);
}
+1 -1
View File
@@ -30,7 +30,7 @@ function FusionControls(props: {
{/* LLM Icon */}
{!!props.llmVendorIcon && (
<GoodTooltip title={props.llmLabel}>
<GoodTooltip placement='top' arrow title={props.llmLabel}>
<Box sx={{ display: 'flex' }}>
<props.llmVendorIcon sx={{ fontSize: 'lg', my: 'auto' }} />
</Box>
@@ -88,7 +88,7 @@ function EditableInstruction(props: {
}) {
// external state
const gatherShowPrompts = useModuleBeamStore(state => state.gatherShowPrompts);
const gatherShowAllPrompts = useModuleBeamStore(state => state.gatherShowAllPrompts);
// derived state
const { instruction, instructionIndex, onInstructionEdit } = props;
@@ -101,7 +101,7 @@ function EditableInstruction(props: {
return (instruction.type === 'chat-generate') ? (
<>
{gatherShowPrompts && (
{gatherShowAllPrompts && (
<EditableChatInstructionPrompt
isEditable={props.isEditable}
itemKey='systemPrompt'
+9 -4
View File
@@ -11,6 +11,7 @@ import { GATHER_PLACEHOLDER } from '../beam.config';
import { RootStoreSlice } from '../store-beam-vanilla';
import { ScatterStoreSlice } from '../scatter/beam.scatter';
import { gatherStartFusion, gatherStopFusion, Instruction } from './instructions/beam.gather.execution';
import { updateBeamLastConfig } from '../store-module-beam';
/// Gather Store > BFusion ///
@@ -144,15 +145,19 @@ export const createGatherSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
...reInitGatherStateSlice([], null),
setCurrentFactoryId: (factoryId: FFactoryId | null) =>
setCurrentFactoryId: (factoryId: FFactoryId | null) => {
_set({
currentFactoryId: factoryId,
}),
});
updateBeamLastConfig({ gatherFactoryId: factoryId });
},
setCurrentGatherLlmId: (llmId: DLLMId | null) =>
setCurrentGatherLlmId: (llmId: DLLMId | null) => {
_set({
currentGatherLlmId: llmId,
}),
});
updateBeamLastConfig({ gatherLlmId: llmId });
},
_fusionUpdate: (fusionId: BFusionId, update: FusionUpdateOrFn) => {
@@ -27,7 +27,7 @@ export async function executeUserInputChecklist(
let options = parseTextToChecklist(previousResult, false);
const relaxMatch = options.length < 2;
if (relaxMatch)
options = parseTextToChecklist(_i.outputPrompt, relaxMatch);
options = parseTextToChecklist(previousResult, true);
// if no options, there's an error
if (options.length < 2) {
@@ -8,6 +8,7 @@ import { ChatMessageMemo } from '../../../apps/chat/components/message/ChatMessa
import type { DMessage } from '~/common/state/store-chats';
import { BEAM_INVERT_BACKGROUND } from '../beam.config';
import { useModuleBeamStore } from '../store-module-beam';
const userMessageWrapperSx: SxProps = {
@@ -51,8 +52,10 @@ export function BeamScatterInput(props: {
}) {
// state
const [showHistoryMessage, setShowHistoryMessage] = React.useState(true);
// const [showHistoryMessage, setShowHistoryMessage] = React.useState(true);
// external state
const scatterShowPrevMessages = useModuleBeamStore(state => state.scatterShowPrevMessages);
// derived state
@@ -66,14 +69,14 @@ export function BeamScatterInput(props: {
// user message decorator
const userMessageDecorator = React.useMemo(() => {
return (showHistoryMessage && otherHistoryCount >= 1) ? (
return (/*showHistoryMessage &&*/ otherHistoryCount >= 1 && scatterShowPrevMessages) ? (
// <Chip color='primary' variant='outlined' endDecorator={<ChipDelete />} sx={{ my: 1 }}>
<Typography level='body-xs' sx={{ my: 1.5, opacity: 0.9 }} onClick={() => setShowHistoryMessage(on => !on)}>
... {otherHistoryCount === 1 ? (isFirstMessageSystem ? '1 system message' : '1 message') : `${otherHistoryCount} messages`} before this input ...
<Typography level='body-xs' sx={{ my: 1, textAlign: 'center', color: 'neutral.softColor' }} onClick={undefined /*() => setShowHistoryMessage(on => !on)*/}>
... {otherHistoryCount === 1 ? (isFirstMessageSystem ? '1 system message' : '1 message') : `${otherHistoryCount} messages`} before this one ...
</Typography>
// </Chip>
) : null;
}, [isFirstMessageSystem, otherHistoryCount, showHistoryMessage]);
}, [scatterShowPrevMessages, isFirstMessageSystem, otherHistoryCount/*, showHistoryMessage*/]);
// skip rendering if no message
+3 -3
View File
@@ -63,13 +63,13 @@ export function BeamScatterPane(props: {
{/* Title */}
<Box>
<Typography
level='h4' component='h2'
level='h4' component='h3'
endDecorator={dropdownMemo}
// sx={{ my: 0.25 }}
>
{props.startBusy
? <AutoAwesomeIcon sx={{ fontSize: '1rem', animation: `${animationColorBeamScatter} 2s linear infinite` }} />
: <AutoAwesomeOutlinedIcon sx={{ fontSize: '1rem' }} />}&nbsp;Beam
? <AutoAwesomeIcon sx={{ fontSize: '1rem', mr: 0.625, animation: `${animationColorBeamScatter} 2s linear infinite` }} />
: <AutoAwesomeOutlinedIcon sx={{ fontSize: '1rem', mr: 0.625 }} />}Beam
</Typography>
<Typography level='body-sm' sx={{ whiteSpace: 'nowrap' }}>
Explore different replies
@@ -1,10 +1,10 @@
import * as React from 'react';
import { Box, Button, DialogContent, DialogTitle, Dropdown, FormControl, FormLabel, IconButton, Input, ListItem, ListItemDecorator, Menu, MenuButton, MenuItem, Modal, ModalClose, ModalDialog, Typography } from '@mui/joy';
import { Box, Button, DialogContent, DialogTitle, Dropdown, FormControl, FormLabel, IconButton, Input, ListDivider, ListItem, ListItemDecorator, Menu, MenuButton, MenuItem, Modal, ModalClose, ModalDialog, Typography } from '@mui/joy';
import CheckRoundedIcon from '@mui/icons-material/CheckRounded';
import DeleteOutlineRoundedIcon from '@mui/icons-material/DeleteOutlineRounded';
import DriveFileRenameOutlineRoundedIcon from '@mui/icons-material/DriveFileRenameOutlineRounded';
import MoreHorizRoundedIcon from '@mui/icons-material/MoreHorizRounded';
import MoreVertIcon from '@mui/icons-material/MoreVert';
import SchoolRoundedIcon from '@mui/icons-material/SchoolRounded';
import type { DLLMId } from '~/modules/llms/store-llms';
@@ -65,9 +65,11 @@ export function BeamScatterDropdown(props: {
// external state
const {
scatterPresets, addScatterPreset, deleteScatterPreset,
presets, addPreset, deletePreset,
cardScrolling, toggleCardScrolling,
scatterShowPrevMessages, toggleScatterShowPrevMessages,
scatterShowLettering, toggleScatterShowLettering,
gatherShowAllPrompts, toggleGatherShowAllPrompts,
} = useModuleBeamStore();
@@ -76,18 +78,23 @@ export function BeamScatterDropdown(props: {
const handleClosePresetNaming = React.useCallback(() => setNamingOpened(false), []);
const handlePresetSave = React.useCallback((presetName: string) => {
const { rays } = props.beamStore.getState();
addScatterPreset(presetName, rays.map(ray => ray.rayLlmId).filter(Boolean) as DLLMId[]);
const { rays, currentGatherLlmId, currentFactoryId } = props.beamStore.getState();
const rayLlmIds = rays.map(ray => ray.rayLlmId).filter(Boolean) as DLLMId[];
addPreset(presetName, rayLlmIds, currentGatherLlmId, currentFactoryId);
handleClosePresetNaming();
}, [addScatterPreset, handleClosePresetNaming, props.beamStore]);
}, [addPreset, handleClosePresetNaming, props.beamStore]);
const handlePresetLoad = React.useCallback((presetId: string) => {
const { scatterPresets } = useModuleBeamStore.getState();
const preset = scatterPresets.find(preset => preset.id === presetId);
if (preset && preset.rayLlmIds?.length)
props.beamStore.getState().setRayLlmIds(preset.rayLlmIds);
const preset = useModuleBeamStore.getState().presets.find(preset => preset.id === presetId);
preset && props.beamStore.getState().loadBeamConfig(preset);
}, [props.beamStore]);
// NOTE: DEVS only - DEBUG only
const handleClearLastConfig = React.useCallback(() => {
// this is used to debug the heuristics for model selection
useModuleBeamStore.getState().deleteLastConfig();
}, []);
return <>
@@ -96,9 +103,9 @@ export function BeamScatterDropdown(props: {
<MenuButton
aria-label='Beam Options'
slots={{ root: IconButton }}
slotProps={{ root: { size: 'sm', sx: { my: -0.5 } } }}
slotProps={{ root: { size: 'sm', sx: { my: -0.25 } } }}
>
<MoreHorizRoundedIcon />
<MoreVertIcon />
</MenuButton>
<Menu placement='right-end' sx={{ minWidth: 200, zIndex: 'var(--joy-zIndex-modal)' /* on top of its own modal in FS */ }}>
@@ -115,10 +122,10 @@ export function BeamScatterDropdown(props: {
</MenuItem>
{/* Load any preset */}
{scatterPresets.map(preset =>
<MenuItem key={preset.id}>
{presets.map(preset =>
<MenuItem key={preset.id} onClick={() => handlePresetLoad(preset.id)}>
<ListItemDecorator />
<Typography onClick={() => handlePresetLoad(preset.id)}>
<Typography>
Load &quot;{preset.name}&quot; &nbsp;<span style={{ opacity: 0.5, marginRight: '2rem' }}>x{preset.rayLlmIds?.length}</span>
</Typography>
<IconButton
@@ -126,7 +133,7 @@ export function BeamScatterDropdown(props: {
variant='outlined'
onClick={(event) => {
event.stopPropagation();
deleteScatterPreset(preset.id);
deletePreset(preset.id);
}}
sx={{ ml: 'auto' }}
>
@@ -141,9 +148,14 @@ export function BeamScatterDropdown(props: {
<Typography level='body-sm'>View</Typography>
</ListItem>
<MenuItem onClick={toggleScatterShowPrevMessages}>
<ListItemDecorator>{scatterShowPrevMessages && <CheckRoundedIcon />}</ListItemDecorator>
History
</MenuItem>
<MenuItem onClick={toggleCardScrolling}>
<ListItemDecorator>{cardScrolling && <CheckRoundedIcon />}</ListItemDecorator>
Fit Messages
Resize Beams
</MenuItem>
<MenuItem onClick={toggleScatterShowLettering}>
@@ -151,6 +163,17 @@ export function BeamScatterDropdown(props: {
Response Numbers
</MenuItem>
<ListItem onClick={() => handleClearLastConfig()}>
<Typography level='body-sm'>Advanced</Typography>
</ListItem>
<MenuItem onClick={toggleGatherShowAllPrompts}>
<ListItemDecorator>{gatherShowAllPrompts && <CheckRoundedIcon />}</ListItemDecorator>
Detailed Custom Merge
</MenuItem>
<ListDivider inset='gutter' />
<MenuItem onClick={props.onExplainerShow}>
<ListItemDecorator>
<SchoolRoundedIcon />
+40 -19
View File
@@ -11,6 +11,7 @@ import { getUXLabsHighPerformance } from '~/common/state/store-ux-labs';
import type { RootStoreSlice } from '../store-beam-vanilla';
import { SCATTER_DEBUG_STATE, SCATTER_PLACEHOLDER } from '../beam.config';
import { updateBeamLastConfig } from '../store-module-beam';
export type BRayId = string;
@@ -158,7 +159,7 @@ export interface ScatterStoreSlice extends ScatterStateSlice {
// ray actions
setRayCount: (count: number) => void;
removeRay: (rayId: BRayId) => void;
importRays: (messages: DMessage[]) => void;
importRays: (messages: DMessage[], raysLlmId: DLLMId | null) => void;
setRayLlmIds: (rayLlmIds: DLLMId[]) => void;
startScatteringAll: () => void;
stopScatteringAll: () => void;
@@ -166,6 +167,7 @@ export interface ScatterStoreSlice extends ScatterStateSlice {
raySetLlmId: (rayId: BRayId, llmId: DLLMId | null) => void;
_rayUpdate: (rayId: BRayId, update: Partial<BRay> | ((ray: BRay) => Partial<BRay>)) => void;
_storeLastScatterConfig: () => void;
_syncRaysStateToScatter: () => void;
}
@@ -178,7 +180,7 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
setRayCount: (count: number) => {
const { rays, inputChatLlmId, _syncRaysStateToScatter } = _get();
const { rays, _storeLastScatterConfig, _syncRaysStateToScatter } = _get();
if (count < rays.length) {
// Terminate exceeding rays
rays.slice(count).forEach(rayScatterStop);
@@ -188,15 +190,17 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
} else if (count > rays.length) {
_set({
rays: [...rays, ...Array(count - rays.length).fill(null)
// Create missing rays, using the last ray llmId as a fallback, or the inputChatLlmId
.map(() => createBRay(rays[rays.length - 1]?.rayLlmId || inputChatLlmId)),
// Create missing rays, copying the llmId of the former Ray, or using the fallback
.map(() => createBRay(rays[rays.length - 1]?.rayLlmId || null)),
],
});
}
_storeLastScatterConfig();
_syncRaysStateToScatter();
},
removeRay: (rayId: BRayId) => {
const { _storeLastScatterConfig, _syncRaysStateToScatter } = _get();
_set(state => ({
rays: state.rays.filter((ray) => {
const shallStay = ray.rayId !== rayId;
@@ -205,16 +209,22 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
return shallStay;
}),
}));
_get()._syncRaysStateToScatter();
_storeLastScatterConfig();
_syncRaysStateToScatter();
},
importRays: (messages: DMessage[]) => {
_set(state => ({
importRays: (messages: DMessage[], raysLlmId: DLLMId | null) => {
const { rays, _storeLastScatterConfig, _syncRaysStateToScatter } = _get();
// remove the empty rays that will be replaced by the imported messages
const raysToRemove = rays.filter((ray) => ray.status === 'empty' && ray.rayLlmId === raysLlmId).slice(0, messages.length);
_set({
rays: [
// prepend the imported rays
...messages.map((message) => {
// Note: message.originLLM misses the prefix (e.g. gpt-4-0125 without 'openai-..') so it won't match here
const ray = createBRay(/*null ||*/ state.inputChatLlmId);
const ray = createBRay(raysLlmId);
// pre-fill the ray status with the message and to a successful state
if (message.text.trim()) {
ray.status = 'success';
@@ -225,23 +235,26 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
return ray;
},
),
// trim the back if too many empties
...state.rays.filter((ray) => ray.status !== 'empty'),
// append the other rays (excluding the ones to remove)
...rays.filter((ray) => !raysToRemove.includes(ray)),
],
}));
_get()._syncRaysStateToScatter();
});
_storeLastScatterConfig();
_syncRaysStateToScatter();
},
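// Editor's illustration (not part of the diff): importing 2 messages with raysLlmId 'X' into
// rays [empty(X), empty(X), success(Y)] removes the two matching empty rays and prepends the
// imported ones, yielding [imported(X), imported(X), success(Y)]; 'X' and 'Y' are placeholder llm ids.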
setRayLlmIds: (rayLlmIds: DLLMId[]) => {
const { rays, setRayCount, _syncRaysStateToScatter } = _get();
if (rayLlmIds.length > rays.length)
setRayCount(rayLlmIds.length);
const { setRayCount, _storeLastScatterConfig, _syncRaysStateToScatter } = _get();
// NOTE: the behavior used to only enlarge the set, but it turned out the UX was less intuitive
// if (rayLlmIds.length > rays.length)
setRayCount(rayLlmIds.length);
_set(state => ({
rays: state.rays.map((ray, index): BRay => index >= rayLlmIds.length ? ray : {
...ray,
rayLlmId: rayLlmIds[index] || null,
}),
}));
_storeLastScatterConfig();
_syncRaysStateToScatter();
},
@@ -272,10 +285,13 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
_syncRaysStateToScatter();
},
raySetLlmId: (rayId: BRayId, llmId: DLLMId | null) =>
_get()._rayUpdate(rayId, {
raySetLlmId: (rayId: BRayId, llmId: DLLMId | null) => {
const { _rayUpdate, _storeLastScatterConfig } = _get();
_rayUpdate(rayId, {
rayLlmId: llmId,
}),
});
_storeLastScatterConfig();
},
_rayUpdate: (rayId: BRayId, update: Partial<BRay> | ((ray: BRay) => Partial<BRay>)) =>
_set(state => ({
@@ -285,6 +301,11 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
),
})),
_storeLastScatterConfig: () => {
updateBeamLastConfig({
rayLlmIds: _get().rays.map(ray => ray.rayLlmId).filter(Boolean) as DLLMId[],
});
},
_syncRaysStateToScatter: () => {
const { rays } = _get();
@@ -296,7 +317,7 @@ export const createScatterSlice: StateCreator<RootStoreSlice & ScatterStoreSlice
// [debug]
if (SCATTER_DEBUG_STATE)
console.log('_syncRaysStateToBeam', { rays: rays.length, allDone, raysReady, isScattering: hasRays && !allDone });
console.log('_syncRaysStateToScatter', { rays: rays.length, allDone, raysReady, isScattering: hasRays && !allDone });
_set({
isScattering: hasRays && !allDone,
+42 -14
View File
@@ -1,11 +1,13 @@
import { createStore, StateCreator } from 'zustand/vanilla';
import type { DLLMId } from '~/modules/llms/store-llms';
import { DLLMId, getDiverseTopLlmIds } from '~/modules/llms/store-llms';
import type { DMessage } from '~/common/state/store-chats';
import { createScatterSlice, reInitScatterStateSlice, ScatterStoreSlice } from './scatter/beam.scatter';
import { BeamConfigSnapshot, useModuleBeamStore } from './store-module-beam';
import { SCATTER_RAY_DEF } from './beam.config';
import { createGatherSlice, GatherStoreSlice, reInitGatherStateSlice } from './gather/beam.gather';
import { createScatterSlice, reInitScatterStateSlice, ScatterStoreSlice } from './scatter/beam.scatter';
/// Beam Store (vanilla, creator function) ///
@@ -30,7 +32,6 @@ interface RootStateSlice {
isOpen: boolean;
isMaximized: boolean;
inputChatLlmId: DLLMId | null;
inputHistory: DMessage[] | null;
inputIssues: string | null;
inputReady: boolean;
@@ -42,7 +43,6 @@ const initRootStateSlice = (): RootStateSlice => ({
isOpen: false,
isMaximized: false,
inputChatLlmId: null,
inputHistory: null,
inputIssues: null,
inputReady: false,
@@ -54,7 +54,8 @@ export interface RootStoreSlice extends RootStateSlice {
// lifecycle
open: (chatHistory: Readonly<DMessage[]>, initialChatLlmId: DLLMId | null, callback: BeamSuccessCallback) => void;
terminate: () => void;
terminateKeepingSettings: () => void;
loadBeamConfig: (preset: BeamConfigSnapshot | null) => void;
setIsMaximized: (maximized: boolean) => void;
editInputHistoryMessage: (messageId: string, newText: string) => void;
@@ -68,41 +69,68 @@ const createRootSlice: StateCreator<BeamStore, [], [], RootStoreSlice> = (_set,
...initRootStateSlice(),
open: (chatHistory: Readonly<DMessage[]>, initialChatLLMId: DLLMId | null, callback: BeamSuccessCallback) => {
const { isOpen: wasOpen, terminate } = _get();
open: (chatHistory: Readonly<DMessage[]>, initialChatLlmId: DLLMId | null, callback: BeamSuccessCallback) => {
const { isOpen: wasAlreadyOpen, terminateKeepingSettings, loadBeamConfig, setRayLlmIds, setCurrentGatherLlmId } = _get();
// reset pending operations
terminate();
terminateKeepingSettings();
// validate history
const history = [...chatHistory];
const isValidHistory = history.length >= 1 && history[history.length - 1].role === 'user';
// show and set input
_set({
// input
isOpen: true,
inputChatLlmId: initialChatLLMId,
inputHistory: isValidHistory ? history : null,
inputIssues: isValidHistory ? null : 'Invalid history',
inputIssues: isValidHistory ? null : 'Invalid conversation history: missing user message',
inputReady: isValidHistory,
onSuccessCallback: callback,
// rays already reset
// update the model only if the dialog was not already open
...((!wasOpen && initialChatLLMId) && {
currentGatherLlmId: initialChatLLMId,
...(!wasAlreadyOpen && initialChatLlmId && {
currentGatherLlmId: initialChatLlmId,
} satisfies Partial<GatherStoreSlice>),
});
// if not empty (recycle an existing open beam for this chat), we're done
if (_get().rays.length)
return;
// if empty, initialize from the persisted config, if any
loadBeamConfig(useModuleBeamStore.getState().lastConfig);
if (_get().rays.length)
return;
// if no config (first-time): heuristic: auto-pick the best models for the user, based on their Elo and variety
const autoLlmIds = getDiverseTopLlmIds(SCATTER_RAY_DEF, true, initialChatLlmId);
if (autoLlmIds.length > 0) {
setRayLlmIds(autoLlmIds);
setCurrentGatherLlmId(autoLlmIds[0]);
}
},
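// Editor's note (not part of the diff): on a first-time open with no persisted lastConfig, the
// heuristic above might seed the rays with e.g. ['anthropic-claude-3-opus', 'openai-gpt-4-turbo']
// (hypothetical ids) and reuse autoLlmIds[0] as the gather model; when a lastConfig exists,
// loadBeamConfig restores the saved ray, gather and factory choices instead.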
terminate: () =>
terminateKeepingSettings: () =>
_set(state => ({
...initRootStateSlice(),
...reInitGatherStateSlice(state.fusions, state.currentGatherLlmId), // remember after termination
...reInitScatterStateSlice(state.rays),
...reInitGatherStateSlice(state.fusions, state.currentGatherLlmId), // remember after termination
})),
loadBeamConfig: (preset: BeamConfigSnapshot | null) => {
if (preset) {
const { setRayLlmIds, setCurrentGatherLlmId, setCurrentFactoryId } = _get();
preset.rayLlmIds?.length && setRayLlmIds(preset.rayLlmIds);
preset.gatherLlmId && setCurrentGatherLlmId(preset.gatherLlmId);
preset.gatherFactoryId && setCurrentFactoryId(preset.gatherFactoryId);
}
},
setIsMaximized: (maximized: boolean) =>
_set({
isMaximized: maximized,
+59 -19
View File
@@ -3,65 +3,101 @@ import { persist } from 'zustand/middleware';
import { v4 as uuidv4 } from 'uuid';
import type { DLLMId } from '~/modules/llms/store-llms';
import type { FFactoryId } from './gather/instructions/beam.gather.factories';
/// Presets (persisted as a zustand store) ///
interface BeamScatterPreset {
export interface BeamConfigSnapshot {
id: string;
name: string;
rayLlmIds: DLLMId[];
gatherFactoryId?: FFactoryId | null; // added post launch
gatherLlmId?: DLLMId | null; // added post launch
}
interface ModuleBeamStore {
// state
scatterPresets: BeamScatterPreset[];
interface ModuleBeamState {
presets: BeamConfigSnapshot[];
lastConfig: BeamConfigSnapshot | null;
cardScrolling: boolean;
scatterShowLettering: boolean;
gatherShowPrompts: boolean;
scatterShowPrevMessages: boolean;
gatherShowAllPrompts: boolean;
}
// actions
addScatterPreset: (name: string, rayLlmIds: DLLMId[]) => void;
deleteScatterPreset: (id: string) => void;
renameScatterPreset: (id: string, name: string) => void;
interface ModuleBeamStore extends ModuleBeamState {
addPreset: (name: string, rayLlmIds: DLLMId[], gatherLlmId: DLLMId | null, gatherFactoryId: FFactoryId | null) => void;
deletePreset: (id: string) => void;
renamePreset: (id: string, name: string) => void;
updateLastConfig: (update: Partial<BeamConfigSnapshot>) => void;
deleteLastConfig: () => void;
toggleCardScrolling: () => void;
toggleScatterShowLettering: () => void;
toggleGatherShowPrompts: () => void;
toggleScatterShowPrevMessages: () => void;
toggleGatherShowAllPrompts: () => void;
}
export const useModuleBeamStore = create<ModuleBeamStore>()(persist(
(_set, _get) => ({
scatterPresets: [],
presets: [],
lastConfig: null,
cardScrolling: false,
scatterShowLettering: false,
gatherShowPrompts: false,
scatterShowPrevMessages: false,
gatherShowAllPrompts: false,
addScatterPreset: (name, rayLlmIds) => _set(state => ({
scatterPresets: [...state.scatterPresets, { id: uuidv4(), name, rayLlmIds }],
addPreset: (name, rayLlmIds, gatherLlmId, gatherFactoryId) => _set(state => ({
presets: [...state.presets, {
id: uuidv4(),
name,
rayLlmIds,
gatherLlmId: gatherLlmId ?? undefined,
gatherFactoryId: gatherFactoryId ?? undefined,
}],
})),
deleteScatterPreset: (id) => _set(state => ({
scatterPresets: state.scatterPresets.filter(preset => preset.id !== id),
deletePreset: (id) => _set(state => ({
presets: state.presets.filter(preset => preset.id !== id),
})),
renameScatterPreset: (id, name) => _set(state => ({
scatterPresets: state.scatterPresets.map(preset => preset.id === id ? { ...preset, name } : preset),
renamePreset: (id, name) => _set(state => ({
presets: state.presets.map(preset => preset.id === id ? { ...preset, name } : preset),
})),
updateLastConfig: (update) => _set(({ lastConfig }) => ({
lastConfig: !lastConfig
? { id: 'current', name: '', rayLlmIds: [], ...update }
: { ...lastConfig, ...update },
})),
deleteLastConfig: () => _set({ lastConfig: null }),
toggleCardScrolling: () => _set(state => ({ cardScrolling: !state.cardScrolling })),
toggleScatterShowLettering: () => _set(state => ({ scatterShowLettering: !state.scatterShowLettering })),
toggleGatherShowPrompts: () => _set(state => ({ gatherShowPrompts: !state.gatherShowPrompts })),
toggleScatterShowPrevMessages: () => _set(state => ({ scatterShowPrevMessages: !state.scatterShowPrevMessages })),
toggleGatherShowAllPrompts: () => _set(state => ({ gatherShowAllPrompts: !state.gatherShowAllPrompts })),
}), {
name: 'app-module-beam',
version: 1,
migrate: (state: any, fromVersion: number): ModuleBeamState => {
// 0 -> 1: rename 'scatterPresets' to 'presets'
if (state && fromVersion === 0 && !state.presets)
return { ...state, presets: state.scatterPresets || [] };
return state;
},
},
));
@@ -77,3 +113,7 @@ export function useBeamCardScrolling() {
export function useBeamScatterShowLettering() {
return useModuleBeamStore((state) => state.scatterShowLettering);
}
export function updateBeamLastConfig(update: Partial<BeamConfigSnapshot>) {
useModuleBeamStore.getState().updateLastConfig(update);
}
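For clarity, an editor's sketch (not part of the diff) of what the version 0 → 1 migration above does to the persisted payload; the key names come from the store definition, while the preset values are hypothetical:
// before (version 0):      { scatterPresets: [{ id: 'p1', name: 'Trio', rayLlmIds: ['openai-gpt-4'] }], cardScrolling: false, ... }
// after migrate(state, 0): { ...state, presets: [{ id: 'p1', name: 'Trio', rayLlmIds: ['openai-gpt-4'] }] }
// the old 'scatterPresets' key is spread through unchanged; only the new 'presets' key is added on top.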
+25 -18
View File
@@ -4,8 +4,8 @@ import type { Diff as TextDiff } from '@sanity/diff-match-patch';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Button, Tooltip, Typography } from '@mui/joy';
import UnfoldLessRoundedIcon from '@mui/icons-material/UnfoldLessRounded';
import UnfoldMoreRoundedIcon from '@mui/icons-material/UnfoldMoreRounded';
import ExpandLessIcon from '@mui/icons-material/ExpandLess';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import type { DMessage } from '~/common/state/store-chats';
import { ContentScaling, lineHeightChatTextMd, themeScalingMap } from '~/common/app.theme';
@@ -14,7 +14,6 @@ import { InlineError } from '~/common/components/InlineError';
import { RenderCode, RenderCodeMemo } from './code/RenderCode';
import { RenderHtml } from './RenderHtml';
import { RenderImage } from './RenderImage';
import { RenderLatex } from './RenderLatex';
import { RenderMarkdown, RenderMarkdownMemo } from './markdown/RenderMarkdown';
import { RenderChatText } from './RenderChatText';
import { RenderTextDiff } from './RenderTextDiff';
@@ -40,11 +39,15 @@ const renderBlocksSx: SxProps = {
...blocksSx,
flexGrow: 0,
overflowX: 'auto',
'& *::selection': {
// backgroundColor: '#fc70c3',
backgroundColor: 'primary.solidBg',
color: 'primary.solidColor',
},
} as const;
export function BlocksRenderer(props: {
type BlocksRendererProps = {
// required
text: string;
fromRole: DMessage['role'];
@@ -67,8 +70,10 @@ export function BlocksRenderer(props: {
// optimization: allow memo
optiAllowMemo?: boolean;
};
}) {
export const BlocksRenderer = React.forwardRef<HTMLDivElement, BlocksRendererProps>((props, ref) => {
// state
const [forceUserExpanded, setForceUserExpanded] = React.useState(false);
@@ -105,14 +110,15 @@ export function BlocksRenderer(props: {
const scaledCodeSx: SxProps = React.useMemo(() => (
{
my: props.specialDiagramMode ? 0 : themeScalingMap[props.contentScaling]?.blockCodeMarginY ?? 0,
backgroundColor: props.specialDiagramMode ? 'background.surface' : fromAssistant ? 'neutral.plainHoverBg' : 'primary.plainActiveBg',
boxShadow: props.specialDiagramMode ? 'md' : 'xs',
boxShadow: props.specialDiagramMode ? undefined : 'inset 2px 0px 5px -4px var(--joy-palette-background-backdrop)', // was 'xs'
borderRadius: 'sm',
fontFamily: 'code',
fontSize: themeScalingMap[props.contentScaling]?.blockCodeFontSize ?? '0.875rem',
fontWeight: 'md', // JetBrains Mono has a lighter weight, so we need that extra bump
fontVariantLigatures: 'none',
lineHeight: themeScalingMap[props.contentScaling]?.blockLineHeight ?? 1.75,
borderRadius: 'var(--joy-radius-sm)',
}
), [fromAssistant, props.contentScaling, props.specialDiagramMode]);
@@ -166,6 +172,7 @@ export function BlocksRenderer(props: {
return (
<Box
ref={ref}
onContextMenu={props.onContextMenu}
onDoubleClick={props.onDoubleClick}
sx={renderBlocksSx}
@@ -203,21 +210,19 @@ export function BlocksRenderer(props: {
? <RenderCodeMemoOrNot key={'code-' + index} codeBlock={block} fitScreen={props.fitScreen} initialShowHTML={props.showUnsafeHtml} noCopyButton={props.specialDiagramMode} optimizeLightweight={!optimizeWithMemo} sx={scaledCodeSx} />
: block.type === 'image'
? <RenderImage key={'image-' + index} imageBlock={block} onRunAgain={props.isBottom ? props.onImageRegenerate : undefined} sx={scaledImageSx} />
: block.type === 'latex'
? <RenderLatex key={'latex-' + index} latexBlock={block} sx={scaledTypographySx} />
: block.type === 'diff'
? <RenderTextDiff key={'latex-' + index} diffBlock={block} sx={scaledTypographySx} />
: (props.renderTextAsMarkdown && !fromSystem && !(fromUser && block.content.startsWith('/')))
? <RenderMarkdownMemoOrNot key={'text-md-' + index} textBlock={block} sx={scaledTypographySx} />
: <RenderChatText key={'text-' + index} textBlock={block} sx={scaledTypographySx} />;
: block.type === 'diff'
? <RenderTextDiff key={'text-diff-' + index} diffBlock={block} sx={scaledTypographySx} />
: (props.renderTextAsMarkdown && !fromSystem && !(fromUser && block.content.startsWith('/')))
? <RenderMarkdownMemoOrNot key={'text-md-' + index} textBlock={block} sx={scaledTypographySx} />
: <RenderChatText key={'text-' + index} textBlock={block} sx={scaledTypographySx} />;
})
)}
{isTextCollapsed ? (
<Box sx={{ textAlign: 'right' }}><Button variant='soft' size='sm' onClick={handleTextUncollapse} startDecorator={<UnfoldMoreRoundedIcon />} sx={{ minWidth: 100, mt: 0.5 }}>Expand</Button></Box>
<Box sx={{ textAlign: 'right' }}><Button variant='soft' size='sm' onClick={handleTextUncollapse} startDecorator={<ExpandMoreIcon />} sx={{ minWidth: 120 }}>Expand</Button></Box>
) : forceUserExpanded && (
<Box sx={{ textAlign: 'right' }}><Button variant='soft' size='sm' onClick={handleTextCollapse} startDecorator={<UnfoldLessRoundedIcon />} sx={{ minWidth: 100, mt: 0.5 }}>Collapse</Button></Box>
<Box sx={{ textAlign: 'right' }}><Button variant='soft' size='sm' onClick={handleTextCollapse} startDecorator={<ExpandLessIcon />} sx={{ minWidth: 120 }}>Collapse</Button></Box>
)}
{/* import VisibilityIcon from '@mui/icons-material/Visibility'; */}
@@ -228,4 +233,6 @@ export function BlocksRenderer(props: {
</Box>
);
}
});
BlocksRenderer.displayName = 'BlocksRenderer';
+6 -6
View File
@@ -1,14 +1,14 @@
import * as React from 'react';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, Button, IconButton, Tooltip, Typography } from '@mui/joy';
import { Box, Button, Tooltip, Typography } from '@mui/joy';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
import WebIcon from '@mui/icons-material/Web';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import type { HtmlBlock } from './blocks';
import { overlayButtonsSx } from './code/RenderCode';
import { OverlayButton, overlayButtonsSx } from './code/RenderCode';
// this is used by the blocks parser (for full text detection) and by the Code component (for inline rendering)
@@ -99,14 +99,14 @@ export function RenderHtml(props: { htmlBlock: HtmlBlock, sx?: SxProps }) {
{/* External HTML Buttons */}
<Box className='overlay-buttons' sx={{ ...overlayButtonsSx, p: 1.5 }}>
<Tooltip title={showHTML ? 'Hide' : 'Show Web Page'} variant='solid'>
<IconButton variant={showHTML ? 'solid' : 'outlined'} color='danger' onClick={() => setShowHTML(!showHTML)}>
<OverlayButton variant={showHTML ? 'solid' : 'outlined'} color='danger' onClick={() => setShowHTML(!showHTML)}>
<WebIcon />
</IconButton>
</OverlayButton>
</Tooltip>
<Tooltip title='Copy Code' variant='solid'>
<IconButton variant='outlined' onClick={handleCopyToClipboard}>
<OverlayButton variant='outlined' onClick={handleCopyToClipboard}>
<ContentCopyIcon />
</IconButton>
</OverlayButton>
</Tooltip>
</Box>
+7 -7
View File
@@ -12,7 +12,7 @@ import { GoodTooltip } from '~/common/components/GoodTooltip';
import { Link } from '~/common/components/Link';
import type { ImageBlock } from './blocks';
import { overlayButtonsSx } from './code/RenderCode';
import { OverlayButton, overlayButtonsSx } from './code/RenderCode';
const mdImageReferenceRegex = /^!\[([^\]]*)]\(([^)]+)\)$/;
@@ -128,24 +128,24 @@ export const RenderImage = (props: {
<Box className='overlay-buttons' sx={{ ...overlayButtonsSx, pt: 0.5, px: 0.5, gap: 0.5 }}>
{!!props.onRunAgain && (
<GoodTooltip title='Draw again'>
<IconButton variant='outlined' onClick={props.onRunAgain}>
<OverlayButton variant='outlined' onClick={props.onRunAgain}>
<ReplayIcon />
</IconButton>
</OverlayButton>
</GoodTooltip>
)}
{!!alt && (
<GoodTooltip title={infoOpen ? 'Hide Prompt' : 'Show Prompt'}>
<IconButton variant={infoOpen ? 'solid' : 'soft'} onClick={() => setInfoOpen(open => !open)}>
<OverlayButton variant={infoOpen ? 'solid' : 'outlined'} onClick={() => setInfoOpen(open => !open)}>
<InfoOutlinedIcon />
</IconButton>
</OverlayButton>
</GoodTooltip>
)}
<GoodTooltip title='Open in new tab'>
<IconButton variant='soft' component={Link} href={url} download={alt || 'image'} target='_blank'>
<OverlayButton variant='outlined' component={Link} href={url} download={alt || 'image'} target='_blank'>
<OpenInNewIcon />
</IconButton>
</OverlayButton>
</GoodTooltip>
</Box>
</Sheet>
-28
View File
@@ -1,28 +0,0 @@
import * as React from 'react';
import { Box } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
import type { LatexBlock } from './blocks';
// Dynamically import the Katex functions
const RenderLatexDynamic = React.lazy(async () => {
const { InlineMath } = await import('react-katex');
return {
default: (props: { latex: string }) => <InlineMath math={props.latex} />,
};
});
export const RenderLatex = (props: { latexBlock: LatexBlock; sx?: SxProps; }) =>
<Box
sx={{
mx: 1.5,
my: '0.5em',
textAlign: 'center',
...props.sx,
}}>
<React.Suspense fallback={<div />}>
<RenderLatexDynamic latex={props.latexBlock.latex} />
</React.Suspense>
</Box>;
+4 -15
View File
@@ -4,12 +4,11 @@ import { heuristicIsHtml } from './RenderHtml';
import { heuristicLegacyImageBlocks, heuristicMarkdownImageReferenceBlocks } from './RenderImage';
// Block types
export type Block = CodeBlock | DiffBlock | HtmlBlock | ImageBlock | LatexBlock | TextBlock;
export type Block = CodeBlock | DiffBlock | HtmlBlock | ImageBlock | TextBlock;
export type CodeBlock = { type: 'code'; blockTitle: string; blockCode: string; complete: boolean; };
export type DiffBlock = { type: 'diff'; textDiffs: TextDiff[] };
export type HtmlBlock = { type: 'html'; html: string; };
export type ImageBlock = { type: 'image'; url: string; alt?: string }; // Added optional alt property
export type LatexBlock = { type: 'latex'; latex: string; };
export type TextBlock = { type: 'text'; content: string; }; // for Text or Markdown
@@ -26,8 +25,6 @@ export function areBlocksEqual(a: Block, b: Block): boolean {
return a.html === (b as HtmlBlock).html;
case 'image':
return a.url === (b as ImageBlock).url && a.alt === (b as ImageBlock).alt;
case 'latex':
return a.latex === (b as LatexBlock).latex;
case 'text':
return a.content === (b as TextBlock).content;
}
@@ -56,12 +53,11 @@ export function parseMessageBlocks(text: string, disableParsing: boolean, forceT
return legacyImageBlocks;
const regexPatterns = {
codeBlock: /`{3,}([\w\x20\\.+-_]+)?\n([\s\S]*?)(`{3,}\n?|$)/g,
// was: \w\x20\\.+-_ for the filename, but it was missing too much
// REVERTED THIS: was: (`{3,}\n?|$), but it was matching backticks within blocks, so now it must end with a newline or stop
codeBlock: /`{3,}([\S\x20]+)?\n([\s\S]*?)(`{3,}\n?|$)/g,
htmlCodeBlock: /<!DOCTYPE html>([\s\S]*?)<\/html>/g,
svgBlock: /<svg (xmlns|width|viewBox)=([\s\S]*?)<\/svg>/g,
latexBlock: /\$\$([\s\S]*?)\$\$\n?/g,
latexBlock2: /\\\[\n([\s\S]*?)\n\s*\\]\n/g,
// latexBlockOrInline: /\$\$([\s\S]*?)\$\$|\$([^$]*?)\$/g,
};
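// Editor's illustration (not part of the diff): with the widened title group above, a fence
// opener such as ```notes (draft).md now captures blockTitle "notes (draft).md"; the parentheses
// fall outside the old [\w\x20\\.+-_] character class, so a fence titled that way was not matched before.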
const blocks: Block[] = [];
@@ -97,18 +93,11 @@ export function parseMessageBlocks(text: string, disableParsing: boolean, forceT
blocks.push({ type: 'code', blockTitle, blockCode, complete: blockEnd.startsWith('```') });
break;
case 'htmlCodeBlock':
const html: string = `<!DOCTYPE html>${match[1]}</html>`;
blocks.push({ type: 'code', blockTitle: 'html', blockCode: html, complete: true });
break;
case 'latexBlock':
case 'latexBlock2':
const latex: string = match[1];
blocks.push({ type: 'latex', latex });
break;
case 'svgBlock':
blocks.push({ type: 'code', blockTitle: 'svg', blockCode: match[0], complete: true });
break;
+5 -3
View File
@@ -1,11 +1,13 @@
import * as React from 'react';
import { IconButton, Tooltip } from '@mui/joy';
import { Tooltip } from '@mui/joy';
import { Brand } from '~/common/app.config';
import { CodePenIcon } from '~/common/components/icons/3rdparty/CodePenIcon';
import { prettyTimestampForFilenames } from '~/common/util/timeUtils';
import { OverlayButton } from './RenderCode';
// CodePen is a web-based HTML, CSS, and JavaScript code editor
const _languages = ['html', 'css', 'javascript', 'json', 'typescript'];
@@ -48,9 +50,9 @@ const handleOpenInCodePen = (code: string, language: string) => {
export function ButtonCodePen(props: { code: string, language: string }): React.JSX.Element {
return (
<Tooltip title='Open in CodePen' variant='solid'>
<IconButton variant='outlined' color='neutral' onClick={() => handleOpenInCodePen(props.code, props.language)}>
<OverlayButton variant='outlined' onClick={() => handleOpenInCodePen(props.code, props.language)}>
<CodePenIcon />
</IconButton>
</OverlayButton>
</Tooltip>
);
}
+5 -3
View File
@@ -1,6 +1,8 @@
import * as React from 'react';
import { IconButton, Tooltip } from '@mui/joy';
import { Tooltip } from '@mui/joy';
import { OverlayButton } from './RenderCode';
// JSFiddle is a web-based HTML, CSS, and JavaScript code editor
@@ -49,9 +51,9 @@ const handleOpenInJsFiddle = (code: string, language: string) => {
export function ButtonJsFiddle(props: { code: string, language: string }): React.JSX.Element {
return (
<Tooltip title='Open in JSFiddle' variant='solid'>
<IconButton onClick={() => handleOpenInJsFiddle(props.code, props.language)}>
<OverlayButton variant='outlined' onClick={() => handleOpenInJsFiddle(props.code, props.language)}>
JS
</IconButton>
</OverlayButton>
</Tooltip>
);
}
+5 -3
View File
@@ -1,11 +1,13 @@
import * as React from 'react';
import { IconButton, Tooltip } from '@mui/joy';
import { Tooltip } from '@mui/joy';
import { Brand } from '~/common/app.config';
import { StackBlitzIcon } from '~/common/components/icons/3rdparty/StackBlitzIcon';
import { prettyTimestampForFilenames } from '~/common/util/timeUtils';
import { OverlayButton } from './RenderCode';
const _languages = [
'typescript',
@@ -77,9 +79,9 @@ const handleOpenInStackBlitz = (code: string, language: string, title?: string)
export function ButtonStackBlitz(props: { code: string, language: string, title?: string }): React.JSX.Element {
return (
<Tooltip title='Open in StackBlitz' variant='solid'>
<IconButton variant='outlined' color='neutral' onClick={() => handleOpenInStackBlitz(props.code, props.language, props.title)}>
<OverlayButton variant='outlined' onClick={() => handleOpenInStackBlitz(props.code, props.language, props.title)}>
<StackBlitzIcon />
</IconButton>
</OverlayButton>
</Tooltip>
);
}
+70 -35
View File
@@ -2,12 +2,13 @@ import * as React from 'react';
import { useQuery } from '@tanstack/react-query';
import type { SxProps } from '@mui/joy/styles/types';
import { Box, ButtonGroup, IconButton, Sheet, Tooltip, Typography } from '@mui/joy';
import { Box, ButtonGroup, IconButton, Sheet, styled, Tooltip, Typography } from '@mui/joy';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
import FitScreenIcon from '@mui/icons-material/FitScreen';
import HtmlIcon from '@mui/icons-material/Html';
import SchemaIcon from '@mui/icons-material/Schema';
import ShapeLineOutlinedIcon from '@mui/icons-material/ShapeLineOutlined';
import WrapTextIcon from '@mui/icons-material/WrapText';
import { copyToClipboard } from '~/common/util/clipboardUtils';
import { frontendSideFetch } from '~/common/util/clientFetchers';
@@ -19,11 +20,13 @@ import { ButtonStackBlitz, isStackBlitzSupported } from './ButtonStackBlitz';
import { heuristicIsHtml, IFrameComponent } from '../RenderHtml';
import { patchSvgString, RenderCodeMermaid } from './RenderCodeMermaid';
export function getPlantUmlServerUrl(): string {
// set at nextjs build time
return process.env.NEXT_PUBLIC_PLANTUML_SERVER_URL || 'https://www.plantuml.com/plantuml/svg/';
}
async function fetchPlantUmlSvg(plantUmlCode: string): Promise<string | null> {
// Get the PlantUML server from inline env var
let plantUmlServerUrl = getPlantUmlServerUrl();
@@ -64,13 +67,19 @@ async function fetchPlantUmlSvg(plantUmlCode: string): Promise<string | null> {
}
export const OverlayButton = styled(IconButton)(({ theme, variant }) => ({
backgroundColor: variant === 'outlined' ? theme.palette.background.surface : undefined,
'--Icon-fontSize': theme.fontSize.lg,
})) as typeof IconButton;
export const overlayButtonsSx: SxProps = {
position: 'absolute', top: 0, right: 0, zIndex: 2, /* top of message and its chips */
display: 'flex', flexDirection: 'row', gap: 1,
opacity: 0, transition: 'opacity 0.2s cubic-bezier(.17,.84,.44,1)',
// buttongroup: background
'& > div > button': {
backgroundColor: 'background.surface',
// backgroundColor: 'background.surface',
// backdropFilter: 'blur(12px)',
},
};
@@ -98,6 +107,7 @@ function RenderCodeImpl(props: RenderCodeImplProps) {
const [showMermaid, setShowMermaid] = React.useState(true);
const [showPlantUML, setShowPlantUML] = React.useState(true);
const [showSVG, setShowSVG] = React.useState(true);
const [softWrap, setSoftWrap] = React.useState(false);
// derived props
const {
@@ -163,19 +173,25 @@ function RenderCodeImpl(props: RenderCodeImplProps) {
component='code'
className={`language-${inferredCodeLanguage || 'unknown'}`}
sx={{
whiteSpace: 'pre', // was 'break-spaces' before we implemented per-block scrolling
whiteSpace: softWrap ? 'break-spaces' : 'pre', // was 'break-spaces' before we implemented per-block scrolling
mx: 0, p: 1.5, // this block gets a thicker border
display: 'block',
display: 'flex',
flexDirection: 'column',
// justifyContent: (renderMermaid || renderPlantUML) ? 'center' : undefined,
overflowX: 'auto',
minWidth: 160,
'&:hover > .overlay-buttons': { opacity: 1 },
...(props.sx || {}),
// fix for SVG diagrams in dark mode: https://github.com/enricoros/big-AGI/issues/520
'[data-joy-color-scheme="dark"] &': (renderPlantUML || renderMermaid) ? {
backgroundColor: 'neutral.300',
} : {},
}}>
{/* Markdown Title (File/Type) */}
{blockTitle != inferredCodeLanguage && (blockTitle.includes('.') || blockTitle.includes('://')) && (
<Sheet sx={{ boxShadow: 'sm', borderRadius: 'sm', mb: 1 }}>
<Typography level='title-sm' sx={{ px: 1, py: 0.5 }}>
<Sheet sx={{ backgroundColor: 'background.surface', boxShadow: 'xs', borderRadius: 'xs', m: -0.5, mb: 1.5 }}>
<Typography level='body-sm' sx={{ px: 1, py: 0.5, color: 'text.primary' }}>
{blockTitle}
{/*{inferredCodeLanguage}*/}
</Typography>
@@ -198,59 +214,78 @@ function RenderCodeImpl(props: RenderCodeImplProps) {
}}
sx={{
...(renderSVG ? { lineHeight: 0 } : {}),
...(renderPlantUML ? { textAlign: 'center' } : {}),
...(renderPlantUML ? { textAlign: 'center', mx: 'auto' } : {}),
}}
/>}
{/* Buttons */}
<Box className='overlay-buttons' sx={{ ...overlayButtonsSx, p: 0.5 }}>
{/* Show HTML */}
{isHTML && (
<Tooltip title={optimizeLightweight ? null : renderHTML ? 'Hide' : 'Show Web Page'}>
<IconButton variant={renderHTML ? 'solid' : 'soft'} color='danger' onClick={() => setShowHTML(!showHTML)}>
<HtmlIcon />
</IconButton>
</Tooltip>
)}
{isMermaid && (
<Tooltip title={optimizeLightweight ? null : renderMermaid ? 'Show Code' : 'Render Mermaid'}>
<IconButton variant={renderMermaid ? 'solid' : 'soft'} onClick={() => setShowMermaid(!showMermaid)}>
<SchemaIcon />
</IconButton>
</Tooltip>
)}
{isPlantUML && (
<Tooltip title={optimizeLightweight ? null : renderPlantUML ? 'Show Code' : 'Render PlantUML'}>
<IconButton variant={renderPlantUML ? 'solid' : 'soft'} onClick={() => setShowPlantUML(!showPlantUML)}>
<SchemaIcon />
</IconButton>
<OverlayButton variant={renderHTML ? 'solid' : 'outlined'} color='danger' onClick={() => setShowHTML(!showHTML)}>
<HtmlIcon sx={{ fontSize: 'xl2' }} />
</OverlayButton>
</Tooltip>
)}
{/* Show SVG */}
{isSVG && (
<Tooltip title={optimizeLightweight ? null : renderSVG ? 'Show Code' : 'Render SVG'}>
<IconButton variant={renderSVG ? 'solid' : 'soft'} onClick={() => setShowSVG(!showSVG)}>
<OverlayButton variant={renderSVG ? 'solid' : 'outlined'} onClick={() => setShowSVG(!showSVG)}>
<ShapeLineOutlinedIcon />
</IconButton>
</OverlayButton>
</Tooltip>
)}
{((isMermaid && showMermaid) || (isPlantUML && showPlantUML && !plantUmlError) || (isSVG && showSVG && canScaleSVG)) && (
<Tooltip title={optimizeLightweight ? null : fitScreen ? 'Original Size' : 'Fit Screen'}>
<IconButton variant={fitScreen ? 'solid' : 'soft'} onClick={() => setFitScreen(on => !on)}>
<FitScreenIcon />
</IconButton>
</Tooltip>
{/* Show Diagrams */}
{(isMermaid || isPlantUML) && (
<ButtonGroup aria-label='Diagram'>
{/* Toggle rendering */}
<Tooltip title={optimizeLightweight ? null : (renderMermaid || renderPlantUML) ? 'Show Code' : 'Render Mermaid'}>
<OverlayButton variant={(renderMermaid || renderPlantUML) ? 'solid' : 'outlined'} onClick={() => {
if (isMermaid) setShowMermaid(on => !on);
if (isPlantUML) setShowPlantUML(on => !on);
}}>
<SchemaIcon />
</OverlayButton>
</Tooltip>
{/* Fit-To-Screen */}
{((isMermaid && showMermaid) || (isPlantUML && showPlantUML && !plantUmlError) || (isSVG && showSVG && canScaleSVG)) && (
<Tooltip title={optimizeLightweight ? null : fitScreen ? 'Original Size' : 'Fit Screen'}>
<OverlayButton variant={fitScreen ? 'solid' : 'outlined'} onClick={() => setFitScreen(on => !on)}>
<FitScreenIcon />
</OverlayButton>
</Tooltip>
)}
</ButtonGroup>
)}
{/* New Code Window Buttons */}
{(canJSFiddle || canCodePen || canStackBlitz) && (
<ButtonGroup aria-label='Open code in external editors' sx={{ cornerRadius: 'md' }}>
<ButtonGroup aria-label='Open code in external editors'>
{canJSFiddle && <ButtonJsFiddle code={blockCode} language={inferredCodeLanguage!} />}
{canCodePen && <ButtonCodePen code={blockCode} language={inferredCodeLanguage!} />}
{canStackBlitz && <ButtonStackBlitz code={blockCode} title={blockTitle} language={inferredCodeLanguage!} />}
</ButtonGroup>
)}
{/* Soft Wrap toggle */}
{(!renderHTML && !renderMermaid && !renderPlantUML && !renderSVG) && (
<Tooltip title='Toggle Soft Wrap'>
<OverlayButton variant={softWrap ? 'solid' : 'outlined'} onClick={() => setSoftWrap(on => !on)}>
<WrapTextIcon />
</OverlayButton>
</Tooltip>
)}
{/* Copy */}
{props.noCopyButton !== true && (
<Tooltip title={optimizeLightweight ? null : 'Copy Code'}>
<IconButton variant='soft' onClick={handleCopyToClipboard}>
<OverlayButton variant='outlined' onClick={handleCopyToClipboard}>
<ContentCopyIcon />
</IconButton>
</OverlayButton>
</Tooltip>
)}
</Box>
@@ -158,6 +158,7 @@ export function RenderCodeMermaid(props: { mermaidCode: string, fitScreen: boole
component='div'
ref={mermaidContainerRef}
dangerouslySetInnerHTML={{ __html: patchSvgString(props.fitScreen, _svgCode) || 'Loading Diagram...' }}
style={{ marginInline: 'auto' }}
/>
);
@@ -2,7 +2,9 @@ import * as React from 'react';
import { CSVLink } from 'react-csv';
import { default as ReactMarkdown } from 'react-markdown';
import { default as rehypeKatex } from 'rehype-katex';
import { default as remarkGfm } from 'remark-gfm';
import { default as remarkMath } from 'remark-math';
import { Button } from '@mui/joy';
import DownloadIcon from '@mui/icons-material/Download';
@@ -98,16 +100,33 @@ const LinkRenderer = ({ children, node, ...props }: LinkRendererProps) => (
const reactMarkdownComponents = {
a: LinkRenderer, // override the link renderer to add target="_blank"
table: TableRenderer, // override the table renderer to show the download CSV links
// math/inlineMath components are not needed, rehype-katex handles this automatically
};
// Custom plugins: GFM (GitHub Flavored Markdown)
/*
* Convert OpenAI-style markdown with LaTeX to 'remark-math' compatible format.
* Note that inline or block will both be converted to $$...$$ format, and we
* disable on purpose the single dollar sign for inline math, as it can clash
* with other markdown syntax.
*/
const preprocessMarkdown = (markdownText: string) => markdownText
.replace(/\s\\\((.*?)\\\)/gs, (_match, p1) => ` $$${p1}$$`) // Replace inline LaTeX delimiters \( and \) with $$
.replace(/\s\\\[(.*?)\\]/gs, (_match, p1) => ` $$${p1}$$`); // Replace block LaTeX delimiters \[ and \] with $$
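// Editor's example (not part of the diff) of the delimiter rewrite performed above, shown on raw text:
//   "Euler: \(e^{i\pi} + 1 = 0\)"  ->  "Euler: $$e^{i\pi} + 1 = 0$$"
//   "Proof: \[a^2 + b^2 = c^2\]"   ->  "Proof: $$a^2 + b^2 = c^2$$"
// Single-dollar inline math is deliberately left untouched (singleDollarTextMath: false below).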
const remarkPlugins = [
remarkGfm,
];
export default function CustomMarkdownRenderer(props: any) {
return <ReactMarkdown components={reactMarkdownComponents} remarkPlugins={remarkPlugins} {...props} />;
export default function CustomMarkdownRenderer(props: { content: string }) {
return (
<ReactMarkdown
components={reactMarkdownComponents as any}
remarkPlugins={[
remarkGfm, // GitHub Flavored Markdown
[remarkMath, { singleDollarTextMath: false }], // Math
]}
rehypePlugins={[
rehypeKatex, // KaTeX
]}
>
{preprocessMarkdown(props.content)}
</ReactMarkdown>
);
}
@@ -32,9 +32,7 @@ export function RenderMarkdown(props: { textBlock: TextBlock; sx?: SxProps; }) {
sx={props.sx}
>
<React.Suspense fallback={<div>Loading...</div>}>
<DynamicMarkdownRenderer>
{props.textBlock.content}
</DynamicMarkdownRenderer>
<DynamicMarkdownRenderer content={props.textBlock.content} />
</React.Suspense>
</RenderMarkdownBox>
);
+31
View File
@@ -0,0 +1,31 @@
import type { TRPCClientErrorBase } from '@trpc/client';
import { useQuery } from '@tanstack/react-query';
import type { ModelDescriptionSchema } from './server/llm.server.types';
import type { DModelSource } from './store-llms';
import { llmsUpdateModelsForSourceOrThrow } from './llm.client';
/**
* Hook that fetches the list of models from the vendor and updates the store,
* while returning the fetch state.
*/
export function useLlmUpdateModels<TSourceSetup>(
enabled: boolean,
source: DModelSource<TSourceSetup>,
keepUserEdits?: boolean,
): {
isFetching: boolean,
refetch: () => void,
isError: boolean,
error: TRPCClientErrorBase<any> | null
} {
return useQuery<{ models: ModelDescriptionSchema[] }, TRPCClientErrorBase<any> | null>({
enabled: enabled && !!source,
queryKey: ['list-models', source.id],
queryFn: async () => await llmsUpdateModelsForSourceOrThrow(source.id, keepUserEdits === true),
staleTime: Infinity,
});
}
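A hypothetical consumer of the hook above, sketched by the editor; the component, the enabling condition, and the Button wiring are illustrative and not taken from the diff:
// function MyVendorModelsRefresh(props: { source: DModelSource<MySetup> }) {
//   const { isFetching, refetch, isError } = useLlmUpdateModels(!!props.source, props.source);
//   return <Button loading={isFetching} color={isError ? 'danger' : 'primary'} onClick={() => refetch()}>Models</Button>;
// }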
+82 -2
View File
@@ -1,7 +1,13 @@
import type { DLLMId } from './store-llms';
import { sendGAEvent } from '@next/third-parties/google';
import { hasGoogleAnalytics } from '~/common/components/GoogleAnalytics';
import type { ModelDescriptionSchema } from './server/llm.server.types';
import type { OpenAIWire } from './server/openai/openai.wiretypes';
import type { StreamingClientUpdate } from './vendors/unifiedStreamingClient';
import { findVendorForLlmOrThrow } from './vendors/vendors.registry';
import { DLLM, DLLMId, DModelSource, DModelSourceId, LLM_IF_OAI_Chat, useModelsStore } from './store-llms';
import { FALLBACK_LLM_TEMPERATURE } from './vendors/openai/openai.vendor';
import { findAccessForSourceOrThrow, findVendorForLlmOrThrow } from './vendors/vendors.registry';
// LLM Client Types
@@ -29,6 +35,80 @@ export interface VChatMessageOrFunctionCallOut extends VChatMessageOut {
// LLM Client Functions
export async function llmsUpdateModelsForSourceOrThrow(sourceId: DModelSourceId, keepUserEdits: boolean): Promise<{ models: ModelDescriptionSchema[] }> {
// get the access, assuming there's no client config and the server will do it all
const { source, vendor, transportAccess } = findAccessForSourceOrThrow(sourceId);
// fetch models
const data = await vendor.rpcUpdateModelsOrThrow(transportAccess);
// update the global models store
useModelsStore.getState().setLLMs(
data.models.map(model => modelDescriptionToDLLMOpenAIOptions(model, source)),
source.id,
true,
keepUserEdits,
);
// figure out which vendors are actually used and useful
hasGoogleAnalytics && sendGAEvent('event', 'app_models_updated', {
app_models_source_id: source.id,
app_models_source_label: source.label,
app_models_updated_count: data.models.length || 0,
app_models_vendor_id: vendor.id,
app_models_vendor_label: vendor.name,
});
// return the fetched models
return data;
}
function modelDescriptionToDLLMOpenAIOptions<TSourceSetup, TLLMOptions>(model: ModelDescriptionSchema, source: DModelSource<TSourceSetup>): DLLM<TSourceSetup, TLLMOptions> {
// null means unknown context/output tokens
const contextTokens = model.contextWindow || null;
const maxOutputTokens = model.maxCompletionTokens || (contextTokens ? Math.round(contextTokens / 2) : null);
const llmResponseTokensRatio = model.maxCompletionTokens ? 1 / 2 : 1 / 4;
const llmResponseTokens = maxOutputTokens ? Math.round(maxOutputTokens * llmResponseTokensRatio) : null;
return {
id: `${source.id}-${model.id}`,
// editable properties
label: model.label,
created: model.created || 0,
updated: model.updated || 0,
description: model.description,
hidden: !!model.hidden,
// isEdited: false, // NOTE: this is set by the store on user edits
// hard properties
contextTokens,
maxOutputTokens,
trainingDataCutoff: model.trainingDataCutoff,
interfaces: model.interfaces?.length ? model.interfaces : [LLM_IF_OAI_Chat],
// inputTypes: ...
benchmark: model.benchmark,
pricing: model.pricing,
// derived properties
tmpIsFree: model.pricing?.chatIn === 0 && model.pricing?.chatOut === 0,
tmpIsVision: model.interfaces?.includes(LLM_IF_OAI_Chat) === true,
sId: source.id,
_source: source,
options: {
llmRef: model.id,
// @ts-ignore FIXME: large assumption that this is LLMOptionsOpenAI object
llmTemperature: FALLBACK_LLM_TEMPERATURE,
llmResponseTokens,
},
};
}
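// Editor's worked example (not part of the diff) for the token defaults above:
//   contextWindow 8192, no maxCompletionTokens   -> maxOutputTokens 4096, ratio 1/4, llmResponseTokens 1024
//   contextWindow 200000, maxCompletionTokens 4096 -> maxOutputTokens 4096, ratio 1/2, llmResponseTokens 2048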
export async function llmChatGenerateOrThrow<TSourceSetup = unknown, TAccess = unknown, TLLMOptions = unknown>(
llmId: DLLMId,
messages: VChatMessageIn[],
@@ -8,7 +8,6 @@ import VisibilityOffIcon from '@mui/icons-material/VisibilityOff';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { GoodModal } from '~/common/components/GoodModal';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { DLLMId, useModelsStore } from '../store-llms';
import { findVendorById } from '../vendors/vendors.registry';
@@ -123,20 +122,20 @@ export function LLMOptionsModal(props: { id: DLLMId, onClose: () => void }) {
<FormControl orientation='horizontal' sx={{ flexWrap: 'nowrap' }}>
<FormLabelStart title='Details' sx={{ minWidth: 80 }} onClick={() => setShowDetails(!showDetails)} />
{showDetails && <Box sx={{ display: 'flex', flexDirection: 'column', gap: 1 }}>
<Typography level='body-md'>
{llm.id}
</Typography>
{llm.isFree && <Typography level='body-xs'>
🎁 Free model - note: refresh models to check for updates in pricing
</Typography>}
{!!llm.description && <Typography level='body-xs'>
{!!llm.description && <Typography level='body-sm'>
{llm.description}
</Typography>}
{!!llm.tmpIsFree && <Typography level='body-xs'>
🎁 Free model - note: refresh models to check for updates in pricing
</Typography>}
<Typography level='body-xs'>
llm id: {llm.id}<br />
context tokens: <b>{llm.contextTokens ? llm.contextTokens.toLocaleString() : 'not provided'}</b>{` · `}
max output tokens: <b>{llm.maxOutputTokens ? llm.maxOutputTokens.toLocaleString() : 'not provided'}</b><br />
{!!llm.created && `created: ${(new Date(llm.created * 1000)).toLocaleString()} · `}
{!!llm.created && <>created: {(new Date(llm.created * 1000)).toLocaleString()}<br /></>}
{/*· tags: {llm.tags.join(', ')}*/}
{!!llm.pricing && <>pricing: $<b>{llm.pricing.chatIn || '(unk) '}</b>/M in, $<b>{llm.pricing.chatOut || '(unk) '}</b>/M out<br /></>}
{!!llm.benchmark && <>benchmark: <b>{llm.benchmark.cbaElo?.toLocaleString() || '(unk) '}</b> CBA Elo<br /></>}
config: {JSON.stringify(llm.options)}
</Typography>
</Box>}
@@ -78,48 +78,79 @@ export function ModelsSourceSelector(props: {
// vendor list items
const vendorItems = React.useMemo(() => findAllVendors()
.filter(v => !!v.instanceLimit)
.sort((a, b) => a.name.localeCompare(b.name))
.map(vendor => {
const sourceInstanceCount = modelSources.filter(source => source.vId === vendor.id).length;
const enabled = vendor.instanceLimit > sourceInstanceCount;
const backendCaps = getBackendCapabilities();
return {
vendor,
enabled,
component: (
<MenuItem key={vendor.id} disabled={!enabled} onClick={() => handleAddSourceFromVendor(vendor.id)}>
<ListItemDecorator>
{vendorIcon(vendor, vendorHasBackendCap(vendor, backendCaps))}
</ListItemDecorator>
{vendor.name}
const vendorComponents = React.useMemo(() => {
{/*{sourceInstanceCount > 0 && ` (added)`}*/}
// prepare the items
const vendorItems = findAllVendors()
.filter(v => !!v.instanceLimit)
.sort((a, b) => {
// sort first by 'cloud' on top (vs. 'local'), then by name
// if (a.location !== b.location)
// return a.location === 'cloud' ? -1 : 1;
return a.name.localeCompare(b.name);
})
.map(vendor => {
const sourceInstanceCount = modelSources.filter(source => source.vId === vendor.id).length;
const enabled = vendor.instanceLimit > sourceInstanceCount;
const backendCaps = getBackendCapabilities();
return {
vendor,
enabled,
component: (
<MenuItem key={vendor.id} disabled={!enabled} onClick={() => handleAddSourceFromVendor(vendor.id)}>
<ListItemDecorator>
{vendorIcon(vendor, vendorHasBackendCap(vendor, backendCaps))}
</ListItemDecorator>
{vendor.name}
{/* Free indication */}
{!!vendor.hasFreeModels && ` 🎁`}
{/*{sourceInstanceCount > 0 && ` (added)`}*/}
{/* Multiple instance hint */}
{vendor.instanceLimit > 1 && !!sourceInstanceCount && enabled && (
<Typography component='span' level='body-sm'>
#{sourceInstanceCount + 1}
{/*/{vendor.instanceLimit}*/}
</Typography>
)}
{/* Free indication */}
{/*{!!vendor.hasFreeModels && ` 🎁`}*/}
{/* Local chip */}
{/*{vendor.location === 'local' && (*/}
{/* <Chip variant='solid' size='sm'>*/}
{/* local*/}
{/* </Chip>*/}
{/*)}*/}
</MenuItem>
),
};
},
), [handleAddSourceFromVendor, modelSources]);
{/* Multiple instance hint */}
{vendor.instanceLimit > 1 && !!sourceInstanceCount && enabled && (
<Typography component='span' level='body-sm'>
#{sourceInstanceCount + 1}
{/*/{vendor.instanceLimit}*/}
</Typography>
)}
{/* Local chip */}
{/*{vendor.location === 'local' && (*/}
{/* <Chip variant='solid' size='sm'>*/}
{/* local*/}
{/* </Chip>*/}
{/*)}*/}
</MenuItem>
),
};
},
);
// prepend headers
// const components: React.ReactNode[] = [];
// let lastLocation: 'cloud' | 'local' | null = null;
// vendorItems.forEach(item => {
// if (item.vendor.location !== lastLocation) {
// lastLocation = item.vendor.location;
// components.push(
// <Typography key={lastLocation} level='body-xs' sx={{
// color: 'text.tertiary',
// mx: 1.5,
// mt: 1,
// mb: 1,
// }}>
// {lastLocation === 'cloud' ? 'Cloud Services' : 'Local Services'}
// </Typography>,
// );
// }
// components.push(item.component);
// });
// return components;
return vendorItems.map(item => item.component);
}, [handleAddSourceFromVendor, modelSources]);
// source items
const sourceItems = React.useMemo(() => modelSources
@@ -186,9 +217,9 @@ export function ModelsSourceSelector(props: {
<CloseableMenu
placement='bottom-start' zIndex={themeZIndexOverMobileDrawer}
open={!!vendorsMenuAnchor} anchorEl={vendorsMenuAnchor} onClose={closeVendorsMenu}
sx={{ minWidth: 220 }}
sx={{ minWidth: 200 }}
>
{vendorItems.map(item => item.component)}
{vendorComponents}
</CloseableMenu>
{/* source delete confirmation */}
@@ -17,6 +17,7 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
trainingDataCutoff: 'Aug 2023',
interfaces: [LLM_IF_OAI_Chat, LLM_IF_OAI_Vision],
pricing: { chatIn: 15, chatOut: 75 },
benchmark: { cbaElo: 1256, cbaMmlu: 86.8 },
},
{
id: 'claude-3-sonnet-20240229',
@@ -28,6 +29,7 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
trainingDataCutoff: 'Aug 2023',
interfaces: [LLM_IF_OAI_Chat, LLM_IF_OAI_Vision],
pricing: { chatIn: 3, chatOut: 15 },
benchmark: { cbaElo: 1203, cbaMmlu: 79 },
},
{
id: 'claude-3-haiku-20240307',
@@ -39,6 +41,7 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
trainingDataCutoff: 'Aug 2023',
interfaces: [LLM_IF_OAI_Chat, LLM_IF_OAI_Vision],
pricing: { chatIn: 0.25, chatOut: 1.25 },
benchmark: { cbaElo: 1181, cbaMmlu: 75.2 },
},
// Claude 2 models
@@ -51,6 +54,7 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
maxCompletionTokens: 4096,
interfaces: [LLM_IF_OAI_Chat],
pricing: { chatIn: 8, chatOut: 24 },
benchmark: { cbaElo: 1119 },
},
{
id: 'claude-2.0',
@@ -61,6 +65,7 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
maxCompletionTokens: 4096,
interfaces: [LLM_IF_OAI_Chat],
pricing: { chatIn: 8, chatOut: 24 },
benchmark: { cbaElo: 1131, cbaMmlu: 78.5 },
hidden: true,
},
{
@@ -70,8 +75,8 @@ export const hardcodedAnthropicModels: ModelDescriptionSchema[] = [
description: 'Low-latency, high throughput model',
contextWindow: 100000,
maxCompletionTokens: 4096,
pricing: { chatIn: 0.8, chatOut: 2.4 },
interfaces: [LLM_IF_OAI_Chat],
pricing: { chatIn: 0.8, chatOut: 2.4 },
},
{
id: 'claude-instant-1.1',
@@ -83,6 +83,17 @@ export function anthropicMessagesPayloadOrThrow(model: OpenAIModelSchema, histor
const anthropicRole = historyItem.role === 'assistant' ? 'assistant' : 'user';
if (index === 0 || anthropicRole !== lastMessage?.role) {
// Hack/Hotfix: if the first role is 'assistant', then prepend a user message, otherwise the API call will break;
// but what should we really do here?
if (index === 0 && anthropicRole === 'assistant') {
if (systemPrompt) {
// This stinks, as it will duplicate the system prompt; it's the best we can do for now for a better UX
acc.push({ role: 'user', content: [{ type: 'text', text: systemPrompt }] });
} else
throw new Error('The first message in the chat history must be a user message and not an assistant message.');
}
// Add a new message object if the role is different from the previous message
acc.push({
role: anthropicRole,
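To illustrate the behavior of the fix above, a small worked example (values are illustrative only, not taken from this diff):
// systemPrompt = 'You are terse.'
// history      = [{ role: 'assistant', content: 'Hello!' }, { role: 'user', content: 'Hi' }]
// resulting Anthropic messages (user-first, alternating roles):
//   [
//     { role: 'user',      content: [{ type: 'text', text: 'You are terse.' }] },  // injected to satisfy the user-first rule
//     { role: 'assistant', content: [{ type: 'text', text: 'Hello!' }] },
//     { role: 'user',      content: [{ type: 'text', text: 'Hi' }] },
//   ]
// without a systemPrompt, the same history would throw instead, as the code above shows.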
@@ -61,9 +61,11 @@ export function geminiModelToModelDescription(geminiModel: GeminiModelSchema, al
description: descriptionLong,
contextWindow: contextWindow,
maxCompletionTokens: outputTokenLimit,
// pricing: isGeminiPro ? { needs per-character and per-image pricing } : undefined,
// rateLimits: isGeminiPro ? { reqPerMinute: 60 } : undefined,
// trainingDataCutoff: '...',
interfaces,
// rateLimits: isGeminiPro ? { reqPerMinute: 60 } : undefined,
// benchmarks: ...
// pricing: isGeminiPro ? { needs per-character and per-image pricing } : undefined,
hidden,
};
}
+104 -62
@@ -19,7 +19,7 @@ import { OLLAMA_PATH_CHAT, ollamaAccess, ollamaAccessSchema, ollamaChatCompletio
// OpenAI server imports
import type { OpenAIWire } from './openai/openai.wiretypes';
import { openAIAccess, openAIAccessSchema, openAIChatCompletionPayload, openAIHistorySchema, openAIModelSchema } from './openai/openai.router';
import { openAIAccess, openAIAccessSchema, openAIChatCompletionPayload, OpenAIHistorySchema, openAIHistorySchema, OpenAIModelSchema, openAIModelSchema } from './openai/openai.router';
// configuration
@@ -54,82 +54,62 @@ const chatStreamingInputSchema = z.object({
});
export type ChatStreamingInputSchema = z.infer<typeof chatStreamingInputSchema>;
// the purpose is to send something out even before the upstream stream starts, so that we keep the connection up
const chatStreamingStartOutputPacketSchema = z.object({
type: z.enum(['start']),
});
export type ChatStreamingPreambleStartSchema = z.infer<typeof chatStreamingStartOutputPacketSchema>;
// the purpose is to have a first packet that contains the model name, so that the client can display it
// this is a hack until we have a better streaming format
const chatStreamingFirstOutputPacketSchema = z.object({
model: z.string(),
});
export type ChatStreamingFirstOutputPacketSchema = z.infer<typeof chatStreamingFirstOutputPacketSchema>;
export type ChatStreamingPreambleModelSchema = z.infer<typeof chatStreamingFirstOutputPacketSchema>;
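Since the relay now emits these two preamble packets before any assistant text, a client has to peel them off the head of the stream. A minimal sketch of that, assuming both packets arrive within the first decoded chunk; the import path and function name are illustrative, not the project's actual client code:
import type { ChatStreamingPreambleModelSchema, ChatStreamingPreambleStartSchema } from './llm.streaming';  // hypothetical path
function consumePreambleSketch(firstChunk: string): { modelName?: string; text: string } {
  let rest = firstChunk;
  // 1. the keep-alive packet, sent as soon as the transform stream starts
  const startPacket: ChatStreamingPreambleStartSchema = { type: 'start' };
  const startJson = JSON.stringify(startPacket);
  if (rest.startsWith(startJson))
    rest = rest.slice(startJson.length);
  // 2. the model-name packet, prepended by the vendor parsers to the first content packet
  let modelName: string | undefined = undefined;
  if (rest.startsWith('{')) {
    const endOfJson = rest.indexOf('}') + 1;
    try {
      modelName = (JSON.parse(rest.slice(0, endOfJson)) as ChatStreamingPreambleModelSchema).model;
      rest = rest.slice(endOfJson);
    } catch {
      // not a preamble packet - leave the chunk untouched
    }
  }
  return { modelName, text: rest };
}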
export async function llmStreamingRelayHandler(req: NextRequest): Promise<Response> {
// inputs - reuse the tRPC schema
// Parse the request
const body = await req.json();
const { access, model, history } = chatStreamingInputSchema.parse(body);
const prettyDialect = serverCapitalizeFirstLetter(access.dialect);
// access/dialect dependent setup:
// - requestAccess: the headers and URL to use for the upstream API call
// - muxingFormat: the format of the event stream (sse or json-nl)
// - vendorStreamParser: the parser to use for the event stream
// Prepare the upstream API request and demuxer/parser
let requestData: ReturnType<typeof _prepareRequestData>;
try {
requestData = _prepareRequestData(access, model, history);
} catch (error: any) {
console.error(`[POST] /api/llms/stream: ${prettyDialect}: prepareRequestData issue:`, safeErrorString(error));
return new NextResponse(`**[Service Issue] ${prettyDialect}**: ${safeErrorString(error) || 'Unknown streaming error'}`, {
status: 422,
});
}
// Connect to the upstream (blocking)
let upstreamResponse: Response;
let requestAccess: { headers: HeadersInit, url: string } = { headers: {}, url: '' };
let muxingFormat: MuxingFormat = 'sse';
let vendorStreamParser: AIStreamParser;
try {
// prepare the API request data
let body: object;
switch (access.dialect) {
case 'anthropic':
requestAccess = anthropicAccess(access, '/v1/messages');
body = anthropicMessagesPayloadOrThrow(model, history, true);
vendorStreamParser = createStreamParserAnthropicMessages();
break;
case 'gemini':
requestAccess = geminiAccess(access, model.id, geminiModelsStreamGenerateContentPath);
body = geminiGenerateContentTextPayload(model, history, access.minSafetyLevel, 1);
vendorStreamParser = createStreamParserGemini(model.id.replace('models/', ''));
break;
case 'ollama':
requestAccess = ollamaAccess(access, OLLAMA_PATH_CHAT);
body = ollamaChatCompletionPayload(model, history, true);
muxingFormat = 'json-nl';
vendorStreamParser = createStreamParserOllama();
break;
case 'azure':
case 'groq':
case 'lmstudio':
case 'localai':
case 'mistral':
case 'oobabooga':
case 'openai':
case 'openrouter':
case 'perplexity':
case 'togetherai':
requestAccess = openAIAccess(access, model.id, '/v1/chat/completions');
body = openAIChatCompletionPayload(access.dialect, model, history, null, null, 1, true);
vendorStreamParser = createStreamParserOpenAI();
break;
}
if (SERVER_DEBUG_WIRE)
console.log('-> streaming:', debugGenerateCurlCommand('POST', requestAccess.url, requestAccess.headers, body));
console.log('-> streaming:', debugGenerateCurlCommand('POST', requestData.url, requestData.headers, requestData.body));
// POST to our API route
upstreamResponse = await nonTrpcServerFetchOrThrow(requestAccess.url, 'POST', requestAccess.headers, body);
// [MAY TIMEOUT] on Vercel Edge calls; this times out on long requests to Anthropic, on 2024-04-23.
// The solution would be to return a new response with a 200 status code, and then stream the data
// in a new request, but we would lose back-pressure and it complicates the logic.
upstreamResponse = await nonTrpcServerFetchOrThrow(requestData.url, 'POST', requestData.headers, requestData.body);
} catch (error: any) {
// server-side admins message
const capDialect = serverCapitalizeFirstLetter(access.dialect);
const fetchOrVendorError = safeErrorString(error) + (error?.cause ? ' · ' + JSON.stringify(error.cause) : '');
console.error(`[POST] /api/llms/stream: ${capDialect}: fetch issue:`, fetchOrVendorError, requestAccess?.url);
console.error(`[POST] /api/llms/stream: ${capDialect}: fetch issue:`, fetchOrVendorError, requestData?.url);
// client-side user-visible message
const statusCode = ((error instanceof ServerFetchError) && (error.statusCode >= 400)) ? error.statusCode : 422;
const devMessage = process.env.NODE_ENV === 'development' ? ` [DEV_URL: ${requestAccess?.url}]` : '';
const devMessage = process.env.NODE_ENV === 'development' ? ` [DEV_URL: ${requestData?.url}]` : '';
return new NextResponse(`**[Service Issue] ${capDialect}**: ${fetchOrVendorError}${devMessage}`, {
status: statusCode,
});
@@ -144,9 +124,10 @@ export async function llmStreamingRelayHandler(req: NextRequest): Promise<Respon
* NOTE: we have not benchmarked to see if there is a performance impact by using this approach - we do want to have
* a 'healthy' level of inventory (i.e., pre-buffering) on the pipe to the client.
*/
const transformUpstreamToBigAgiClient = createEventStreamTransformer(
muxingFormat, vendorStreamParser, access.dialect,
const transformUpstreamToBigAgiClient = createUpstreamTransformer(
requestData.vendorMuxingFormat, requestData.vendorStreamParser, access.dialect,
);
const chatResponseStream =
(upstreamResponse.body || createEmptyReadableStream())
.pipeThrough(transformUpstreamToBigAgiClient);
@@ -162,11 +143,16 @@ export async function llmStreamingRelayHandler(req: NextRequest): Promise<Respon
// Event Stream Transformers
/**
* The default demuxer for EventSource upstreams.
*/
const _createDemuxerEventSource: (onParse: EventSourceParseCallback) => EventSourceParser = createEventsourceParser;
/**
* Creates a parser for a 'JSON\n' non-event stream, to be swapped with an EventSource parser.
* Ollama is the only vendor that uses this format.
*/
function createDemuxerJsonNewline(onParse: EventSourceParseCallback): EventSourceParser {
function _createDemuxerJsonNewline(onParse: EventSourceParseCallback): EventSourceParser {
let accumulator: string = '';
return {
// feeds a new chunk to the parser - we accumulate in case of partial data, and only execute on full lines
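The diff truncates this function here; as a rough standalone sketch (not the project's exact implementation, and assuming the EventSourceParseCallback / EventSourceParser types come from the eventsource-parser package), the same accumulate-and-split-on-newline approach looks like:
import type { EventSourceParseCallback, EventSourceParser } from 'eventsource-parser';
function demuxJsonNewlineSketch(onParse: EventSourceParseCallback): EventSourceParser {
  let accumulator = '';
  return {
    feed: (chunk: string): void => {
      accumulator += chunk;
      const lines = accumulator.split('\n');
      accumulator = lines.pop() ?? '';          // keep the trailing partial line for the next chunk
      for (const line of lines)
        if (line.trim())
          onParse({ type: 'event', data: line });  // hand each complete JSON line to the stream parser
    },
    reset: (): void => {
      accumulator = '';
    },
  };
}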
@@ -197,7 +183,7 @@ function createDemuxerJsonNewline(onParse: EventSourceParseCallback): EventSourc
* Creates a TransformStream that parses events from an EventSource stream using a custom parser.
* @returns {TransformStream<Uint8Array, string>} TransformStream parsing events.
*/
function createEventStreamTransformer(muxingFormat: MuxingFormat, vendorTextParser: AIStreamParser, dialectLabel: string): TransformStream<Uint8Array, Uint8Array> {
function createUpstreamTransformer(muxingFormat: MuxingFormat, vendorTextParser: AIStreamParser, dialectLabel: string): TransformStream<Uint8Array, Uint8Array> {
const textDecoder = new TextDecoder();
const textEncoder = new TextEncoder();
let eventSourceParser: EventSourceParser;
@@ -206,6 +192,10 @@ function createEventStreamTransformer(muxingFormat: MuxingFormat, vendorTextPars
return new TransformStream({
start: async (controller): Promise<void> => {
// Send initial packet indicating the start of the stream
const startPacket: ChatStreamingPreambleStartSchema = { type: 'start' };
controller.enqueue(textEncoder.encode(JSON.stringify(startPacket)));
// only used for debugging
let debugLastMs: number | null = null;
@@ -242,9 +232,9 @@ function createEventStreamTransformer(muxingFormat: MuxingFormat, vendorTextPars
};
if (muxingFormat === 'sse')
eventSourceParser = createEventsourceParser(onNewEvent);
eventSourceParser = _createDemuxerEventSource(onNewEvent);
else if (muxingFormat === 'json-nl')
eventSourceParser = createDemuxerJsonNewline(onNewEvent);
eventSourceParser = _createDemuxerJsonNewline(onNewEvent);
},
// stream=true is set because the data is not guaranteed to be final and un-chunked
@@ -293,7 +283,7 @@ function createStreamParserAnthropicMessages(): AIStreamParser {
responseMessage = anthropicWireMessagesResponseSchema.parse(message);
// hack: prepend the model name to the first packet
if (firstMessage) {
const firstPacket: ChatStreamingFirstOutputPacketSchema = { model: responseMessage.model };
const firstPacket: ChatStreamingPreambleModelSchema = { model: responseMessage.model };
text = JSON.stringify(firstPacket);
}
break;
@@ -408,7 +398,7 @@ function createStreamParserGemini(modelName: string): AIStreamParser {
// hack: prepend the model name to the first packet
if (!hasBegun) {
hasBegun = true;
const firstPacket: ChatStreamingFirstOutputPacketSchema = { model: modelName };
const firstPacket: ChatStreamingPreambleModelSchema = { model: modelName };
text = JSON.stringify(firstPacket) + text;
}
@@ -444,7 +434,7 @@ function createStreamParserOllama(): AIStreamParser {
// hack: prepend the model name to the first packet
if (!hasBegun && chunk.model) {
hasBegun = true;
const firstPacket: ChatStreamingFirstOutputPacketSchema = { model: chunk.model };
const firstPacket: ChatStreamingPreambleModelSchema = { model: chunk.model };
text = JSON.stringify(firstPacket) + text;
}
@@ -485,7 +475,7 @@ function createStreamParserOpenAI(): AIStreamParser {
// hack: prepend the model name to the first packet
if (!hasBegun) {
hasBegun = true;
const firstPacket: ChatStreamingFirstOutputPacketSchema = { model: json.model };
const firstPacket: ChatStreamingPreambleModelSchema = { model: json.model };
text = JSON.stringify(firstPacket) + text;
}
@@ -493,4 +483,56 @@ function createStreamParserOpenAI(): AIStreamParser {
const close = !!json.choices[0].finish_reason;
return { text, close };
};
}
function _prepareRequestData(access: ChatStreamingInputSchema['access'], model: OpenAIModelSchema, history: OpenAIHistorySchema): {
headers: HeadersInit;
url: string;
body: object;
vendorMuxingFormat: MuxingFormat;
vendorStreamParser: AIStreamParser;
} {
switch (access.dialect) {
case 'anthropic':
return {
...anthropicAccess(access, '/v1/messages'),
body: anthropicMessagesPayloadOrThrow(model, history, true),
vendorMuxingFormat: 'sse',
vendorStreamParser: createStreamParserAnthropicMessages(),
};
case 'gemini':
return {
...geminiAccess(access, model.id, geminiModelsStreamGenerateContentPath),
body: geminiGenerateContentTextPayload(model, history, access.minSafetyLevel, 1),
vendorMuxingFormat: 'sse',
vendorStreamParser: createStreamParserGemini(model.id.replace('models/', '')),
};
case 'ollama':
return {
...ollamaAccess(access, OLLAMA_PATH_CHAT),
body: ollamaChatCompletionPayload(model, history, access.ollamaJson, true),
vendorMuxingFormat: 'json-nl',
vendorStreamParser: createStreamParserOllama(),
};
case 'azure':
case 'groq':
case 'lmstudio':
case 'localai':
case 'mistral':
case 'oobabooga':
case 'openai':
case 'openrouter':
case 'perplexity':
case 'togetherai':
return {
...openAIAccess(access, model.id, '/v1/chat/completions'),
body: openAIChatCompletionPayload(access.dialect, model, history, null, null, 1, true),
vendorMuxingFormat: 'sse',
vendorStreamParser: createStreamParserOpenAI(),
};
}
}
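As a reading aid (illustrative, not part of the diff): each branch spreads the vendor access object, which supplies the headers and url, and adds the request body, the stream muxing format, and the parser, so the handler above can destructure everything from one call:
// const { url, headers, body, vendorMuxingFormat, vendorStreamParser } =
//   _prepareRequestData(access, model, history);
// // e.g. for the 'ollama' dialect: vendorMuxingFormat === 'json-nl', and the body
// // honors the access.ollamaJson JSON-mode toggle shown above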
+11 -1
@@ -9,10 +9,18 @@ const pricingSchema = z.object({
chatOut: z.number().optional(), // Cost per Million output tokens
});
const benchmarkSchema = z.object({
cbaElo: z.number().optional(),
cbaMmlu: z.number().optional(),
});
// const rateLimitsSchema = z.object({
// reqPerMinute: z.number().optional(),
// });
const interfaceSchema = z.enum([LLM_IF_OAI_Chat, LLM_IF_OAI_Fn, LLM_IF_OAI_Complete, LLM_IF_OAI_Vision, LLM_IF_OAI_Json]);
// NOTE: update the `fromManualMapping` function if you add new fields
const modelDescriptionSchema = z.object({
id: z.string(),
label: z.string(),
@@ -23,9 +31,11 @@ const modelDescriptionSchema = z.object({
maxCompletionTokens: z.number().optional(),
// rateLimits: rateLimitsSchema.optional(),
trainingDataCutoff: z.string().optional(),
interfaces: z.array(z.enum([LLM_IF_OAI_Chat, LLM_IF_OAI_Fn, LLM_IF_OAI_Complete, LLM_IF_OAI_Vision, LLM_IF_OAI_Json])),
interfaces: z.array(interfaceSchema),
benchmark: benchmarkSchema.optional(),
pricing: pricingSchema.optional(),
hidden: z.boolean().optional(),
// TODO: add inputTypes/Kinds..
});
// this is also used by the Client
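As a small illustration of the two optional blocks the schema now carries (values copied from the Claude 3 Haiku entry earlier in this compare; the variable names are illustrative):
const haikuBenchmark: z.infer<typeof benchmarkSchema> = { cbaElo: 1181, cbaMmlu: 75.2 };  // Chatbot Arena Elo / MMLU
const haikuPricing: z.infer<typeof pricingSchema> = { chatIn: 0.25, chatOut: 1.25 };      // $ per million tokens, in / out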
+87 -68
@@ -12,73 +12,92 @@
>>>
*
* from: https://ollama.ai/library?sort=featured
* Note: the default contextWindow in code is 8192, so we do not redefine that
*/
export const OLLAMA_BASE_MODELS: { [key: string]: { description: string, pulls: number, added?: string } } = {
'llama2': { description: 'The most popular model for general use.', pulls: 259800 },
'mistral': { description: 'The 7B model released by Mistral AI, updated to version 0.2', pulls: 165500 },
'llava': { description: '🌋 A novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding.', pulls: 17000, added: '20231215' },
'mixtral': { description: 'A high-quality Mixture of Experts (MoE) model with open weights by Mistral AI.', pulls: 36700, added: '20231215' },
'starling-lm': { description: 'Starling is a large language model trained by reinforcement learning from AI feedback focused on improving chatbot helpfulness.', pulls: 6569, added: '20231129' },
'neural-chat': { description: 'A fine-tuned model based on Mistral with good coverage of domain and language.', pulls: 8164, added: '20231129' },
'codellama': { description: 'A large language model that can use text prompts to generate and discuss code.', pulls: 111100 },
'dolphin-mixtral': { description: 'An uncensored, fine-tuned model based on the Mixtral mixture of experts model that excels at coding tasks. Created by Eric Hartford.', pulls: 94800, added: '20231215' },
'mistral-openorca': { description: 'Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.', pulls: 87300 },
'llama2-uncensored': { description: 'Uncensored Llama 2 model by George Sung and Jarrad Hope.', pulls: 54500 },
'orca-mini': { description: 'A general-purpose model ranging from 3 billion parameters to 70 billion, suitable for entry-level hardware.', pulls: 40300 },
'vicuna': { description: 'General use chat model based on Llama and Llama 2 with 2K to 16K context sizes.', pulls: 25200 },
'wizard-vicuna-uncensored': { description: 'Wizard Vicuna Uncensored is a 7B, 13B, and 30B parameter model based on Llama 2 uncensored by Eric Hartford.', pulls: 21900 },
'deepseek-coder': { description: 'DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.', pulls: 21100, added: '20231129' },
'phi': { description: 'Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.', pulls: 18100, added: '20231220' },
'dolphin-mistral': { description: 'The uncensored Dolphin model based on Mistral that excels at coding tasks. Updated to version 2.6.', pulls: 17800, added: '20240126' },
'zephyr': { description: 'Zephyr beta is a fine-tuned 7B version of mistral that was trained on a mix of publicly available, synthetic datasets.', pulls: 16400 },
'wizardcoder': { description: 'State-of-the-art code generation model', pulls: 14300 },
'phind-codellama': { description: 'Code generation model based on Code Llama.', pulls: 13500 },
'openhermes': { description: 'OpenHermes 2.5 is a 7B model fine-tuned by Teknium on Mistral with fully open datasets.', pulls: 13000, added: '20240126' },
'llama2-chinese': { description: 'Llama 2 based model fine tuned to improve Chinese dialogue ability.', pulls: 12700 },
'orca2': { description: 'Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta\'s Llama 2 models. The model is designed to excel particularly in reasoning.', pulls: 10500, added: '20231129' },
'nous-hermes': { description: 'General use models based on Llama and Llama 2 from Nous Research.', pulls: 10100 },
'wizard-math': { description: 'Model focused on math and logic problems', pulls: 10100 },
'falcon': { description: 'A large language model built by the Technology Innovation Institute (TII) for use in summarization, text generation, and chat bots.', pulls: 9746 },
'openchat': { description: 'A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106.', pulls: 9089, added: '20231129' },
'codeup': { description: 'Great code generation model based on Llama2.', pulls: 7566 },
'tinyllama': { description: 'The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.', pulls: 6784, added: '20240126' },
'stable-beluga': { description: 'Llama 2 based model fine tuned on an Orca-style dataset. Originally called Free Willy.', pulls: 6702 },
'everythinglm': { description: 'Uncensored Llama2 based model with support for a 16K context window.', pulls: 6580 },
'medllama2': { description: 'Fine-tuned Llama 2 model to answer medical questions based on an open source medical dataset.', pulls: 6448 },
'starcoder': { description: 'StarCoder is a code generation model trained on 80+ programming languages.', pulls: 6273 },
'wizardlm-uncensored': { description: 'Uncensored version of Wizard LM model', pulls: 6241 },
'yi': { description: 'A high-performing, bilingual language model.', pulls: 5648 },
'dolphin-phi': { description: '2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.', pulls: 5427, added: '20240126' },
'bakllava': { description: 'BakLLaVA is a multimodal model consisting of the Mistral 7B base model augmented with the LLaVA architecture.', pulls: 5335, added: '20231215' },
'solar': { description: 'A compact, yet powerful 10.7B large language model designed for single-turn conversation.', pulls: 4817 },
'yarn-mistral': { description: 'An extension of Mistral to support context windows of 64K or 128K.', pulls: 4424 },
'wizard-vicuna': { description: 'Wizard Vicuna is a 13B parameter model based on Llama 2 trained by MelodysDreamj.', pulls: 4129 },
'samantha-mistral': { description: 'A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.', pulls: 3764 },
'sqlcoder': { description: 'SQLCoder is a code completion model fine-tuned on StarCoder for SQL generation tasks.', pulls: 3756 },
'meditron': { description: 'Open-source medical large language model adapted from Llama 2 to the medical domain.', pulls: 3481, added: '20231129' },
'stablelm-zephyr': { description: 'A lightweight chat model allowing accurate, and responsive output without requiring high-end hardware.', pulls: 3412, added: '20231210' },
'open-orca-platypus2': { description: 'Merge of the Open Orca OpenChat model and the Garage-bAInd Platypus 2 model. Designed for chat and code generation.', pulls: 3403 },
'yarn-llama2': { description: 'An extension of Llama 2 that supports a context of up to 128k tokens.', pulls: 3259 },
'magicoder': { description: '🎩 Magicoder is a family of 7B parameter models trained on 75K synthetic instruction data using OSS-Instruct, a novel approach to enlightening LLMs with open-source code snippets.', pulls: 3118, added: '20231220' },
'deepseek-llm': { description: 'An advanced language model crafted with 2 trillion bilingual tokens.', pulls: 3036, added: '20231129' },
'nous-hermes2': { description: 'The powerful family of models by Nous Research that excels at scientific discussion and coding tasks.', pulls: 2604, added: '20240126' },
'codebooga': { description: 'A high-performing code instruct model created by merging two existing code models.', pulls: 2495 },
'mistrallite': { description: 'MistralLite is a fine-tuned model based on Mistral with enhanced capabilities of processing long contexts.', pulls: 2399 },
'stable-code': { description: 'Stable Code 3B is a model offering accurate and responsive code completion at a level on par with models such as CodeLLaMA 7B that are 2.5x larger.', pulls: 2323, added: '20240126' },
'nous-hermes2-mixtral': { description: 'The Nous Hermes 2 model from Nous Research, now trained over Mixtral.', pulls: 2173, added: '20240126' },
'goliath': { description: 'A language model created by combining two fine-tuned Llama 2 70B models into one.', pulls: 2002, added: '20231129' },
'nexusraven': { description: 'Nexus Raven is a 13B instruction tuned model for function calling tasks.', pulls: 1882 },
'llama-pro': { description: 'An expansion of Llama 2 that specializes in integrating both general language understanding and domain-specific knowledge, particularly in programming and mathematics.', pulls: 1853, added: '20240126' },
'wizardlm': { description: 'General use 70 billion parameter model based on Llama 2.', pulls: 1703 },
'notux': { description: 'A top-performing mixture of experts model, fine-tuned with high-quality data.', pulls: 1564, added: '20240126' },
'alfred': { description: 'A robust conversational model designed to be used for both chat and instruct use cases.', pulls: 1461, added: '20231129' },
'xwinlm': { description: 'Conversational model based on Llama 2 that performs competitively on various benchmarks.', pulls: 1312 },
'megadolphin': { description: 'MegaDolphin-2.2-120b is a transformation of Dolphin-2.2-70b created by interleaving the model with itself.', pulls: 1115, added: '20240126' },
'qwen': { description: 'Qwen is a series of large language models by Alibaba Cloud spanning from 1.8B to 72B parameters', pulls: 1066, added: '20240126' },
'notus': { description: 'A 7B chat model fine-tuned with high-quality data and based on Zephyr.', pulls: 885, added: '20240126' },
'tinydolphin': { description: 'An experimental 1.1B parameter model trained on the new Dolphin 2.8 dataset by Eric Hartford and based on TinyLlama.', pulls: 735, added: '20240126' },
'stablelm2': { description: 'Stable LM 2 1.6B is a state-of-the-art 1.6 billion parameter small language model trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch.', pulls: 45, added: '20240126' },
'duckdb-nsql': { description: '7B parameter text-to-SQL model made by MotherDuck and Numbers Station.', pulls: 42, added: '20240126' },
export const OLLAMA_BASE_MODELS: { [key: string]: { description: string, pulls: number, added?: string, contextWindow?: number } } = {
'llama3': { description: 'Meta Llama 3: The most capable openly available LLM to date', pulls: 562300, added: '20240501' },
'phi3': { description: 'Phi-3 Mini is a 3.8B parameters, lightweight, state-of-the-art open model by Microsoft.', pulls: 61800, added: '20240501' },
'wizardlm2': { description: 'State of the art large language model from Microsoft AI with improved performance on complex chat, multilingual, reasoning and agent use cases.', pulls: 34400, added: '20240501' },
'mistral': { description: 'The 7B model released by Mistral AI, updated to version 0.2.', pulls: 682700 },
'gemma': { description: 'Gemma is a family of lightweight, state-of-the-art open models built by Google DeepMind. Updated to version 1.1', pulls: 1100000, added: '20240501' },
'mixtral': { description: 'A set of Mixture of Experts (MoE) models with open weights by Mistral AI in 8x7b and 8x22b parameter sizes.', pulls: 205300 },
'llama2': { description: 'Llama 2 is a collection of foundation language models ranging from 7B to 70B parameters.', pulls: 1400000 },
'codegemma': { description: 'CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks like fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following.', pulls: 35000, added: '20240501' },
'command-r': { description: 'Command R is a Large Language Model optimized for conversational interaction and long context tasks.', pulls: 28500, added: '20240501' },
'command-r-plus': { description: 'Command R+ is a powerful, scalable large language model purpose-built to excel at real-world enterprise use cases.', pulls: 23800, added: '20240501', contextWindow: 128000 },
'llava': { description: '🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.', pulls: 166600 },
'dbrx': { description: 'DBRX is an open, general-purpose LLM created by Databricks.', pulls: 4034, added: '20240501' },
'codellama': { description: 'A large language model that can use text prompts to generate and discuss code.', pulls: 381200 },
'qwen': { description: 'Qwen 1.5 is a series of large language models by Alibaba Cloud spanning from 0.5B to 110B parameters', pulls: 243800 },
'dolphin-mixtral': { description: 'Uncensored, 8x7b and 8x22b fine-tuned models based on the Mixtral mixture of experts models that excel at coding tasks. Created by Eric Hartford.', pulls: 210300 },
'llama2-uncensored': { description: 'Uncensored Llama 2 model by George Sung and Jarrad Hope.', pulls: 166900 },
'mistral-openorca': { description: 'Mistral OpenOrca is a 7 billion parameter model, fine-tuned on top of the Mistral 7B model using the OpenOrca dataset.', pulls: 120100 },
'deepseek-coder': { description: 'DeepSeek Coder is a capable coding model trained on two trillion code and natural language tokens.', pulls: 111700 },
'phi': { description: 'Phi-2: a 2.7B language model by Microsoft Research that demonstrates outstanding reasoning and language understanding capabilities.', pulls: 89700 },
'nomic-embed-text': { description: 'A high-performing open embedding model with a large token context window.', pulls: 83300, added: '20240501' },
'dolphin-mistral': { description: 'The uncensored Dolphin model based on Mistral that excels at coding tasks. Updated to version 2.8.', pulls: 79700 },
'orca-mini': { description: 'A general-purpose model ranging from 3 billion parameters to 70 billion, suitable for entry-level hardware.', pulls: 75900 },
'nous-hermes2': { description: 'The powerful family of models by Nous Research that excels at scientific discussion and coding tasks.', pulls: 74000 },
'zephyr': { description: 'Zephyr is a series of fine-tuned versions of the Mistral and Mixtral models that are trained to act as helpful assistants.', pulls: 53500 },
'llama2-chinese': { description: 'Llama 2 based model fine tuned to improve Chinese dialogue ability.', pulls: 53400 },
'wizard-vicuna-uncensored': { description: 'Wizard Vicuna Uncensored is a 7B, 13B, and 30B parameter model based on Llama 2 uncensored by Eric Hartford.', pulls: 49600 },
'openhermes': { description: 'OpenHermes 2.5 is a 7B model fine-tuned by Teknium on Mistral with fully open datasets.', pulls: 43400 },
'vicuna': { description: 'General use chat model based on Llama and Llama 2 with 2K to 16K context sizes.', pulls: 42100 },
'tinyllama': { description: 'The TinyLlama project is an open endeavor to train a compact 1.1B Llama model on 3 trillion tokens.', pulls: 39500 },
'starcoder2': { description: 'StarCoder2 is the next generation of transparently trained open code LLMs that comes in three sizes: 3B, 7B and 15B parameters.', pulls: 37600, added: '20240501' },
'tinydolphin': { description: 'An experimental 1.1B parameter model trained on the new Dolphin 2.8 dataset by Eric Hartford and based on TinyLlama.', pulls: 37600 },
'openchat': { description: 'A family of open-source models trained on a wide variety of data, surpassing ChatGPT on various benchmarks. Updated to version 3.5-0106.', pulls: 36300 },
'starcoder': { description: 'StarCoder is a code generation model trained on 80+ programming languages.', pulls: 31400 },
'stable-code': { description: 'Stable Code 3B is a coding model with instruct and code completion variants on par with models such as Code Llama 7B that are 2.5x larger.', pulls: 30900 },
'wizardcoder': { description: 'State-of-the-art code generation model', pulls: 30800 },
'neural-chat': { description: 'A fine-tuned model based on Mistral with good coverage of domain and language.', pulls: 25800 },
'yi': { description: 'A high-performing, bilingual language model.', pulls: 25600 },
'phind-codellama': { description: 'Code generation model based on Code Llama.', pulls: 23700 },
'starling-lm': { description: 'Starling is a large language model trained by reinforcement learning from AI feedback focused on improving chatbot helpfulness.', pulls: 22000 },
'wizard-math': { description: 'Model focused on math and logic problems', pulls: 21000 },
'mxbai-embed-large': { description: 'State-of-the-art large embedding model from mixedbread.ai', pulls: 20800, added: '20240501' },
'falcon': { description: 'A large language model built by the Technology Innovation Institute (TII) for use in summarization, text generation, and chat bots.', pulls: 20200 },
'orca2': { description: 'Orca 2 is built by Microsoft Research and is a fine-tuned version of Meta\'s Llama 2 models. The model is designed to excel particularly in reasoning.', pulls: 19900 },
'dolphin-phi': { description: '2.7B uncensored Dolphin model by Eric Hartford, based on the Phi language model by Microsoft Research.', pulls: 19700 },
'dolphin-llama3': { description: 'Dolphin 2.9 is a new model with 8B and 70B sizes by Eric Hartford based on Llama 3 that has a variety of instruction, conversational, and coding skills.', pulls: 19700, added: '20240501' },
'dolphincoder': { description: 'A 7B and 15B uncensored variant of the Dolphin model family that excels at coding, based on StarCoder2.', pulls: 17800, added: '20240501' },
'nous-hermes': { description: 'General use models based on Llama and Llama 2 from Nous Research.', pulls: 16700 },
'solar': { description: 'A compact, yet powerful 10.7B large language model designed for single-turn conversation.', pulls: 15200 },
'sqlcoder': { description: 'SQLCoder is a code completion model fine-tuned on StarCoder for SQL generation tasks.', pulls: 15200 },
'bakllava': { description: 'BakLLaVA is a multimodal model consisting of the Mistral 7B base model augmented with the LLaVA architecture.', pulls: 14600 },
'medllama2': { description: 'Fine-tuned Llama 2 model to answer medical questions based on an open source medical dataset.', pulls: 14200 },
'nous-hermes2-mixtral': { description: 'The Nous Hermes 2 model from Nous Research, now trained over Mixtral.', pulls: 13700 },
'wizardlm-uncensored': { description: 'Uncensored version of Wizard LM model', pulls: 13400 },
'stablelm2': { description: 'Stable LM 2 is a state-of-the-art 1.6B parameter language model trained on multilingual data in English, Spanish, German, Italian, French, Portuguese, and Dutch.', pulls: 12700 },
'codeup': { description: 'Great code generation model based on Llama2.', pulls: 12400 },
'all-minilm': { description: 'Embedding models on very large sentence level datasets.', pulls: 11700, added: '20240501' },
'everythinglm': { description: 'Uncensored Llama2 based model with support for a 16K context window.', pulls: 11700 },
'samantha-mistral': { description: 'A companion assistant trained in philosophy, psychology, and personal relationships. Based on Mistral.', pulls: 11000 },
'yarn-llama2': { description: 'An extension of Llama 2 that supports a context of up to 128k tokens.', pulls: 10600, contextWindow: 128000 },
'deepseek-llm': { description: 'An advanced language model crafted with 2 trillion bilingual tokens.', pulls: 10500 },
'stable-beluga': { description: 'Llama 2 based model fine tuned on an Orca-style dataset. Originally called Free Willy.', pulls: 10300 },
'yarn-mistral': { description: 'An extension of Mistral to support context windows of 64K or 128K.', pulls: 10200 },
'meditron': { description: 'Open-source medical large language model adapted from Llama 2 to the medical domain.', pulls: 9829 },
'codeqwen': { description: 'CodeQwen1.5 is a large language model pretrained on a large amount of code data.', pulls: 9367, added: '20240501' },
'llama-pro': { description: 'An expansion of Llama 2 that specializes in integrating both general language understanding and domain-specific knowledge, particularly in programming and mathematics.', pulls: 8978 },
'magicoder': { description: '🎩 Magicoder is a family of 7B parameter models trained on 75K synthetic instruction data using OSS-Instruct, a novel approach to enlightening LLMs with open-source code snippets.', pulls: 8434 },
'stablelm-zephyr': { description: 'A lightweight chat model allowing accurate, and responsive output without requiring high-end hardware.', pulls: 8387 },
'codebooga': { description: 'A high-performing code instruct model created by merging two existing code models.', pulls: 7863 },
'mistrallite': { description: 'MistralLite is a fine-tuned model based on Mistral with enhanced capabilities of processing long contexts.', pulls: 7351 },
'wizard-vicuna': { description: 'Wizard Vicuna is a 13B parameter model based on Llama 2 trained by MelodysDreamj.', pulls: 7089 },
'xwinlm': { description: 'Conversational model based on Llama 2 that performs competitively on various benchmarks.', pulls: 6971 },
'nexusraven': { description: 'Nexus Raven is a 13B instruction tuned model for function calling tasks.', pulls: 6819 },
'wizardlm': { description: 'General use model based on Llama 2.', pulls: 6358 },
'goliath': { description: 'A language model created by combining two fine-tuned Llama 2 70B models into one.', pulls: 5501 },
'open-orca-platypus2': { description: 'Merge of the Open Orca OpenChat model and the Garage-bAInd Platypus 2 model. Designed for chat and code generation.', pulls: 5252 },
'notux': { description: 'A top-performing mixture of experts model, fine-tuned with high-quality data.', pulls: 4780 },
'megadolphin': { description: 'MegaDolphin-2.2-120b is a transformation of Dolphin-2.2-70b created by interleaving the model with itself.', pulls: 4571 },
'duckdb-nsql': { description: '7B parameter text-to-SQL model made by MotherDuck and Numbers Station.', pulls: 4432 },
'alfred': { description: 'A robust conversational model designed to be used for both chat and instruct use cases.', pulls: 4042 },
'notus': { description: 'A 7B chat model fine-tuned with high-quality data and based on Zephyr.', pulls: 3836 },
'llama3-gradient': { description: 'This model extends LLama-3 8B\'s context length from 8k to over 1m tokens.', pulls: 3364, added: '20240501' },
'snowflake-arctic-embed': { description: 'A suite of text embedding models by Snowflake, optimized for performance.', pulls: 3345, added: '20240501' },
'moondream': { description: 'moondream is a small vision language model designed to run efficiently on edge devices.', pulls: 1553, added: '20240501' },
};
// export const OLLAMA_LAST_UPDATE: string = '20240126';
export const OLLAMA_PREV_UPDATE: string = '20231220';
// export const OLLAMA_LAST_UPDATE: string = '20240501';
export const OLLAMA_PREV_UPDATE: string = '20240126';
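A minimal sketch of how the optional per-model override might be consumed, given the 8192-token default noted in the header comment; the constant and function names below are assumptions, not from this diff:
const OLLAMA_DEFAULT_CONTEXT_WINDOW = 8192;  // assumed name for the default mentioned above
function ollamaContextWindow(modelId: string): number {
  // e.g. 'command-r-plus' and 'yarn-llama2' override to 128000; everything else falls back to the default
  return OLLAMA_BASE_MODELS[modelId]?.contextWindow ?? OLLAMA_DEFAULT_CONTEXT_WINDOW;
}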

Some files were not shown because too many files have changed in this diff.