Compare commits

649 Commits

Author SHA1 Message Date
Enrico Ros 5cc5df6909 1.7.0: Fix 2023-12-10 04:52:00 -08:00
Enrico Ros 11d8cf8996 Update GitHub docker action 2023-12-10 04:51:37 -08:00
Enrico Ros eae578970e 1.7.0: UpDate 2023-12-10 04:12:05 -08:00
Enrico Ros e076953c6a Merge branch 'release-1.7.0' 2023-12-10 04:08:29 -08:00
Enrico Ros 5c455591ea 1.7.0: Readme and Changelog 2023-12-10 04:06:50 -08:00
Enrico Ros 19b3dcd927 Update maintainers-release.md 2023-12-10 03:43:42 -08:00
Enrico Ros 702e27edbf Update deploy-authentication.md 2023-12-10 03:29:13 -08:00
Enrico Ros 7c872de9af Update deploy-authentication.md 2023-12-10 03:28:38 -08:00
Enrico Ros 53b18143e7 Update deploy-authentication.md 2023-12-10 03:27:49 -08:00
Enrico Ros d812813aac Update deploy-authentication.md 2023-12-10 03:27:09 -08:00
Enrico Ros 9505b7fd7f Update deploy-authentication.md 2023-12-10 03:26:27 -08:00
Enrico Ros 9e07822598 Update deploy-authentication.md 2023-12-10 03:26:02 -08:00
Enrico Ros 6d6604a043 Update maintainers-release.md 2023-12-10 03:10:29 -08:00
Enrico Ros 64d5071eb4 Update maintainers-release.md 2023-12-10 03:02:27 -08:00
Enrico Ros 4a29ff0b19 Update maintainers-release.md 2023-12-10 02:43:42 -08:00
Enrico Ros 6acab83ac5 1.7.0: Version 2023-12-10 02:28:54 -08:00
Enrico Ros a3391b46ec 1.7.0: News 2023-12-10 02:28:54 -08:00
Enrico Ros 9d021a0ea9 News: improve page 2023-12-10 01:58:15 -08:00
Enrico Ros 5b35435136 Removed stray page. #177 2023-12-10 01:56:48 -08:00
Enrico Ros 38b1cd1e4b Composer: premature optimizations 2023-12-10 01:47:37 -08:00
Enrico Ros 50e4bf30f2 Composer: more optimizations 2023-12-10 01:30:16 -08:00
Enrico Ros 6f8d6462b9 Composer: optimizations 2023-12-10 01:07:32 -08:00
Enrico Ros 596bb1ccc6 Readme: refer to http basic auth. #269 2023-12-10 00:20:21 -08:00
Enrico Ros 8023d4fd7e Improve HTTP Basic Auth docs. Improves #269 2023-12-10 00:17:34 -08:00
Enrico Ros 5808c5ae27 Merge branch 'LennardSchwarz-add-basic-auth' 2023-12-10 00:11:04 -08:00
Enrico Ros 0945bc1e74 Documented HTTP basic Auth. Fixes #269 2023-12-10 00:10:01 -08:00
Enrico Ros c82ea978da Improve Build/Deploy instructions 2023-12-09 23:05:56 -08:00
Enrico Ros 9184e28691 Merge branch 'add-basic-auth' of https://github.com/LennardSchwarz/lenn-big-agi into LennardSchwarz-add-basic-auth 2023-12-09 22:26:59 -08:00
Enrico Ros 59784af72c Browser: initial screenshot support 2023-12-08 04:45:43 -08:00
Enrico Ros 8feb1881b9 Merge branch 'feature-new-attachments'
Fixes #251
2023-12-08 04:45:26 -08:00
Enrico Ros 62747e07f1 Mic: greatly improve, with unmounting 2023-12-08 04:37:11 -08:00
Enrico Ros 934511a21f Mic: properly fix #221. The timeout was not reapplied. 2023-12-08 04:37:11 -08:00
Enrico Ros e36b71db9c Mic: Fix back on Desktop 2023-12-08 04:37:11 -08:00
Enrico Ros 924cd7018f Attachments: MultiPart-ready. Closes #251 for this stage. 2023-12-08 04:37:11 -08:00
Enrico Ros d5e91f9ce7 Optimize 2023-12-08 04:37:11 -08:00
Enrico Ros f1ad8cd55e Attachments: cleanups 2023-12-08 04:37:11 -08:00
Enrico Ros d177c73642 Attachments: Send! 2023-12-08 04:37:11 -08:00
Enrico Ros 011bcf8ccd Misc smaller improvements 2023-12-08 04:37:11 -08:00
Enrico Ros 7d0e5809e1 Misc cleanups 2023-12-08 04:37:11 -08:00
Enrico Ros b369148057 Attachments: Inlining: done. Use a hook that derives data from another hook. 2023-12-08 04:37:11 -08:00
Enrico Ros 2e0105b5ed Attachments: improvements and cleanups (still not attaching) 2023-12-08 04:37:11 -08:00
Enrico Ros 3f24ade8e6 Attachments: expire older parts 2023-12-08 04:37:11 -08:00
Enrico Ros 9cdaf26174 Attachments: remove Camera OCR (now common image OCR) 2023-12-08 04:37:11 -08:00
Enrico Ros 3b2c604615 Attachments: first inlining 2023-12-08 04:37:11 -08:00
Enrico Ros 223689316b Token Progress Bar: improve margins 2023-12-08 04:37:11 -08:00
Enrico Ros 6456a0de0c Token Progress Bar: disable Tooltip 2023-12-08 04:37:10 -08:00
Enrico Ros 57458fb32f Attachments: closer to ejection 2023-12-08 04:37:10 -08:00
Enrico Ros b2521060cc Attachments: cleanup Outputs 2023-12-08 04:37:10 -08:00
Enrico Ros 13b6a1ba7e Attachments: use ComposerOutputPart and cleanups 2023-12-08 04:37:10 -08:00
Enrico Ros ec81d802d5 Attachments: extract item menu 2023-12-08 04:37:10 -08:00
Enrico Ros f6eca257d6 Cleanup action group, slightly improves #258 2023-12-08 04:37:10 -08:00
Enrico Ros e744b1afcd Attachments: bits 2023-12-08 04:37:10 -08:00
Enrico Ros bfcae972f7 Attachments: cached token counting 2023-12-08 04:37:10 -08:00
Enrico Ros 360f886c37 Attachments: improve console log 2023-12-08 04:37:10 -08:00
Enrico Ros 305c278e1c Beauty: right align 2023-12-08 04:37:10 -08:00
Enrico Ros ccfcf6235f Beauty: by 2 pixels 2023-12-08 04:37:10 -08:00
Enrico Ros 62f7d92bb2 Beauty: token reporting 2023-12-08 04:37:10 -08:00
Enrico Ros f8915141c8 Attachments: major steps forward towards ejectability 2023-12-08 04:37:10 -08:00
Enrico Ros 7e1e4af19b Beauty: highlight user commands 2023-12-08 04:37:10 -08:00
Enrico Ros 439c462a9b Beauty: buttons 2023-12-08 04:37:10 -08:00
Enrico Ros 95aa71abd6 Beauty: mic buttons 2023-12-08 04:37:10 -08:00
Enrico Ros 3c829cbf97 Good Tooltip 2023-12-08 04:37:10 -08:00
Enrico Ros 29a31d5ca3 Beauty: main button 2023-12-08 04:37:10 -08:00
Enrico Ros 4a8bb24c0f Attachments: move withing composer 2023-12-08 04:37:10 -08:00
Enrico Ros 6b6c3afe0c Attachment: improve UX 2023-12-08 04:37:10 -08:00
Enrico Ros fd41388584 Attachment: outputsLoading for the spinners 2023-12-08 04:37:10 -08:00
Enrico Ros b418b69dc3 Attachment: improve Unsupported (without requiring user action to switch to the generic text-block) 2023-12-08 04:37:10 -08:00
Enrico Ros e1e2962a02 Attachment: bits 2023-12-08 04:37:10 -08:00
Enrico Ros f1662e174f Attachment: PDF to text, sync conversion, and debug 2023-12-08 04:37:10 -08:00
Enrico Ros a73c55fc1f Attachment: fixes 2023-12-08 04:37:10 -08:00
Enrico Ros 0aa923a99d Attachment: remove 2023-12-08 04:37:10 -08:00
Enrico Ros b75160bb2b Attachment: rename pipeline 2023-12-08 04:37:10 -08:00
Enrico Ros 3d515102a1 Attachment: initial image support 2023-12-08 04:37:10 -08:00
Enrico Ros b857cc18d8 Attachment: many cleanups 2023-12-08 04:37:10 -08:00
Enrico Ros 4737d962db Attachment: begin conversions 2023-12-08 04:37:10 -08:00
Enrico Ros 7ba71078a8 Attachment: conversion logic for text, finished popups 2023-12-08 04:37:10 -08:00
Enrico Ros bee0fa8751 Attachment: group Logic 2023-12-08 04:37:10 -08:00
Enrico Ros 5916dfb08d pdfUtils: move 2023-12-08 04:37:10 -08:00
Enrico Ros 9d13b03923 Enable Camera on desktop, #233 2023-12-08 04:37:10 -08:00
Enrico Ros 48e6385ac7 FormLabelStart: try with 'minWidth' 2023-12-08 04:37:10 -08:00
Enrico Ros cf664ff486 Attachment: improve auto-mime 2023-12-08 04:37:09 -08:00
Enrico Ros 5ccf8ba128 Attachment: push forward flow 2023-12-08 04:37:09 -08:00
Enrico Ros 3cd5917207 Attachment: set tooltip on button only 2023-12-08 04:37:09 -08:00
Enrico Ros e2dcca274f Browser: close incognito context 2023-12-08 04:37:09 -08:00
Enrico Ros 7369e898af Browser: make the wss endpoint always overridable 2023-12-08 04:37:09 -08:00
Enrico Ros 1e2c12fddb New Attach System: downloads almost ok 2023-12-08 04:37:09 -08:00
Enrico Ros 4f7369b940 Browser: improve behavior when loading non-pages (files) 2023-12-08 04:37:09 -08:00
Enrico Ros f566049890 Browser: further improve error handling 2023-12-08 04:37:09 -08:00
Enrico Ros fbc2da8b09 Browser: further improve error handling 2023-12-08 04:37:09 -08:00
Enrico Ros af70b39515 Browse: beginning to cleanup page load 2023-12-08 04:37:09 -08:00
Enrico Ros e080d72e8a New Attach System: Components 2023-12-08 04:37:09 -08:00
Enrico Ros fd24e3676a Confirmation Modals: prettier 2023-12-08 04:37:09 -08:00
Enrico Ros 942cd461f5 Drag & drop in Composer: exclude self-drags 2023-12-08 04:37:09 -08:00
Enrico Ros 9567e1cbaa New Attach System: renames 2023-12-08 04:37:09 -08:00
Enrico Ros 2d5d31268e New Attach System: transfer specialized functions to the hook 2023-12-08 04:37:09 -08:00
Enrico Ros b376608709 Fix on-demand clipboard item read.
Note: shall remove this and go for ctrl+v only?
2023-12-08 04:37:09 -08:00
Enrico Ros 551e502caf New Attach System: porting 2023-12-08 04:37:09 -08:00
Enrico Ros 9fb7fcd22f New Attach System: framework 2023-12-08 04:37:09 -08:00
Enrico Ros 1cda7d195b Revert "Browser: initial screenshot support"
This reverts commit 4a02923dda.
2023-12-08 04:36:17 -08:00
Enrico Ros 4a02923dda Browser: initial screenshot support 2023-12-08 04:13:44 -08:00
Enrico Ros a8a45631c2 Browser: update the documentation - large #247 improvement (@stevenlafl) 2023-12-08 03:40:51 -08:00
Enrico Ros eaa755d4ce Browser: update the documentation - integrates #247 2023-12-08 03:17:02 -08:00
Enrico Ros 872396a90e Browser: update Markdown, see #247 2023-12-08 02:08:00 -08:00
Enrico Ros 6b3a2772cc Bits 2023-12-08 01:41:56 -08:00
Enrico Ros f378733abe Oobabooga: document the changes 2023-12-07 22:18:06 -08:00
Enrico Ros 0cf8f0439d Oobabooga: fix with recent API changes 2023-12-07 22:09:28 -08:00
Enrico Ros ab53087b3a LLM Overheat: intuitive UX 2023-12-05 15:13:08 -08:00
Enrico Ros b50923a3b7 Denser menus: Message context & Selection 2023-12-05 14:59:45 -08:00
Enrico Ros 1b4a8da313 Backend: add support for analytics (log which host name responds) 2023-12-05 02:48:14 -08:00
Enrico Ros 31684c2fee [shortcuts] Ctrl+Shift+O: current Chat Model options (temperature, etc..) 2023-12-04 23:54:48 -08:00
Enrico Ros fedd4b1fda Fix setting reactivity on the new Voice Input Timeout. Closes #221 2023-12-04 23:36:18 -08:00
Enrico Ros a41667f427 Overheat LLMs
OpenAI LLMs can go up to 2 as far as temperature.
We don't enable >1 by default, but we have a new labs setting
to enable 'overheating' (max temperature raised
from 1 to 2) for Really Well Done LLMs.
2023-12-04 23:15:40 -08:00
Enrico Ros 021fa3b313 Update README.md 2023-12-02 01:50:49 -08:00
Lennard Schwarz b7ca69aa0e Update realm info 2023-12-01 18:31:04 +01:00
Lennard Schwarz 1efcadbf46 Update readme 2023-12-01 18:29:06 +01:00
Lennard Schwarz 598a6a8e0b Merge branch 'main' of github-ls:LennardSchwarz/lenn-big-agi into add-basic-auth 2023-12-01 18:25:58 +01:00
Enrico Ros 1cd441a2f5 Clipboard: intercept exception, e.g. when a jpeg/png file is copied to clipboard, chrome won't consider it valid on read (yes on ctrl+v) 2023-11-29 15:40:12 -08:00
Enrico Ros 783dc55d02 Ollama: pulling warning 2023-11-29 11:30:07 -08:00
Enrico Ros 88418d1ed0 Enable Toppy-M 2023-11-29 11:13:42 -08:00
Enrico Ros 6a74d1900f History truncation 2023-11-29 11:06:52 -08:00
Enrico Ros 5566e29bcc OpenRouter: update models 2023-11-29 10:43:10 -08:00
Enrico Ros 1f49195251 Ollama: update models, including a marker of the new models 2023-11-29 10:16:31 -08:00
Enrico Ros c5e15ece14 Composer: bits 2023-11-28 14:10:41 -08:00
Enrico Ros 7ceb176d70 Composer: cleanup overlays 2023-11-28 14:08:32 -08:00
Enrico Ros b93bd1bd0b move pdfToText 2023-11-28 12:35:38 -08:00
Enrico Ros 088133ec37 Configurable Voice Input timeout. #221 2023-11-28 03:46:23 -08:00
Enrico Ros 784766442d Extract FormRadioControl 2023-11-28 03:28:06 -08:00
Enrico Ros e014a7c828 Clarityx 2023-11-28 02:45:11 -08:00
Enrico Ros 224e745a71 Cosmetix 2023-11-28 02:35:06 -08:00
Enrico Ros 28ef74f1e9 Merge branch 'release-1.6.0' 2023-11-28 01:41:30 -08:00
Enrico Ros 70091ac39b 1.6.0 version 2023-11-28 01:40:29 -08:00
Enrico Ros cc1011659d 1.6.0 README and changelog 2023-11-28 01:39:03 -08:00
Enrico Ros 7eaa4a11bd 1.6.0 news 2023-11-28 01:32:14 -08:00
Enrico Ros 495f25e2d4 Update news hiding 2023-11-28 01:30:03 -08:00
Enrico Ros f2396000f2 Update template 2023-11-28 01:29:41 -08:00
Enrico Ros 77533aa385 Fix, thanks lint 2023-11-27 16:02:50 -08:00
Enrico Ros 01b2bf6fa3 Flattener: move to streaming, using a new helper 2023-11-27 16:01:25 -08:00
Enrico Ros 6d7843805e Small bits 2023-11-27 15:33:02 -08:00
Enrico Ros 0a593fb2c6 Fix focusing of imported chats. #233 2023-11-27 13:31:37 -08:00
Enrico Ros 57f277f269 ElevenLabs: improve config UX 2023-11-27 13:24:25 -08:00
Enrico Ros 6924e02a17 Link Import: fix Chat URL 2023-11-27 13:20:47 -08:00
Enrico Ros f4b645fd78 Update config-browse.md 2023-11-25 11:47:28 -08:00
Enrico Ros fdb46d3072 Browse: Improve errors reporting 2023-11-24 15:32:54 -08:00
Enrico Ros 858e9d3cb3 Browse: Local (ws://) in incognito 2023-11-24 15:32:46 -08:00
Enrico Ros 52a9dc7bec Browse: Documentation 2023-11-24 15:19:03 -08:00
Enrico Ros 16fbd3b6a3 Browse: cleanups2 2023-11-24 14:23:14 -08:00
Enrico Ros aa09e60f5f Browse: cleanups 2023-11-24 14:20:50 -08:00
Enrico Ros 3b2983831d Spell 2023-11-24 14:01:30 -08:00
Enrico Ros 16e69d0d0b commands: /help (primitive) 2023-11-24 13:55:47 -08:00
Enrico Ros 548f52c770 Browse: user configuration 2023-11-24 13:50:46 -08:00
Enrico Ros 8adac0d193 Browse: /browse -> loads as assistant response 2023-11-24 13:50:46 -08:00
Enrico Ros c0d3c6c982 Browse: /react support (as 'loadURL' tool) 2023-11-24 13:35:57 -08:00
Enrico Ros c1516e7be0 Browse: Share Target -> Composer attachment 2023-11-24 13:11:44 -08:00
Enrico Ros 8473894be2 Browse: CTRL+V (url) and 'Paste' (url) -> Composer attachment 2023-11-24 13:11:44 -08:00
Enrico Ros d5e2fbed0e Browse: page loading service, using remote Puppeteer
also: moved to tRPC (node)
2023-11-24 12:49:45 -08:00
Enrico Ros 2dfa78fbe0 Voice Calls - Labs option 2023-11-24 10:58:30 -08:00
Enrico Ros dff83c5ede Roll packages 2023-11-24 10:49:01 -08:00
Enrico Ros 483f483c4a Copy to clipboard snacks 2023-11-23 02:15:57 -08:00
Enrico Ros f780daf1b1 Anthropic Claude 2.1 support. Closes #245 2023-11-23 01:34:54 -08:00
Enrico Ros 5e6e5bf017 Improved Models Tooltip 2023-11-23 01:27:31 -08:00
Enrico Ros bfe2882ac3 Adding optional Pricing schema 2023-11-23 01:11:14 -08:00
Enrico Ros 0574be04f4 Update soft knowledge cutoff for 1106 models. 2023-11-22 23:36:04 -08:00
Enrico Ros 53b5da8cb8 OpenAI Shared Chats: import from Clipboard too, and copy json object 2023-11-22 22:32:45 -08:00
Enrico Ros 5387b17c36 Also show the branched title. 2023-11-22 13:03:03 -08:00
Enrico Ros 0e854b8772 Title: show the chat index (1: first, 2: second most recently created, etc) 2023-11-22 04:32:19 -08:00
Enrico Ros d23f247a8c Large Perf Boost on Messages 2023-11-22 04:06:27 -08:00
Enrico Ros ce13c04e96 Perf Boost - large gains on the Nav Drawer 2023-11-22 04:00:28 -08:00
Enrico Ros e55fbe9ad0 Fix missing hook dep 2023-11-22 03:14:16 -08:00
Enrico Ros e5a11af6d2 Rename 2023-11-22 02:32:24 -08:00
Enrico Ros 76f21f8c96 Rename 2023-11-22 02:22:20 -08:00
Enrico Ros ea4d9afff2 Ctrl + Shift + ?: show shortcuts 2023-11-22 01:52:13 -08:00
Enrico Ros d884970a02 Do not require confirmation for 'armed' deletions. 2023-11-22 01:39:23 -08:00
Enrico Ros ee11787dcc README.md - roadmap comment 2023-11-22 01:38:16 -08:00
Enrico Ros 13e1ba977f Update 1.5.0 release notes 2023-11-22 01:25:56 -08:00
Enrico Ros 7137ebdda2 Merge pull request #240 from g1ibby/fix-ollama-listModels
fix: ollama listModel endpoint when a model doesn't have TEMPLATE
2023-11-22 01:07:40 -08:00
Enrico Ros 9b71b08fe1 Chat Layout: push the chatmessagelist two levels down #233 2023-11-22 01:06:35 -08:00
Enrico Ros 45a18edac0 ChatMessageList: undo the Ephemeral move 2023-11-22 00:59:17 -08:00
Enrico Ros f1b1ca0a5f Window manager: split functions 2023-11-22 00:59:03 -08:00
Enrico Ros 0c1718bf9c Split-branch settings 2023-11-22 00:56:58 -08:00
Enrico Ros a934ca548e usePanesManager: optional debug 2023-11-21 22:52:24 -08:00
Enrico Ros 2896bd7287 Move Ephemerals Down 2023-11-21 22:41:55 -08:00
Enrico Ros 5ad103a8a2 Refer. 2023-11-21 22:21:22 -08:00
Enrico Ros 16916db247 Improve routing, and move the action pwa action receiver 2023-11-21 22:17:17 -08:00
g1ibby 669eb1414f fix: ollama listModel endpoint when a model doesn't have TEMPLATE or PARAMETER 2023-11-22 13:14:46 +07:00
Enrico Ros 6ed8529d6a Roll types 2023-11-21 22:06:18 -08:00
Enrico Ros bb36dbc4b9 Removed the Labs page, removed a store 2023-11-21 21:31:21 -08:00
Enrico Ros f9e38c7220 Ctrl + Alt + Left/Right: fast history navigation, closes #207 2023-11-21 19:27:22 -08:00
Enrico Ros 2b5a051a9e Ctrl + Alt + Left/Right: navigates in history 2023-11-21 18:45:19 -08:00
Enrico Ros 9793236941 Shortcuts: use fewer listeners 2023-11-21 18:04:33 -08:00
Enrico Ros 497d1c9559 Snackbar: chat title (disabled for now) 2023-11-21 17:41:00 -08:00
Enrico Ros 75c4fe5e67 Snackbars: useEffect compatible 2023-11-21 17:36:39 -08:00
Enrico Ros f4d3d3bd28 Snackbars: add the 'title' type 2023-11-21 17:36:12 -08:00
Enrico Ros 853aadaa0e Confirm branching. 2023-11-21 16:55:36 -08:00
Enrico Ros 8bf23e121c Snackbar Framework animations - Improves #206 2023-11-21 16:55:25 -08:00
Enrico Ros cbffc3f6d5 Snackbar Framework - Closes #206 2023-11-21 16:41:12 -08:00
Enrico Ros 52fc4ec5d8 Improve Restart messaging 2023-11-21 16:40:42 -08:00
Enrico Ros ab94579a30 Branching: duplication up to a message. Partial #235
This commit also largely cleanups the hierarchy tree of component callbacks/handlers
and sets a common nomenclature.
2023-11-21 15:16:58 -08:00
Enrico Ros 43ddc79939 Roll packages 2023-11-21 13:43:45 -08:00
Enrico Ros 6938c6b8d0 UI: Improve options location - Fixes #236 2023-11-21 13:41:27 -08:00
Enrico Ros ba5d835248 Improve spacing 2023-11-21 13:14:06 -08:00
Enrico Ros 510d58ba69 Cleanup News page - part of #236 2023-11-21 12:57:29 -08:00
Enrico Ros c23b0770bf tRPC: enforce more separation of the runtime
The build system was requiring (erroneously) some nodejs packages
when inside routers in the Edge route.
2023-11-21 02:29:05 -08:00
Enrico Ros cb4fdc56a5 Moved chat/commands 2023-11-21 00:28:59 -08:00
Enrico Ros 3b28767212 Renamed to ChatPane 2023-11-21 00:28:46 -08:00
Enrico Ros a1d6cb8cd0 Window management: separate stores again 2023-11-21 00:16:35 -08:00
Enrico Ros 0a094ef0b0 Improve Stores naming 2023-11-20 17:38:35 -08:00
Enrico Ros 17c349af94 Window management: framework
This includes moving the full responsibility for the active window
(and history) to the panes.
2023-11-20 16:19:04 -08:00
Enrico Ros 97f2a19227 Moved and renamed Trade, where it belongs 2023-11-20 16:07:08 -08:00
Enrico Ros 6fc2415e5d Chats store: removed the activeConversationId 2023-11-20 15:24:39 -08:00
Enrico Ros d68c131bbc Window management: ancillary nothingness 2 2023-11-20 15:18:04 -08:00
Enrico Ros 0b6c217da6 Window management: ancillary nothingness 2023-11-20 14:35:36 -08:00
Enrico Ros 432d78fc9d Window management: ancillary component cleanups 2023-11-20 14:32:06 -08:00
Enrico Ros 769ca1546a Window management: ancillary small changes 2023-11-20 14:20:19 -08:00
Enrico Ros 989684884c Window management: ancillary component changes 2023-11-20 14:19:09 -08:00
Enrico Ros a2b6554e73 ChatMessageList: do not collapse on null conversations, but show an helpful message 2023-11-20 02:16:25 -08:00
Enrico Ros 28555445c9 InlineError: allow 'info' 2023-11-20 02:14:24 -08:00
Enrico Ros 20bddfe6c6 Uniform sxprops 2023-11-20 02:14:06 -08:00
Enrico Ros 01243f7422 globalStoredList: begin abstracting stored lists 2023-11-19 19:02:43 -08:00
Enrico Ros 741edb499c Chat: begin moving window state up 2023-11-19 16:09:48 -08:00
Enrico Ros a3fd877a75 Default mobile corner button 2023-11-19 15:58:09 -08:00
Enrico Ros 0c19c4c8ac Clear for 1.6.0 2023-11-19 15:57:38 -08:00
Enrico Ros 9ad92c19a6 1.5.0 Update Version 2023-11-18 21:09:27 -08:00
Enrico Ros c54185e6eb 1.5.0 Update README 2023-11-18 21:09:26 -08:00
Enrico Ros 42fae2f915 1.5.0 News page 2023-11-18 21:09:25 -08:00
Enrico Ros 48f4dd8573 Lint fixes 2023-11-18 21:09:16 -08:00
Enrico Ros 396e3a4625 Update issue templates 2023-11-18 20:29:10 -08:00
Enrico Ros 348915c420 AppNews: fix layouting 2023-11-18 19:07:38 -08:00
Enrico Ros 157dadcae6 Update README.md 2023-11-18 17:27:34 -08:00
Enrico Ros 89b39b4bec Play mic off sound only when not manually initiated. #226 2023-11-18 16:31:15 -08:00
Enrico Ros c42625c8aa SpeechRecognition: add done 'reason' 2023-11-18 16:26:45 -08:00
Enrico Ros ac0e7ad738 Keystrokes: fix platform 2023-11-18 16:10:32 -08:00
Enrico Ros bdd92e69fc Mic: louder click 2023-11-18 15:30:22 -08:00
Enrico Ros f65178c08a Mic: play sound when it stops recording. Closes #226 2023-11-18 15:23:38 -08:00
Enrico Ros 3df40f18f8 Shortcuts: show shortcuts modal. Fixes #195 2023-11-18 14:40:59 -08:00
Enrico Ros af007699ce Shortcuts: delete and clone conversation 2023-11-18 14:36:50 -08:00
Enrico Ros b8537bc4e7 Fix stored states 2023-11-18 14:07:27 -08:00
Enrico Ros a4c3e57899 Auto title chat: true by default 2023-11-18 13:55:46 -08:00
Enrico Ros 065069426b ElevenLabs: cleanup state store, move config options around, and enable to speak the full sentence. Fixes #225 2023-11-18 00:15:47 -08:00
Enrico Ros 0d1cd45813 Remove follow-up mode, and instead add it as an option on the chat menu. Fixes #224 2023-11-17 22:45:02 -08:00
Enrico Ros 090032dccd Roll Next and Prisma 2023-11-17 20:53:25 -08:00
Enrico Ros 987458ed63 Separate Chat menus, part of #224 2023-11-17 20:51:20 -08:00
Enrico Ros 32bc46c46b Merge pull request #220 from llegomark/main
Great update, thanks for the PR. Approved.
2023-11-17 17:18:36 -08:00
Mark Anthony Llego f3a39ad5d2 Refactor pdfToText function to improve readability
and performance
2023-11-17 19:37:28 +08:00
Mark Anthony Llego 98c95bf436 Update pdfjs-dist version to 4.0.189 2023-11-17 19:30:58 +08:00
Enrico Ros a687ddd2a0 Downgrade the UI if the browser does not support clipboard read. Closes #124 2023-11-17 00:45:55 -08:00
Enrico Ros 2bce8dc31e Bugfix: when no model is selected, composer shouldn't send (should not actually clear) 2023-11-17 00:33:41 -08:00
Enrico Ros 2c3597f0dd Merge branch 'edmondop-issue-191' 2023-11-16 23:08:35 -08:00
Enrico Ros 3570d9e9cf Auto-title: moved to a non-reactive check, UI: cleanup text for mobile
Closes #191
2023-11-16 23:08:04 -08:00
Enrico Ros cb8fab47af Merge branch 'issue-191' of https://github.com/edmondop/big-agi into edmondop-issue-191 2023-11-16 17:34:40 -08:00
Enrico Ros 58cfff3912 README: Update the Roadmap, latest features, development and deployment instructions 2023-11-16 17:31:03 -08:00
Edmondo Porcu d2cdf36186 Missing newline 2023-11-16 17:18:39 -08:00
Edmondo Porcu 9237fbaad5 Exposing UI for disabling auto title in the chats 2023-11-16 17:16:56 -08:00
Enrico Ros c6a20c475f Add the "BUG" issue template 2023-11-16 16:52:49 -08:00
Enrico Ros 6e0bb6260e Adding the "Roadmap request" issue template 2023-11-16 16:48:52 -08:00
Enrico Ros 321c52351e As per request, enable sponsorship 2023-11-16 13:38:29 -08:00
Enrico Ros 13d91508c9 Diagrams: show text if no code 2023-11-16 02:06:10 -08:00
Enrico Ros 7a770659f3 Docker: update instructions 2023-11-15 20:10:23 -08:00
Enrico Ros b734087d85 Settings Menu overhaul 2023-11-15 16:26:05 -08:00
Enrico Ros ae354434e2 Auto-focus composer after mic input 2023-11-15 14:00:33 -08:00
Enrico Ros ae16b03c7f Auto-focus composer on ctrl+alt+n 2023-11-15 13:58:04 -08:00
Enrico Ros a1ac12761d Cleanup: appearance 2023-11-15 04:03:12 -08:00
Enrico Ros 1aabdd4394 Mermaid: final cleanups 2023-11-15 03:50:26 -08:00
Enrico Ros 0548f6b863 Disable --turbo until https://github.com/vercel/next.js/issues/57581 is resolved 2023-11-15 03:38:08 -08:00
Enrico Ros 65fc40796b Mermaid: switch to CDN operation, to speed up development again
We are loading Mermaid from the CDN (and spending all the work to dynamically load it
and strong type it), because the Mermaid dependencies (npm i mermaid) are too heavy
and would slow down development for everyone.

Looking forward for feedback on this.
2023-11-15 03:35:43 -08:00
Enrico Ros 48af71d5f1 Mermaid: vast improvement 2023-11-14 20:08:27 -08:00
Enrico Ros cafcafb582 Escape to toggle declutter mode 2023-11-14 20:08:19 -08:00
Enrico Ros 29da5383ed Ollama: enable deletion. See #186 2023-11-14 19:21:40 -08:00
Enrico Ros ba50ff3b90 On a second thought, trying this with
OpenAI replacement as well.
2023-11-14 19:21:09 -08:00
Enrico Ros 63a7dd1ce9 Replace models (don't append) by default.
On all Vendors, aside OpenAI, replace the models, so if a model is deleted from the server,
it won't show up in the list. This has multiple advantages, including not keeping stray configuration.

On a second thought, trying this with
OpenAI replacement as well.

Fixes #186
2023-11-14 19:20:28 -08:00
Enrico Ros 552ffb4257 Fix page 2023-11-14 03:59:37 -08:00
Enrico Ros 87461fb73e Emergency fix - final.r2.reallyfinal.r42-draft-clean_copy 2023-11-14 03:35:28 -08:00
Enrico Ros 22fac6f3c1 Emergency fix2 2023-11-14 03:33:11 -08:00
Enrico Ros 2932e8e89d Emergency fix 2023-11-14 03:29:36 -08:00
Enrico Ros b7ea52701a [*] Full dynamic backend configuration. Allows for runtime env vars, especially on Docker. 2023-11-14 03:25:07 -08:00
Enrico Ros 6d8aa3e989 Dynamic backend feature presence: move all apart from llm 2023-11-14 03:09:29 -08:00
Enrico Ros 5a158155c5 Backend: fetch capabilities 2023-11-14 02:30:46 -08:00
Enrico Ros a30ec5d023 Env-vars: build time validation
Note: build time env vars are not needed, as we're transitioning at
runtime variables.
However if they are set at build time, then validation would happen right then and there.
2023-11-14 01:42:52 -08:00
Enrico Ros eff9be3c99 Env-vars: server side strict checking 2023-11-14 01:15:45 -08:00
Enrico Ros 5a17801c8e Remove some process.env refs 2023-11-14 00:13:44 -08:00
Enrico Ros 76651be12c OpenAILLMOptions: show the temperature value, always 2023-11-13 23:29:28 -08:00
Enrico Ros 5c93af6cdc Restructure the App wrappers in Providers 2023-11-13 23:04:53 -08:00
Enrico Ros 3dbd5158c0 Shortcuts: display the main Send shortcut 2023-11-13 21:04:00 -08:00
Enrico Ros 233d92b69d Docker: fix - thanks @fredliubojin 2023-11-13 20:17:58 -08:00
Enrico Ros bc6bf3195e Visualization: copy 2023-11-13 18:29:02 -08:00
Enrico Ros a71588777a Visualization: disable copy button 2023-11-13 18:13:53 -08:00
Enrico Ros 8c9445d800 fixed hardcoding 2023-11-13 18:08:59 -08:00
Enrico Ros 3cecf7c0b5 removed useTheme from Layout 2023-11-13 18:08:09 -08:00
Enrico Ros e1128fa38f Composer: Extract some buttons, and support the useIsMobile() hook 2023-11-13 17:50:19 -08:00
Enrico Ros 140412cb8b pwaUtils: core for isBrowser, and reduce all platform checks to static client-side 2023-11-13 17:49:38 -08:00
Enrico Ros 882b8629d7 Reduce settings gap 2023-11-13 17:48:34 -08:00
Enrico Ros 7056866841 Improve Keystrokes 2023-11-13 17:48:25 -08:00
Enrico Ros cc6afa9190 Rationalize Settings Labels 2023-11-13 16:27:20 -08:00
Enrico Ros 93f075c270 Cleanup settings code 2023-11-13 15:01:33 -08:00
Enrico Ros c2f991678c App files: start rationalizing 2023-11-13 14:49:38 -08:00
Enrico Ros b8c2f1b73b Dockerfile: cleanups 2023-11-13 13:33:05 -08:00
Enrico Ros 9b939c9a05 Dockerfile: improve and run as user 2023-11-13 13:24:26 -08:00
Enrico Ros 150c295370 Fix dependency 2023-11-13 12:40:10 -08:00
Enrico Ros c5f23ce7ca Docker deployments: add .dockerignore 2023-11-13 12:03:41 -08:00
Enrico Ros f7254fe8f6 Cleanup text-diff 2023-11-13 00:51:58 -08:00
Enrico Ros 32e3a4e547 Roll packages (note: mermaid brings in a lot?) 2023-11-12 23:13:12 -08:00
Enrico Ros 3622155881 ctrl + alt + n/x: new/reset conversation 2023-11-12 22:55:15 -08:00
Enrico Ros 77cc8272c5 ctrl + shift + x: clear conversation 2023-11-12 22:36:28 -08:00
Enrico Ros acff0d0ef5 Mermaid: improve with an example 2023-11-12 22:27:00 -08:00
Enrico Ros 47cf6fe688 Fix hook dependency 2023-11-12 22:01:21 -08:00
Enrico Ros 2b937719dd Mermaid: full support (gpt still makes many mistakes) 2023-11-12 21:57:37 -08:00
Enrico Ros 551faa47db RenderCode: make space for Mermaid 2023-11-12 17:23:25 -08:00
Enrico Ros 692c1ebfda Mermaid syntax highlighting 2023-11-12 17:18:16 -08:00
Enrico Ros 72c6f616f9 RenderCode: better explain the issue 2023-11-12 16:52:14 -08:00
Enrico Ros 1da4b3653e Diagrams: improve proompts 2023-11-12 16:50:46 -08:00
Enrico Ros 8ef6d1667e Diagrams: improve naming, hotfixing, remove title bar 2023-11-12 16:48:08 -08:00
Enrico Ros 961c0b581e Diagrams: improve naming, hotfixing 2023-11-12 16:38:31 -08:00
Enrico Ros 3118228a68 PlantUML: improve rendering, including Errors and syntax errors 2023-11-12 16:38:07 -08:00
Enrico Ros a47b9b0a55 Diagrams: mermaid support 2023-11-12 16:03:06 -08:00
Enrico Ros ae0b39c9c0 useFormRadio: easy memoized Radio drop in 2023-11-12 15:26:17 -08:00
Enrico Ros 2d90947cb9 Diagrams: hotfix code 2023-11-12 14:34:44 -08:00
Enrico Ros 78c1c3bece Images: smaller shadows 2023-11-12 14:34:19 -08:00
Enrico Ros bbce30b24f Improve consistency of Code, Html, Image blocks 2023-11-12 13:45:55 -08:00
Enrico Ros 92009ed6b4 Diagrams: toggling Options also hides the progress 2023-11-12 12:14:07 -08:00
Enrico Ros 54db3746c7 Diagrams: toggle Options 2023-11-12 12:11:00 -08:00
Enrico Ros 58c7012314 Bits 2023-11-12 04:37:14 -08:00
Enrico Ros baf0ca2682 Diagram Generator 2023-11-12 04:36:57 -08:00
Enrico Ros 191144b010 Shared Llm Type selector 2023-11-12 01:57:03 -08:00
Enrico Ros 65d085d169 ChatMessage: optional hide avatar 2023-11-12 01:21:04 -08:00
Enrico Ros a39e90003e ChatMessage: optional Edit callback 2023-11-12 01:06:32 -08:00
Enrico Ros 013186a1ad More GoodModals 2023-11-12 00:47:00 -08:00
Enrico Ros 6dd6fb0ce8 Diagrams: wire it up 2023-11-12 00:22:19 -08:00
Enrico Ros db590a2b76 Imagine and Speak: visible, and can configure 2023-11-11 22:47:28 -08:00
Enrico Ros e58088de24 Try to extend chrome to all desktops 2023-11-11 22:32:42 -08:00
Enrico Ros 88dfa60238 Chat messages: share isImagining / isEditing 2023-11-11 21:53:22 -08:00
Enrico Ros 03fca4b9f8 Hint at this being a selection 2023-11-11 21:40:47 -08:00
Enrico Ros c5f7b8e0d2 Remove the Red badge on share, not that new anymore 2023-11-11 21:38:36 -08:00
Enrico Ros 1d18c56810 Custom message context menu - supports custom actions on the selection 2023-11-11 21:38:19 -08:00
Enrico Ros e59e8780b6 small cleanup bits 2023-11-11 20:36:19 -08:00
Enrico Ros ea196bb22f MessagesList: cleanup code more 2023-11-11 19:11:36 -08:00
Enrico Ros 47c2d19a70 MessagesList: cleanup code 2023-11-11 19:08:42 -08:00
Enrico Ros a11ab7cd7c MessagesList: extract Tools panel 2023-11-11 18:51:45 -08:00
Enrico Ros b7b25688ac Fix built on a less configured eslint 2023-11-11 17:38:37 -08:00
Enrico Ros c77a6bb670 Roll next 2023-11-11 17:34:54 -08:00
Enrico Ros 5c65e888d7 More Lint fixes 2023-11-11 17:29:58 -08:00
Enrico Ros 69932b17c9 Lint fixes 2023-11-11 16:45:11 -08:00
Enrico Ros 7fbafa14a2 Rationalize tsconfig.json, from create-t3-app 2023-11-11 16:31:54 -08:00
Enrico Ros 9b25d89d80 Fixes: Found lockfile missing swc dependencies, patching...
Lockfile was successfully patched, please run "npm install" to ensure @next/swc dependencies are downloaded
2023-11-11 16:05:02 -08:00
Enrico Ros 7fb65c260e Update next.config.js 2023-11-11 16:04:11 -08:00
Enrico Ros 97f8b03b19 Roll tRPC 2023-11-11 15:20:58 -08:00
Enrico Ros 53a71224e6 docs: Ollama: proxy: add buffering disable 2023-11-11 15:11:29 -08:00
Enrico Ros f0ed480e81 Call: disabled, but show 2023-11-10 19:02:09 -08:00
Enrico Ros 8010ca3a6e Ollama stream encoding: fixing a huge bug 2023-11-10 18:50:53 -08:00
Enrico Ros c844a0c319 cleanup of non-openai transports 2023-11-10 18:45:15 -08:00
Enrico Ros 11f2a22b2e Ollama: debug malformed JSON packets 2023-11-10 18:26:45 -08:00
Enrico Ros 11cdb72370 Media hooks to differentiate devices 2023-11-10 14:10:20 -08:00
Enrico Ros fe09334783 Improve Vendor icons 2023-11-10 13:51:02 -08:00
Enrico Ros 8c7618be49 ollama: svg icon 2023-11-10 13:34:55 -08:00
Enrico Ros 648ab3e188 docs: ollama: added the advaced reverse proxy configuration 2023-11-10 13:21:12 -08:00
Enrico Ros e5f498c310 docs: ollama: move 2023-11-10 13:13:59 -08:00
Enrico Ros 278594b543 docs: ollama: move 2023-11-10 13:13:51 -08:00
Enrico Ros 649bfdc957 docs: ollama: fix refs 2023-11-10 13:10:38 -08:00
Enrico Ros 251bbcfc5b docs: ollama: update 2023-11-10 13:02:35 -08:00
Enrico Ros 70e73b2c81 ollama: stop auto-fetch while typing every char of the url 2023-11-10 12:57:53 -08:00
Enrico Ros 72a93f9ffa docs: optipng 2023-11-10 12:52:17 -08:00
Enrico Ros cc9a6db859 docs: Ollama: integration guide 2023-11-10 12:11:55 -08:00
Enrico Ros 1814e71cbe Ollama: update model display style 2023-11-10 11:48:30 -08:00
Enrico Ros 06e21d6d9a Update Ollama models 2023-11-10 11:37:15 -08:00
Enrico Ros f53053d3f6 YouTube persona selector and Augmented chat are out of the Experimental mode (still not polished) 2023-11-10 11:28:46 -08:00
Enrico Ros 214983ee82 NextJS 14 Support, with App Router, TurboPack 2023-11-09 22:54:38 -08:00
Enrico Ros 19e0d36204 Roll packages 2023-11-09 21:36:46 -08:00
Enrico Ros 64196b29ce Voice Continuation Mode
See also #175. This accomplishes a similar function in an elegant way.
2023-11-09 21:34:48 -08:00
Enrico Ros 5b2e0fbff2 UseSpeechRecognition: adapt to Callback changes 2023-11-09 20:51:36 -08:00
Enrico Ros 8fa735401d Composer: debounce token counting 2023-11-09 01:12:05 -08:00
Enrico Ros e1f8230bc9 Debouncing hook for Frontend. 2023-11-09 00:53:34 -08:00
Enrico Ros 47f1fcd3bf Ollama: full support (stream, gen, list, pull, index). Fixes #179 2023-11-08 17:47:41 -08:00
Enrico Ros 73d0f430fa Improve Shortcuts 2023-11-08 15:14:36 -08:00
Enrico Ros fc812654d1 Shortcuts: require ctrl/shift state 2023-11-08 15:14:36 -08:00
Enrico Ros e84a9e46c0 OpenAI: removed properties 2023-11-08 14:58:44 -08:00
Enrico Ros c354d146ae OpenAI: improve errors display 2023-11-08 14:52:14 -08:00
Enrico Ros ce2441affe Ctrl+Shift+R: regenerate assistant 2023-11-08 14:21:56 -08:00
Enrico Ros c695d4b6d4 Shortcuts: stop propagation, just in case 2023-11-08 14:06:18 -08:00
Enrico Ros be7dc82b75 Cleanup streaming errors 2023-11-08 13:55:59 -08:00
Enrico Ros 4b5519a134 Cleanup streaming errors 2023-11-08 13:53:32 -08:00
Enrico Ros 3dd9e56708 Reuse more tRPC fetchers 2023-11-08 13:35:56 -08:00
Enrico Ros a78658aac7 OpenAI: Vision (Preview) -> Vision 2023-11-08 13:20:17 -08:00
Enrico Ros 65b46cfe79 Improve and disambiguate tRPC errors 2023-11-08 12:22:11 -08:00
Enrico Ros 5d20b63f98 Server-side errors 2023-11-08 11:53:14 -08:00
Enrico Ros 54288bb2e2 Streaming & Fetches: improve error reporting 2023-11-08 11:32:39 -08:00
Enrico Ros b3be1c6e91 Wire cleanups 2023-11-08 01:52:03 -08:00
Enrico Ros bdcc0fb09f Roll packages 2023-11-08 01:07:13 -08:00
Enrico Ros 635e54ae07 Cloudflare deployment docs: mention the compatibility flags
Fixes #174
2023-11-08 00:55:53 -08:00
Enrico Ros 58fa4465ce Cleanup 2023-11-08 00:32:39 -08:00
Enrico Ros 0adc273e0f Fix #182 properly. Allows special tokens. 2023-11-08 00:32:17 -08:00
Enrico Ros f76d5fa8ea Fix #182. Don't crash the UI if the tokenizer throws. 2023-11-08 00:26:49 -08:00
Enrico Ros 9615ff44af OpenAI: support for maxCompletionTokens (in desc) -> maxOutputTokens (in DLLM). Fixes #181
Note: you will have to "Update" the OpenAI models for this to be effective.
2023-11-08 00:10:05 -08:00
Enrico Ros db69516d5f bits 2023-11-07 23:34:45 -08:00
Enrico Ros 6e93b125d5 Azure: improve list clarity 2023-11-07 23:07:06 -08:00
Enrico Ros a187a89444 Model List: highlight latest 2023-11-07 23:06:54 -08:00
Enrico Ros a4c11646af OpenAI: update 128 'k' tokens 2023-11-07 22:44:10 -08:00
Enrico Ros 0a73eb2ca6 Openrouter: update models (new OpenAI, Google 32ks, Phind, Zephyr) 2023-11-07 22:35:39 -08:00
Enrico Ros b25dc4dbea docs: update Oobabooga 2023-11-07 22:14:29 -08:00
Enrico Ros a268f621eb docs: Added LocalAI 2023-11-07 22:14:23 -08:00
Enrico Ros 247b3228f9 Fully server-side Model Description 2023-11-07 22:14:10 -08:00
Enrico Ros 63541b37ec llms: scope files 2023-11-07 18:50:46 -08:00
Enrico Ros 3d507741e4 OpenAI: new models: improve appearance/defaults 2023-11-06 13:19:23 -08:00
Enrico Ros 86a3d86408 OpenAI: speculative support for 1106 models 2023-11-06 06:42:48 -08:00
Enrico Ros 9ce61b6ea3 OpenAI: speculative support for 1106 models 2023-11-06 06:40:19 -08:00
Enrico Ros a9d97b97bb Anthropic: show model refresh button when missing key 2023-11-05 21:21:51 -08:00
Enrico Ros 87f0bf16fa Update OpenAISourceSetup.tsx 2023-11-03 07:40:12 -07:00
Enrico Ros 5dcdff20d4 Shortcuts: fix names 2023-11-02 17:37:44 -07:00
Enrico Ros 151117ed5e Default Fast/Func llms to 'gpt-3.5-turbo-16k-0613' 2023-11-02 16:24:12 -07:00
Enrico Ros b7e40cfb6b Enable Speech Recognition on IPhone 2023-11-02 16:22:31 -07:00
Enrico Ros 16be43edcc Audio: begin cleanup 2023-11-02 16:13:48 -07:00
Enrico Ros 5fe3aa56cc Calls: Feedback menu items 2023-11-02 16:13:48 -07:00
Enrico Ros 9ed75a4d55 Call: show the presence of context 2023-11-02 16:13:48 -07:00
Enrico Ros 7fed742bab Personas: set starters and voice IDs for all 2023-11-02 16:13:48 -07:00
Enrico Ros 16b6c0dd43 Call UI: override voice 2023-11-02 16:13:48 -07:00
Enrico Ros da6555dfc7 Call UI: quickfixes 2023-11-02 16:13:48 -07:00
Enrico Ros 351d25170b Call: take it off the experimental flag 2023-11-02 16:13:48 -07:00
Enrico Ros fce4f043a4 Call: Call Wizard to debug issues before they present themselves 2023-11-02 16:05:33 -07:00
Enrico Ros 90fb3945a6 Call: persona dropdown buttons 2023-11-02 16:05:28 -07:00
Enrico Ros b7d56afb52 Style: final adjustments 2023-11-02 15:55:39 -07:00
Enrico Ros 23c8dc27cf Style: cleanup 2023-11-02 15:55:38 -07:00
Enrico Ros 5660b592de Style: improve message colors 2023-11-02 15:55:38 -07:00
Enrico Ros 9b2f938b49 Style: improve theming 2023-11-02 15:55:37 -07:00
Enrico Ros 3a4f5ffa3d Style: persona selector fixes 2023-11-02 15:55:37 -07:00
Enrico Ros 14b8350bf1 Roll Joy 5.0.0-beta.13 2023-11-02 15:55:21 -07:00
Enrico Ros e9ec1361ac Rationalize AppLayout state, and add Shortcuts
Ctrl + Alt + M: quick model setup
Ctrl + Alt + P: preferences
2023-11-02 15:41:11 -07:00
Enrico Ros a283d034e1 Support for String avatar 2023-11-02 01:56:50 -07:00
Enrico Ros 5e8fd7ea4e Support for String avatar 2023-11-02 01:56:41 -07:00
Enrico Ros 121bbd0d6f Add LeftButton support 2023-11-02 01:53:16 -07:00
Enrico Ros 2db5fd545b tRPC: don't repeat curl debug 2023-11-01 17:31:40 -07:00
Enrico Ros 3dc94c7f23 Fix normal paste. 2023-11-01 17:31:07 -07:00
Enrico Ros dafc5117d2 Reduce visibility 2023-11-01 17:15:25 -07:00
Enrico Ros 2297a20a15 Fix state 2023-11-01 16:29:53 -07:00
Enrico Ros ca37803be3 Cleanup code path for 'draw-imagine-plus' - prompt is still not great 2023-11-01 16:25:08 -07:00
Enrico Ros e3d2327d93 Enter to send: renamed to Enter is Newline 2023-10-30 17:36:07 -07:00
Lennard Schwarz 89f3e6f955 Update readme 2023-10-30 14:57:51 +01:00
Lennard Schwarz e79b429c5e Update Readme 2023-10-30 14:57:45 +01:00
Lennard Schwarz c240f6bd5b Add deploy button 2023-10-30 14:55:53 +01:00
Lennard Schwarz 33312e0fd9 Add my middleware thing 2023-10-30 14:52:43 +01:00
Enrico Ros 53533d0f9d Draw+: simple prompt augmentation - will redo with a preview window 2023-10-29 00:06:38 -07:00
Enrico Ros 6b51a9f69b ChatMode - extract as store, to persist between top-levels
Not sure it belongs here, maybe should be part of a Chat Store instead.
2023-10-28 23:57:01 -07:00
Enrico Ros 33e1f7e21f Debug - hook to understand component lifetimes 2023-10-28 23:23:30 -07:00
Enrico Ros 7e86104ef9 Debug - hook to understand component lifetimes 2023-10-28 23:22:35 -07:00
Enrico Ros a577823b48 Cleanup routing 2023-10-28 22:40:29 -07:00
Enrico Ros e59d6b089f Easier Drawing, mode description, accessible settings 2023-10-27 02:05:32 -07:00
Enrico Ros a8839b71ac Prodia: unified SDXL support, with model list, priority, advanced settings, resolution, default to R.V.5 2023-10-27 01:29:50 -07:00
Enrico Ros 6e7aa71b0d Differentiate network issues 2023-10-27 01:27:34 -07:00
Enrico Ros 1486f61511 Roll packages (Prisma, tRPC, types) 2023-10-26 22:57:50 -07:00
Enrico Ros d68a1c34bf next.js: lock down to 13.4; 13.5 inflates the outputs ("parsed size" increases), and 14 even more. I see more compiled modules and lower speed 2023-10-26 16:40:13 -07:00
Enrico Ros 8c2bbe2eb4 Update tsconfig.json, and remove a bad dep 2023-10-26 14:45:19 -07:00
Enrico Ros 6fff438872 Call: composer buttons (disabled) 2023-10-26 14:32:46 -07:00
Enrico Ros db110a9957 Buildfix 2023-10-25 12:34:44 -07:00
Enrico Ros 0fd14db84c useGlobalShortcut: Ctrl+Shift+V to paste attachment 2023-10-25 12:29:05 -07:00
Enrico Ros ecce20d2bf useGlobalShortcut: register shortcuts for global actions 2023-10-25 12:29:02 -07:00
Enrico Ros 1e8782a177 (old) Sent History: remove 2023-10-25 12:29:00 -07:00
Enrico Ros 17e05cf5af Capabilities framework: begin 2023-10-25 12:17:15 -07:00
Enrico Ros 28989f8828 Linting 2023-10-25 11:53:35 -07:00
Enrico Ros dd774eedfb Roll Prisma 2023-10-25 11:53:32 -07:00
Enrico Ros b828fc0c57 Fix OpenAI/Helicone 2023-10-25 11:53:30 -07:00
Enrico Ros d2e0fecfb7 Easier Drawing Mode 2023-10-25 11:28:48 -07:00
Enrico Ros 1d0e789902 Rename constant 2023-10-24 21:57:05 -07:00
Enrico Ros 796aeb99a4 Improve server-side debugging 2023-10-24 21:55:48 -07:00
Enrico Ros f756ac5fc2 CloudFlare: document how to fix build - closes #174 2023-10-24 13:31:23 -07:00
Enrico Ros 9b779e788f Tryfix #174 2023-10-24 12:51:20 -07:00
Enrico Ros e11ca878b6 Roll Superjson and types 2023-10-24 12:34:47 -07:00
Enrico Ros 8ebcff6483 Roll Prisma and tRPC 2023-10-24 12:31:24 -07:00
Enrico Ros f8e23b4016 Style: Theme components - to keep style consistent 2023-10-24 12:27:03 -07:00
Enrico Ros d2217eb142 Style: format theme file 2023-10-24 12:27:03 -07:00
Enrico Ros 68274d827e Style: Preferences modal fixes 2023-10-24 12:27:03 -07:00
Enrico Ros 76601c1d46 Style: bunch of FormControl adjustments 2023-10-24 12:11:15 -07:00
Enrico Ros b526998c8b Anthropic: add support through AWS/Bedrock 2023-10-24 00:11:21 -07:00
Enrico Ros fcf5316aa1 OpenAI: further improve debugging 2023-10-24 00:06:21 -07:00
Enrico Ros dffef1a6e9 FormFieldText: add disablement 2023-10-23 23:40:21 -07:00
Enrico Ros ec29c63cf3 OpenAI transport: mode debuggability 2023-10-23 23:39:50 -07:00
Enrico Ros a35f259986 Remove double click on chat button to set mode 2023-10-23 22:03:30 -07:00
Enrico Ros 206345b451 Package.json: add Node 20 support 2023-10-23 22:01:57 -07:00
Enrico Ros 622bde003e Debug: llm streaming I/O (default: off) 2023-10-23 21:15:54 -07:00
Enrico Ros 9a80b8870e Smaller 2023-10-21 15:35:56 -07:00
Enrico Ros cdaf97226a Documentation: Azure OpenAI (has GPT-4-32k) 2023-10-21 15:24:52 -07:00
Enrico Ros 3a66f50318 Documentation: Azure OpenAI (has GPT-4-32k) 2023-10-21 15:12:41 -07:00
Enrico Ros 7b27f0ed22 Call: cleanups 2023-10-19 17:36:52 -07:00
Enrico Ros ba35840cbd Call - brand new application; baseline support
Notes:
 - Sounds Source: https://mixkit.co/free-sound-effects/phone-ring/
2023-10-19 17:13:13 -07:00
Enrico Ros 7ab347523f Roll packages 2023-10-19 16:25:16 -07:00
Enrico Ros ddccd78269 Token counting: much better counting/presentation - verified: perfect 2023-10-19 16:11:51 -07:00
Enrico Ros 77c781e7b8 Moar Bette Env Vars 2023-10-19 14:51:15 -07:00
Enrico Ros 26030c1efe Update Env Vars docs. 2023-10-19 14:46:15 -07:00
Enrico Ros d8313f4d0a Document Environment Variables 2023-10-19 14:40:50 -07:00
Enrico Ros 5225dc34e1 Update Documentation: Docker 2023-10-19 13:47:33 -07:00
Enrico Ros b6d9393513 Update Documentation 2023-10-19 13:37:33 -07:00
Enrico Ros 54f66da5d8 Minor cleans 2023-10-19 12:59:43 -07:00
Enrico Ros ae3d4750f3 Cleanup code and update OpenRouter settings 2023-10-19 12:49:01 -07:00
Enrico Ros 56cb1c6d24 Cleanup 2023-10-19 12:23:22 -07:00
Enrico Ros 371f02c869 ChatGPT Importer: Working again - but OpenAI may be unreliable. Closes #165 2023-10-19 04:47:34 -07:00
Enrico Ros a450cdaa42 Try a fix for OpenAI import 2023-10-19 04:39:25 -07:00
Enrico Ros 8989bf9a4f Text Tools: Highlight differences 2023-10-19 03:18:16 -07:00
Enrico Ros d41ad780c5 Fix custom personas being lost when switching to other personas. Mark the custom as final. 2023-10-19 00:11:14 -07:00
Enrico Ros ed3a752912 Write down changes 2023-10-18 23:33:24 -07:00
Enrico Ros 358378c7e6 Merge branch 'jontybrook-main' 2023-10-18 23:21:49 -07:00
Enrico Ros 05097af27b Merge branch 'main' of https://github.com/jontybrook/big-agi into jontybrook-main 2023-10-18 23:21:37 -07:00
Enrico Ros 15eb6a235d OpenAI "-instruct" models cannot be used for the chat endpoint. Closes #169 2023-10-18 22:54:08 -07:00
Enrico Ros 138b043f0f Anthropic: full support for Helicone. Closes #173 2023-10-18 22:43:35 -07:00
Enrico Ros 99557b46f5 OpenAI: explain Helicone setup 2023-10-18 22:39:05 -07:00
Enrico Ros 4d42379374 Simplify OpenAI source setup 2023-10-18 21:58:27 -07:00
Enrico Ros fb207d99b9 Improve Sharing Store 2023-10-18 17:47:18 -07:00
Enrico Ros 188a18d6ac Great working Shared Links history 2023-10-18 17:45:27 -07:00
Enrico Ros e81acdf0eb Show outgoing chatlinks (stored locally) 2023-10-18 17:12:58 -07:00
Enrico Ros 99ba47397a Store ChatLink Chat Title too 2023-10-18 16:05:39 -07:00
Enrico Ros 380e07aa9c Less 'share' 2023-10-18 15:59:56 -07:00
Enrico Ros 6aa98da2f4 chatLinkId 2023-10-18 15:56:39 -07:00
Enrico Ros 30d2416ba2 ChatLink: as precaution - append object/keys in localstorage 2023-10-18 15:55:17 -07:00
Enrico Ros 695fde6f8b ChatLink: remember ID 2023-10-18 15:43:34 -07:00
Enrico Ros 989b4461e7 ChatLink: move to /link/chat, update DB, cleanups 2023-10-18 15:25:46 -07:00
Enrico Ros 2d0ec4df8a Move conv title 2023-10-18 15:12:40 -07:00
Enrico Ros 42fe23a4cf Update autoSuggestions.ts 2023-10-17 23:24:10 -07:00
Enrico Ros 66b79054df Cleanups 2023-10-17 21:42:03 -07:00
Enrico Ros 06a2fe3fcc ViewShared: Detect tables and turn on markdown 2023-10-17 21:11:17 -07:00
Enrico Ros 7ffc8df247 Fix a visual bug (overflow-x) 2023-10-17 20:48:38 -07:00
Enrico Ros f934bad2e4 Small bits 2023-10-17 20:20:02 -07:00
Enrico Ros 302c674d70 Remove older file 2023-10-17 19:47:27 -07:00
Enrico Ros c9231684f6 Roll PDFJS 2023-10-17 19:45:32 -07:00
Enrico Ros 3a150c063f Roll Tessetact 2023-10-17 19:41:19 -07:00
Enrico Ros 91e8da3a53 Roll misc packages 2023-10-17 19:37:42 -07:00
Enrico Ros 17a36e1fc3 Roll markdown (and github flavored markdown) 2023-10-17 19:29:29 -07:00
Enrico Ros 7ff9e9f75c Roll tRPC 2023-10-17 19:02:06 -07:00
Enrico Ros 3fadba76ba Roll typescript 2023-10-17 19:00:34 -07:00
Enrico Ros 7ea232f516 Formalize Shared Viewer application 2023-10-17 19:00:11 -07:00
Enrico Ros 65831fa1e9 Extract Logo Progress 2023-10-17 18:59:37 -07:00
Enrico Ros 58ad0ece69 Render Markdown: ON by default (test) 2023-10-17 18:10:02 -07:00
Enrico Ros d2c7261f74 Update README.md 2023-10-17 17:20:19 -07:00
Enrico Ros ac9e415d08 Version 1.4.0 2023-10-17 17:19:22 -07:00
Enrico Ros 42646c1ee2 Docker: try the fix again 2023-10-17 17:05:09 -07:00
Enrico Ros 65f0cf4c8f Docker: try this fix (npm ci won't run postinstall, which is Prisma generate) 2023-10-17 17:00:50 -07:00
Enrico Ros b207d61f78 SSS: improve the viewing page 2023-10-17 15:58:24 -07:00
Enrico Ros dcebd08f55 SSS: return the creation date 2023-10-17 15:58:08 -07:00
Enrico Ros bacc153cc8 SSS: improve creation dialog 2023-10-17 15:57:46 -07:00
Enrico Ros 331ecfeae5 Chat message - disable copy on hover by default 2023-10-17 15:57:24 -07:00
Enrico Ros d3d45c82d4 Improve messaging 2023-10-17 14:08:15 -07:00
Enrico Ros be641a43c3 Improve the appearance of time 2023-10-17 13:47:01 -07:00
Enrico Ros b00479ffbb SSS: simplify variables 2023-10-17 13:32:45 -07:00
Enrico Ros de305dbdb9 Sharing Page 2023-10-17 05:15:15 -07:00
Enrico Ros 7958f87c24 More flexible ChatItems 2023-10-17 05:13:33 -07:00
Enrico Ros e5d6d9fc16 More Imaginable and Speakable 2023-10-17 05:09:33 -07:00
Enrico Ros 052f71b1bd More Flexible Chat Messages 2023-10-17 05:05:53 -07:00
Enrico Ros df08ec2b51 Fix to show the models dialog when not configured 2023-10-17 05:05:08 -07:00
Enrico Ros a5c89a3edd Routes: navigate to chat 2023-10-17 04:45:45 -07:00
Enrico Ros 52d26cb825 Improve layout 2023-10-17 04:45:13 -07:00
Enrico Ros c46741f733 Import Conversation: update signature 2023-10-17 04:44:54 -07:00
Enrico Ros 3f63d03572 Visibility changes 2023-10-17 02:53:50 -07:00
Enrico Ros 9129e9b507 Visibility changes 2023-10-17 02:53:33 -07:00
Enrico Ros 911c2e8b27 Azure: remove router - obsolete since the llms transport unification 2023-10-17 01:10:08 -07:00
Enrico Ros a2de6e358c SSS: add Share to big-AGI
This is the first change to require server-side DB, and required to
pull in Prisma for ORM.

There are 3 env vars needed during build time and run time to activate this feature.
2023-10-17 01:09:43 -07:00
Enrico Ros 3971bcedda PWA: add Web Share helpers 2023-10-17 00:55:33 -07:00
Enrico Ros 725b08e021 Trade: server-side prisma utility function 2023-10-17 00:37:08 -07:00
Enrico Ros 49755abe8b Trade: add server-side-storage 2023-10-17 00:35:27 -07:00
Enrico Ros 45cfb14219 Misc: reuse origin functions 2023-10-17 00:34:04 -07:00
Enrico Ros fa656726ef UI: improve Modals 2023-10-17 00:32:53 -07:00
Enrico Ros 784d6361f8 Ignore more 2023-10-16 21:13:17 -07:00
Enrico Ros 66782faba4 tRPC: add a Node NextJS API route (Edge Function), in addition to the existing Edge Runtime
Move the 'trade' router from the Edge to the Node runtime.
2023-10-16 18:24:39 -07:00
Enrico Ros 8c40fadc2e Sharing: constrain the spec of the stored object 2023-10-16 17:33:57 -07:00
Enrico Ros 13b97e58f5 SSS: Sharing schema 2023-10-16 16:24:26 -07:00
Enrico Ros 50c1b84f94 UI: helpers for showing badges 2023-10-16 15:59:19 -07:00
Enrico Ros df76ec7d6f tRCP: move client code to src/common/utils 2023-10-16 15:25:56 -07:00
Enrico Ros 47539c8d44 tRCP: move server code to the new src/server 2023-10-16 15:20:14 -07:00
Enrico Ros 6022aeee50 Server-Side-Storage (SSS): use Prisma 2023-10-16 14:58:10 -07:00
Enrico Ros 7aaae21e0c Trade: move router locally 2023-10-16 01:00:03 -07:00
Enrico Ros 69db13e4c4 Privacy policy URL, available to the Client side 2023-10-16 00:45:31 -07:00
Enrico Ros 8661bf6fc8 Trade (import/export): cleanup 2023-10-16 00:44:32 -07:00
Enrico Ros 85562b5888 Mobile share_target: move to /launch 2023-10-15 23:53:00 -07:00
Enrico Ros 9a851e342f Azure: unhide gpt4-32k 2023-10-15 17:10:32 -07:00
Enrico Ros 5278c04051 Share: improve menu items 2023-10-15 16:52:27 -07:00
Enrico Ros 21a04212d5 OpenRouter: show free models 2023-10-15 16:51:56 -07:00
Enrico Ros 005ad5b042 OpenRouter: enable mistral 2023-10-15 16:34:42 -07:00
Enrico Ros e25c0dc006 OpenRouter: update models, and doc the gpt4 update prompt 2023-10-15 16:34:02 -07:00
Enrico Ros 09d38eb57c Merge pull request #171 from enricoros/llms-rework
Llms rework
2023-10-11 19:20:13 -07:00
Enrico Ros 19361ac7cb Update README.md 2023-10-10 19:35:38 -07:00
Enrico Ros 85e97e984b OpenRouter: update available model names 2023-10-10 02:24:25 -07:00
Enrico Ros dcd7f65223 Show when there's server-side key support 2023-10-10 01:45:11 -07:00
Enrico Ros 6ee231d271 OpenRouter: server-side API key support 2023-10-10 01:16:56 -07:00
Enrico Ros acd34f7b8d Rework of the LLM paths in progress 2023-10-06 02:03:15 -07:00
Jonty Brook e339262251 (feat): add support for cloudflare ai gateway openai endpoints 2023-10-05 15:25:16 +01:00
Enrico Ros acb06bcc6d Improve error reporting to debug #165 2023-10-03 22:01:57 -07:00
Enrico Ros 25da2556ac Render HTML within Code blocks 2023-09-29 07:44:39 -07:00
Enrico Ros b2f7c6f204 HTML block: render as HTML, e.g. in case of a full proxy 2023-09-29 07:29:20 -07:00
Enrico Ros 5272fa972a Merge pull request #163 from enricoros/feature-azure-openai
Land restructuring of the LLMs folder and partial Azure support. Full support will come next.
2023-09-22 23:07:00 -07:00
Enrico Ros 91353ced8a Azure: land in main, disable instancing as we finish it 2023-09-22 23:02:47 -07:00
Enrico Ros 3448267344 Llms: downgrade tsx -> ts (not required) 2023-09-22 22:47:40 -07:00
Enrico Ros 34c150924e Llms: bits 2023-09-22 22:29:33 -07:00
Enrico Ros 617f7676ce Llms: moved (client) vendors inside ../vendors 2023-09-22 22:20:22 -07:00
Enrico Ros adaff91225 Llms: removed and spread out llm.routes 2023-09-22 22:03:21 -07:00
Enrico Ros 1597675f4e Llms: separate client transport functions 2023-09-22 21:46:05 -07:00
Enrico Ros 2f92c81bee Llms: small move 2023-09-22 21:15:16 -07:00
Enrico Ros 1e0f11d064 Llms: move the server side proximally closer 2023-09-22 20:57:32 -07:00
Enrico Ros b26ddc422a Llms: remove the 'types' file and extract the vendor description out 2023-09-22 20:22:54 -07:00
Enrico Ros 813d95b898 Llms: move out icons 2023-09-22 19:27:33 -07:00
Enrico Ros 4f3f7963d0 Llms: cleanup routers 2023-09-22 19:15:38 -07:00
Enrico Ros 4d2209ca8d Anthropic: rename wiretypes 2023-09-22 19:14:41 -07:00
Enrico Ros 06e866a3e8 LLms: unify model priors 2023-09-22 18:31:02 -07:00
Enrico Ros aee6c85349 Llms: cleanups 2023-09-22 02:05:19 -07:00
Enrico Ros cd141048f5 Azure: immediate chat calls are working - integration still WIP 2023-09-22 02:28:38 -07:00
Enrico Ros 01ea8c7091 Azure: consistent naming of endpoints 2023-09-22 01:48:31 -07:00
Enrico Ros ce08f6fc50 Extended PlantUML support to mindmaps, improved syntax highlighting and language detection. 2023-09-22 01:42:43 -07:00
Enrico Ros b16fc0b0c1 Improve error messaging on http errors 2023-09-22 01:38:09 -07:00
Enrico Ros f69245adaa Partial Azure OpenAI Service support 2023-09-22 01:26:08 -07:00
Enrico Ros da751c06ca Anthropic: reduce access 2023-09-22 01:25:55 -07:00
Enrico Ros 6f92a2ec2c Anthropic: cleanup the hardcode 2023-09-22 01:23:29 -07:00
Enrico Ros bb42f3cd77 Begin the move of model descriptors to the server side 2023-09-21 23:19:08 -07:00
Enrico Ros 3fd4167335 Reminders 2023-09-20 01:24:15 -07:00
Enrico Ros 89820b94ef Latex Block: improve parsing, to fix https://github.com/enricoros/big-agi/issues/153#issuecomment-1698587371 2023-09-20 00:18:51 -07:00
Enrico Ros 99564e7fa1 OpenRouter: clarify config 2023-09-19 23:46:58 -07:00
Enrico Ros 908da13317 Fix ID clash 2023-09-19 23:42:05 -07:00
Enrico Ros a905a0d6e4 Augmented Chat (was ".. & Follow-Up") - first augmentation: Diagrams
Also fix function calling when with a mandatory function name,
even in case it doesn't hit.
2023-09-19 23:24:56 -07:00
Enrico Ros e86a83a676 Update voice dropdown 2023-09-19 21:17:17 -07:00
Enrico Ros 48dcdaaa57 React hooks for LLM/Persona selects 2023-09-19 21:07:38 -07:00
Enrico Ros c1d0093d48 3.5 Turbo Instruct models - not supported for Chat (only /completions) 2023-09-19 08:39:34 -07:00
Enrico Ros bfa4ab46b1 What happened here 2023-09-19 08:13:04 -07:00
Enrico Ros cb83f2ddb0 Update messages 2023-09-19 07:55:51 -07:00
Enrico Ros 14a5e3a9b8 Merge branch 'main' of https://github.com/DeFiFoFum/big-agi into DeFiFoFum-main 2023-09-19 07:39:07 -07:00
Enrico Ros 7cffa7931a Merge branch 'Ashesh3-IndexedDB-storage' 2023-09-19 07:29:29 -07:00
Enrico Ros c83637118c IndexedDB: strengthen the migration process, including localStorage backup (#158) 2023-09-19 07:26:23 -07:00
Enrico Ros f4cd952b1c OpenRouter: describe how to configure it 2023-09-18 23:13:23 -07:00
Enrico Ros 34d5a32fe5 Re-Enable Speech Recognition on Safari (still untested on iPhones) 2023-09-18 22:43:23 -07:00
Enrico Ros f1e5585337 Merged #158 2023-09-15 07:44:25 -07:00
Enrico Ros b7c7268806 Merge branch 'IndexedDB-storage' of https://github.com/Ashesh3/big-agi into Ashesh3-IndexedDB-storage 2023-09-15 12:03:09 -07:00
Enrico Ros 7f49ddb2cc Export text cleanup 2023-09-15 07:00:32 -07:00
Enrico Ros 0e7c9e3d45 A forked chat's messages are all done (not typing) 2023-09-13 07:26:52 -07:00
Enrico Ros 7e6a7a2e2a Show chat sizes when at the maximum 2023-09-13 07:23:39 -07:00
Enrico Ros 5cc2661375 Update news page 2023-09-12 00:46:14 -07:00
Enrico Ros 1aefd6836c Hovering models in the list adds the context window size overlay 2023-09-12 00:35:45 -07:00
Enrico Ros ae6b9c5eed OpenRouter: enable 2023-09-12 00:35:24 -07:00
Enrico Ros 901db54fe9 OpenRouter: send HTTP headers 2023-09-12 00:35:07 -07:00
Enrico Ros bf0068f015 OpenRouter: improve models list from the official docs page
https://openrouter.ai/docs#models
2023-09-12 00:34:36 -07:00
Enrico Ros d7e974fff4 OpenRouter: improve models list 2023-09-11 23:41:38 -07:00
Ashesh c611066a58 Add migration 2023-09-11 19:26:57 +00:00
Ashesh 67d3b21414 Update package-lock file 2023-09-11 19:26:11 +00:00
Ashesh3 3e38a71893 Use IndexedDB for storing chats 2023-08-30 12:23:11 +05:30
defifofum 9829f99055 Add chat drawer indicator for num conversations 2023-08-29 17:30:21 -05:00
Enrico Ros 17077e4c16 Render Latex via React-Katex (dynamic) 2023-08-25 19:45:57 -07:00
Enrico Ros f7aed8dea6 Improve block parsing, now with inline images, multiple-interleaved blocks support 2023-08-25 19:19:00 -07:00
Enrico Ros 9c1d5d761e Cleanups 2023-08-23 09:24:51 -07:00
Enrico Ros 74f8e66a70 Dynamic Code Highligher/Type Inferrer import. Large performance gains. 2023-08-23 08:58:34 -07:00
Enrico Ros f626d98fcf Remove scrollbar 2023-08-23 08:42:49 -07:00
Enrico Ros c0235e212f Clearer, faster, and more scoped Code Rendering 2023-08-23 08:15:27 -07:00
Enrico Ros 4d21e136bc Disable duplication when it would lose data 2023-08-23 07:05:17 -07:00
Enrico Ros 49f30a8e62 Roll packages 2023-08-23 00:43:41 -07:00
Enrico Ros 131b0c7351 Report error message from Google Search misconfiguration 2023-08-20 17:18:44 -07:00
Enrico Ros 4e3b1706cf ElevenLabs: support Streaming (output) endpoint & extract Voices Dropdown 2023-08-20 16:21:20 -07:00
Enrico Ros 9495a509e6 Update useSpeechRecognition to reflect enablement state
Note: the REF is holding the current state, while the state holds the
delayed state. But it's good enough to
never race-cond from the UI.
2023-08-20 16:17:28 -07:00
Enrico Ros c9b22215aa Vast improvements to the Speech Reco hook
- Fix initial delay, up to ~8s on desktop
- Much improve this state
- More improvements and cleanups
2023-08-20 15:42:02 -07:00
Enrico Ros e5e2b9b8b0 Personas: custom avatars and voices 2023-08-20 15:26:35 -07:00
Enrico Ros 98d791810a MPEG Streaming support in the ElevenLabs API
With this patch, the edge function begins streaming the content right away.
This leads to some minor optimization for the non-streaming use case, as there
is no large audio file kept on the server before transferring.
But this mainly creates a large optimization for the "streaming" use case,
as the data trickles in, it is sent to the client in pass-through fashion.
2023-08-17 23:54:57 -07:00
Enrico Ros 7b107df84e fix Chat Message errors displayed as objects 2023-08-17 08:35:32 -07:00
Enrico Ros bbf6e289d3 Cleanups 2023-08-17 14:53:13 -07:00
348 changed files with 19790 additions and 8273 deletions
+38
@@ -0,0 +1,38 @@
# big-AGI non-code files
/docs/
README.md
# Node build artifacts
/node_modules
/.pnp
.pnp.js
# next.js
/.next/
/out/
# production
/build
# versioning
.git/
.github/
# IDEs
.idea/
# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*
# local env files
.env*.local
# vercel
.vercel
# typescript
*.tsbuildinfo
next-env.d.ts
-28
@@ -1,28 +0,0 @@
# [Recommended for local deployments] Backend API key for OpenAI, so that users don't need one (UI > this > '')
OPENAI_API_KEY=
# [Optional] Sets the "OpenAI-Organization" header field to support organization users (UI > this > '')
OPENAI_API_ORG_ID=
# [Optional] Set the backend host for the OpenAI API, to enable platforms such as Helicone (UI > this > api.openai.com)
OPENAI_API_HOST=
# [Optional, Helicone] Helicone API key: https://www.helicone.ai/keys
HELICONE_API_KEY=
# [Optional] Anthropic credentials for the server-side
ANTHROPIC_API_KEY=
ANTHROPIC_API_HOST=
# [Optional] Enables ElevenLabs credentials on the server side - for optional text-to-speech
ELEVENLABS_API_KEY=
ELEVENLABS_API_HOST=
ELEVENLABS_VOICE_ID=
# [Optional] Prodia credentials on the server side - for optional image generation
PRODIA_API_KEY=
# [Optional, Search] Google Cloud API Key
# https://console.cloud.google.com/apis/credentials -
GOOGLE_CLOUD_API_KEY=
# [Optional, Search] Google Custom/Programmable Search Engine ID
# https://programmablesearchengine.google.com/
GOOGLE_CSE_ID=
+13
@@ -0,0 +1,13 @@
# These are supported funding model platforms
github: enricoros # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
+25
@@ -0,0 +1,25 @@
---
name: Bug report
about: Omg what's happening?
title: "[BUG]"
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
Where is it happening?
- Which device [Mobile/Desktop, os version]:
- Which browser:
- Which website:
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots / context**
If applicable, please add screenshots or additional context
@@ -0,0 +1,75 @@
---
name: Maintainers-Release
about: Maintainers
title: Release 1.2.3
labels: ''
assignees: enricoros
---
## Release checklist:
- [ ] Update the [Roadmap](https://github.com/users/enricoros/projects/4/views/2) calling out shipped features
- [ ] Create and update a [Milestone](https://github.com/enricoros/big-agi/milestones) for the release
- [ ] Assign this task
- [ ] Assign all the shipped roadmap Issues
- [ ] Assign the relevant [recently closed Issues](https://github.com/enricoros/big-agi/issues?q=is%3Aclosed+sort%3Aupdated-desc)
- Code changes:
- [ ] Create a release branch 'release-x.y.z': `git checkout -b release-1.2.3`
- [ ] Create a temporary tag `git tag v1.2.3 && git push opensource --tags`
- [ ] Create a [New Draft GitHub Release](https://github.com/enricoros/big-agi/releases/new), and generate the automated changelog (for new contributors)
- [ ] Update the release version in package.json, and `npm i`
- [ ] Update in-app News [src/apps/news/news.data.tsx](/src/apps/news/news.data.tsx)
- [ ] Update the in-app News version number
- [ ] Update the readme with the new release
- [ ] Copy the highlights to the [docs/changelog.md](/docs/changelog.md)
- Release:
- [ ] merge onto main
- [ ] verify deployment on Vercel
- [ ] verify container on GitHub Packages
- create a GitHub release
- [ ] name it 'vX.Y.Z'
- [ ] copy the release notes and link appropriate artifacts
- Announce:
- [ ] Discord announcement
- [ ] Twitter announcement
## Links
Milestone:
Former release task:
GitHub release:
## Artifacts Generation
1) The following is my opensource application
- paste README.md
2) I am announcing a new version, 1.7.0. The following were the announcements for 1.6.0: Discord announcement, GitHub Release, in-app news.data.tsx, changelog.md.
- paste the former: `discord announcement`, `GitHub release`, `news.data.tsx`, `changelog.md`
3) The following is the new data I have for 1.7.0
- paste the link to the milestone (closed) and each individual issue (content will be downloaded)
- paste the git changelog `git log v1.6.0..v1.7.0 | clip`
### news.data.tsx
```markdown
I need the following from you:
1. a table summarizing all the new features in 1.2.3 (description, significance, usefulness, do not link the commit, but have the issue number), which will be used for the artifacts later
2. after the table score each feature from a user impact and magnitude point of view
3. Improve the table, in decreasing order of importance for features, fixing any detail that's missing, in particular check if there are commits of significance from a user or developer point of view, which are not contained in the table
4. I want you then to update the news.data.tsx for the new release
```
### GitHub release
Now paste the former release (or 1.5.0 which was accurate and great), including the new contributors and
some stats (# of commits, etc.), and roll it for the new release.
### Discord announcement
```markdown
Can you generate my 1.2.3 big-AGI discord announcement from the GitHub Release announcement, and the in-app News?
```
+17
@@ -0,0 +1,17 @@
---
name: Roadmap request
about: Suggest a roadmap item
title: "[Roadmap]"
labels: ''
assignees: ''
---
**Why**
The reason behind the request - we'd love it framed as "users will be able to do X", rather than as a quick-aging hype-tech-of-the-day request
**Concise description**
A clear and concise description of what you want to happen.
**Requirements**
If you can, please detail the changes you expect in UX, user workflows, technology, architecture (if not, the reviewers will do it for you)
@@ -7,11 +7,15 @@
# To get a newer version, you will need to update the SHA.
# You can also reference a tag or branch, but the action may change without warning.
name: Create and publish a Docker image
name: Create and publish Docker images
on:
push:
branches: ['main']
branches:
- main
- main-stable # Trigger on pushes to the main-stable branch
tags:
- 'v*' # Trigger on version tags (e.g., v1.7.0)
env:
REGISTRY: ghcr.io
@@ -26,7 +30,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Log in to the Container registry
uses: docker/login-action@65b78e6e13532edd9afa3aa52ac7964289d1a9c1
@@ -40,11 +44,17 @@ jobs:
uses: docker/metadata-action@9ec57ed1fcdbf14dcef7dfbe97b2010124a938b7
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=development,enable=${{ github.ref == 'refs/heads/main' }}
type=raw,value=stable,enable=${{ github.ref == 'refs/heads/main-stable' }}
type=ref,event=tag # Use the tag name as a tag for tag builds
type=semver,pattern={{version}} # Generate semantic versioning tags for tag builds
- name: Build and push Docker image
uses: docker/build-push-action@f2a1d5e99d037542a71f64918e516c093c6f3fc4
with:
context: .
file: Dockerfile
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
@@ -27,6 +27,7 @@ yarn-error.log*
# local env files
.env
.env.*
# vercel
.vercel
@@ -1,42 +1,56 @@
# Test
FROM node:18-alpine as test-target
ENV NODE_ENV=development
ENV PATH $PATH:/usr/src/app/node_modules/.bin
# Base
FROM node:18-alpine AS base
ENV NEXT_TELEMETRY_DISABLED 1
WORKDIR /usr/src/app
# Dependencies
FROM base AS deps
WORKDIR /app
# Dependency files
COPY package*.json ./
COPY prisma ./prisma
# CI and release builds should use npm ci to fully respect the lockfile.
# Local development may use npm install for opportunistic package updates.
ARG npm_install_command=ci
RUN npm $npm_install_command
# Install dependencies, including dev (release builds should use npm ci)
ENV NODE_ENV development
RUN npm ci
# Builder
FROM base AS builder
WORKDIR /app
# Copy development deps and source
COPY --from=deps /app/node_modules ./node_modules
COPY . .
# Build
FROM test-target as build-target
ENV NODE_ENV=production
# Use build tools, installed as development packages, to produce a release build.
# Build the application
ENV NODE_ENV production
RUN npm run build
# Reduce installed packages to production-only.
# Reduce installed packages to production-only
RUN npm prune --production
# Archive
FROM node:18-alpine as archive-target
ENV NODE_ENV=production
ENV PATH $PATH:/usr/src/app/node_modules/.bin
# Runner
FROM base AS runner
WORKDIR /app
WORKDIR /usr/src/app
# As user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Include only the release build and production packages.
COPY --from=build-target /usr/src/app/node_modules node_modules
COPY --from=build-target /usr/src/app/.next .next
COPY --from=build-target /usr/src/app/public public
# Copy Built app
COPY --from=builder --chown=nextjs:nodejs /app/public public
COPY --from=builder --chown=nextjs:nodejs /app/.next .next
COPY --from=builder --chown=nextjs:nodejs /app/node_modules node_modules
# Minimal ENV for production
ENV NODE_ENV production
ENV PATH $PATH:/app/node_modules/.bin
# Run as non-root user
USER nextjs
# Expose port 3000 for the application to listen on
EXPOSE 3000
CMD ["next", "start"]
# Start the application
CMD ["next", "start"]
@@ -1,29 +1,77 @@
# `BIG-AGI` 🤖💬
# BIG-AGI 🧠✨
Welcome to `big-AGI` 👋 your personal AGI application
powered by OpenAI GPT-4 and beyond. Designed for smart humans and super-heroes,
this responsive web app comes with Personas, Drawing, Code Execution, PDF imports, Voice support,
data Rendering, AGI functions, chats and much more. Comes with plenty of `#big-AGI-energy` 🚀
Welcome to big-AGI 👋, the GPT application for professionals who need form, function,
simplicity, and speed. Powered by the latest models from 7 vendors, including
open-source models, `big-AGI` offers best-in-class Voice and Chat with AI Personas,
visualizations, coding, drawing, calling, and much more -- all in a polished UX.
[![Official Website](https://img.shields.io/badge/BIG--AGI.com-%23096bde?style=for-the-badge&logo=vercel&label=demo)](https://big-agi.com)
Pros use big-AGI. 🚀 Developers love big-AGI. 🤖
[![Official Website](https://img.shields.io/badge/BIG--AGI.com-%23096bde?style=for-the-badge&logo=vercel&label=launch)](https://big-agi.com)
Or fork & run on Vercel
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-agi&env=OPENAI_API_KEY,OPENAI_API_HOST&envDescription=OpenAI%20KEY%20for%20your%20deployment.%20Set%20HOST%20only%20if%20non-default.)
## Useful 👊
## 👉 [roadmap](https://github.com/users/enricoros/projects/4/views/2)
big-AGI is an open book; our **[public roadmap](https://github.com/users/enricoros/projects/4/views/2)**
shows the current developments and future ideas.
- Got a suggestion? [_Add your roadmap ideas_](https://github.com/enricoros/big-agi/issues/new?&template=roadmap-request.md)
- Want to contribute? [_Pick up a task!_](https://github.com/users/enricoros/projects/4/views/4) - _easy_ to _pro_
### What's New in 1.7.0 · Dec 10, 2023 · Attachment Theory 🌟
- **Attachments System Overhaul**: Drag, paste, link, snap, text, images, PDFs and more. [#251](https://github.com/enricoros/big-agi/issues/251)
- **Desktop Webcam Capture**: Image capture now available as Labs feature. [#253](https://github.com/enricoros/big-agi/issues/253)
- **Independent Browsing**: Full browsing support with Browserless. [Learn More](https://github.com/enricoros/big-agi/blob/main/docs/config-browse.md)
- **Overheat LLMs**: Push the creativity with higher LLM temperatures. [#256](https://github.com/enricoros/big-agi/issues/256)
- **Model Options Shortcut**: Quick adjust with `Ctrl+Shift+O`
- Optimized Voice Input and Performance
- Latest Ollama and Oobabooga models
- For developers: **Password Protection**: HTTP Basic Auth. [Learn How](https://github.com/enricoros/big-agi/blob/main/docs/deploy-authentication.md)
### What's New in 1.6.0 - Nov 28, 2023
- **Web Browsing**: Download web pages within chats - [browsing guide](https://github.com/enricoros/big-agi/blob/main/docs/config-browse.md)
- **Branching Discussions**: Create new conversations from any message
- **Keyboard Navigation**: Swift chat navigation with new shortcuts (e.g. ctrl+alt+left/right)
- **Performance Boost**: Faster rendering for a smoother experience
- **UI Enhancements**: Refined interface based on user feedback
- **New Features**: Anthropic Claude 2.1, `/help` command, and Flattener tool
- **For Developers**: Code quality upgrades and snackbar notifications
### What's New in 1.5.0 - Nov 19, 2023
- **Continued Voice**: Engage with hands-free interaction for a seamless experience
- **Visualization Tool**: Create data representations with our new visualization capabilities
- **Ollama Local Models**: Leverage local models support with our comprehensive guide
- **Text Tools**: Enjoy tools including highlight differences to refine your content
- **Mermaid Diagramming**: Render complex diagrams with our Mermaid language support
- **OpenAI 1106 Chat Models**: Experience the cutting-edge capabilities of the latest OpenAI models
- **SDXL Support**: Enhance your image generation with SDXL support for Prodia
- **Cloudflare OpenAI API Gateway**: Integrate with Cloudflare for a robust API gateway
- **Helicone for Anthropic**: Utilize Helicone's tools for Anthropic models
Check out the [big-AGI open roadmap](https://github.com/users/enricoros/projects/4/views/2), or
the [past releases changelog](docs/changelog.md).
## ✨ Key Features 👊
![Ask away, paste a ton, copy the gems](docs/pixels/big-AGI-compo1.png)
[More](docs/pixels/big-AGI-compo2b.png), [screenshots](docs/pixels).
- Engaging AI Personas
- Clean UX, w/ tokens counters
- Private: user-owned API keys and localStorage, self-hostable if you like
- Human I/O: Advanced voice support (TTS, STT)
- Machine I/O: PDF import & Summarization, code execution
- Many more updates & integrations: ElevenLabs, Helicone, Paste.gg, Prodia
- Coming up: automatic-AGI reasoning (Reason+Act) and more
- **AI Personas**: Tailor your AI interactions with customizable personas
- **Sleek UI/UX**: A smooth, intuitive, and mobile-responsive interface
- **Efficient Interaction**: Voice commands, OCR, and drag-and-drop file uploads
- **Multiple AI Models**: Choose from a variety of leading AI providers
- **Privacy First**: Self-host and use your own API keys for full control
- **Advanced Tools**: Execute code, import PDFs, and summarize documents
- **Seamless Integrations**: Enhance functionality with various third-party services
- **Open Roadmap**: Contribute to the progress of big-AGI
## Support 🙌
## 💖 Support
[//]: # ([![Official Discord](https://img.shields.io/discord/1098796266906980422?label=discord&logo=discord&logoColor=%23fff&style=for-the-badge)](https://discord.gg/MkH4qj2Jp9))
[![Official Discord](https://discordapp.com/api/guilds/1098796266906980422/widget.png?style=banner2)](https://discord.gg/MkH4qj2Jp9)
@@ -39,86 +87,14 @@ Or fork & run on Vercel
<br/>
## Latest Drops 💧🎁
#### 🚨 July/Aug: Back with the Cool features 🧠
- 🎉 **Camera OCR** - real-world AI - take a picture of a text, and chat with it
- 🎉 **Backup/Restore** - save chats, and restore them later
- 🎉 **[Local model support with Oobabooga server](docs/local-llm-text-web-ui.md)** - run your own LLMs!
- 🎉 **Flatten conversations** - conversations summarizer with 4 modes
- 🎉 **Fork conversations** - create a new chat, to experiment with different endings
- 🎉 New commands: /s to add a System message, and /a for an Assistant message
- 🎉 New Chat modes: Write-only - just appends the message, without assistant response
- 🎉 Fix STOP generation - in sync with the Vercel team to fix a long-standing NextJS issue
- 🎉 Fixes on the HTML block - particularly useful to see error pages
#### June: scale UP 🚀
- 🎉 **[New OpenAI Models](https://openai.com/blog/function-calling-and-other-api-updates) support** - 0613 models, including 16k and 32k
- 🎉 **Cleaner UI** - with rationalized Settings, Modals, and Configurators
- 🎉 **Dynamic Models Configurator** - easy connection with different model vendors
- 🎉 **Multiple Model Vendors Support** framework to support many LLM vendors
- 🎉 **Per-model Options** (temperature, tokens, etc.) for fine-tuning AI behavior to your needs
- 🎉 Support for GPT-4-32k
- 🎉 Improved Dialogs and Messages
- 🎉 Much Enhanced DX: TRPC integration, modularization, pluggable UI, etc
#### April / May: more #big-agi-energy
- 🎉 **[Google Search](docs/pixels/feature_react_google.png)** active in ReAct - add your keys to Settings > Google
Search
- 🎉 **[Reason+Act](docs/pixels/feature_react_turn_on.png)** preview feature - activate with 2-taps on the 'Chat' button
- 🎉 **[Image Generation](docs/pixels/feature_imagine_command.png)** using Prodia (BYO Keys) - /imagine - or menu option
- 🎉 **[Voice Synthesis](docs/pixels/feature_voice_1.png)** 📣 with ElevenLabs, including selection of custom voices
- 🎉 **[Precise Token Counter](docs/pixels/feature_token_counter.png)** 📈 extra-useful to pack the context window
- 🎉 **[Install Mobile APP](docs/pixels/feature_pwa.png)** 📲 looks like native (@harlanlewis)
- 🎉 **[UI language](docs/pixels/feature_language.png)** with auto-detect, and future app language! (@tbodyston)
- 🎉 **PDF Summarization** 🧩🤯 - ask questions to a PDF! (@fredliubojin)
- 🎉 **Code Execution: [Codepen](https://codepen.io/)/[Replit](https://replit.com/)** 💻 (@harlanlewis)
- 🎉 **[SVG Drawing](docs/pixels/feature_svg_drawing.png)** - draw with AI 🎨
- 🎉 Chats: multiple chats, AI titles, Import/Export, Selection mode
- 🎉 Rendering: Markdown, SVG, improved Code blocks
- 🎉 Integrations: OpenAI organization ID
- 🎉 [Cloudflare deployment instructions](docs/deploy-cloudflare.md),
[awesome-agi](https://github.com/enricoros/awesome-agi)
- 🎉 [Typing Avatars](docs/pixels/gif_typing_040123.gif) ⌨️
<!-- p><a href="docs/pixels/gif_typing_040123.gif"><img src="docs/pixels/gif_typing_040123.gif" width='700' alt="New Typing Avatars"/></a></p -->
#### March: first release
- 🎉 **[AI Personas](docs/pixels/feature_purpose_two.png)** - including Code, Science, Corporate, and Chat 🎭
- 🎉 **Privacy**: user-owned API keys 🔑 and localStorage 🛡️
- 🎉 **Context** - Attach or [Drag & Drop files](docs/pixels/feature_drop_target.png) to add them to the prompt 📁
- 🎉 **Syntax highlighting** - for multiple languages 🌈
- 🎉 **Code Execution: Sandpack** -
[now on branch](https://github.com/enricoros/big-agi/commit/f678a0d463d5e9cf0733f577e11bd612b7902d89) `variant-code-execution`
- 🎉 Chat with GPT-4 and 3.5 Turbo 🧠💨
- 🎉 Real-time streaming of AI responses ⚡
- 🎉 **Voice Input** 🎙️ - works great on Chrome / Windows
- 🎉 Integration: **[Paste.gg](docs/pixels/feature_paste_gg.png)** integration for chat sharing 📥
- 🎉 Integration: **[Helicone](https://www.helicone.ai/)** integration for API observability 📊
- 🌙 Dark mode - Wide mode ⛶
<br/>
## Why this? 💡
Because the official Chat ___lacks important features___, is ___more limited than the API___, at times
___slow or unavailable___, and you cannot deploy it yourself, remix it, add features, or share it with
your friends.
Our users report that ___big-AGI is faster___, ___more reliable___, and ___feature-rich___,
with features that matter to them.
![Much features, so fun](docs/pixels/big-AGI-compo2b.png)
## Develop 🧩
## 🧩 Develop
![TypeScript](https://img.shields.io/badge/TypeScript-007ACC?style=&logo=typescript&logoColor=white)
![React](https://img.shields.io/badge/React-61DAFB?style=&logo=react&logoColor=black)
![Next.js](https://img.shields.io/badge/Next.js-000000?style=&logo=vercel&logoColor=white)
Clone this repo, install the dependencies, and run the development server:
Clone this repo, install the dependencies (all locally), and run the development server (which auto-watches the
files for changes):
```bash
git clone https://github.com/enricoros/big-agi.git
@@ -127,36 +103,58 @@ npm install
npm run dev
```
Now the app should be running on `http://localhost:3000`
The development app will be running on `http://localhost:3000`. Development builds have the advantage of not requiring
a build step, but can be slower than production builds. Also, development builds won't have timeouts on edge functions.
### Integrations:
## 🌐 Deploy manually
* [ElevenLabs](https://elevenlabs.io/) Voice Synthesis (bring your own voice too) - Settings > Text To Speech
* [Helicone](https://www.helicone.ai/) LLM Observability Platform - Settings > Advanced > API Host: 'oai.hconeai.com'
* [Paste.gg](https://paste.gg/) Paste Sharing - Chat Menu > Share via paste.gg
* [Prodia](https://prodia.com/) Image Generation - Settings > Image Generation > Api Key & Model
## Deploy with Docker 🐳
Specific docker information on [docs/deploy-docker.md](docs/deploy-docker.md). In short:
#### Pre-built image
Add your OpenAI API key to the `.env` file, then in a terminal run:
The _production_ build of the application is optimized for performance and is performed by the `npm run build` command,
after installing the required dependencies.
```bash
docker-compose up
# .. repeat the steps above up to `npm install`, then:
npm run build
npm run start -- --port 3000
```
#### Locally built image
The app will be running on the specified port, e.g. `http://localhost:3000`.
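To confirm the server is actually serving, a quick probe can help (a sketch assuming the default port 3000; it reports rather than fails when nothing is listening yet):

```shell
# Probe the local server; prints a status message either way.
if curl -sSf --max-time 5 http://localhost:3000/ > /dev/null 2>&1; then
  echo 'big-AGI is up'
else
  echo 'big-AGI is not reachable on port 3000'
fi
```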
If you wish to build the image yourself, run
Want to deploy with username/password? See the [Authentication](docs/deploy-authentication.md) guide.
## 🐳 Deploy with Docker
For more detailed information on deploying with Docker, please refer to the [docker deployment documentation](docs/deploy-docker.md).
Build and run:
```bash
docker build -t big-agi .
docker run --detach 'big-agi'
docker run -d -p 3000:3000 big-agi
```
Or run the official container:
- manually: `docker run -d -p 3000:3000 ghcr.io/enricoros/big-agi`
- or, with docker-compose: `docker-compose up`, or see [the documentation](docs/deploy-docker.md) for a compose file with integrated browsing
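For reference, a minimal `docker-compose.yml` sketch for the official container (the service name and fields here are illustrative; see [docs/deploy-docker.md](docs/deploy-docker.md) for the maintained files, including the browsing-enabled variant):

```yaml
version: '3.9'
services:
  big-agi:
    image: ghcr.io/enricoros/big-agi
    ports:
      - '3000:3000'
    env_file:
      - .env   # e.g. OPENAI_API_KEY=...
```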
## ☁️ Deploy on Cloudflare Pages
Please refer to the [Cloudflare deployment documentation](docs/deploy-cloudflare.md).
## 🚀 Deploy on Vercel
Create your GitHub fork, create a Vercel project over that fork, and deploy it. Or press the button below for convenience.
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fenricoros%2Fbig-agi&env=OPENAI_API_KEY,OPENAI_API_HOST&envDescription=OpenAI%20KEY%20for%20your%20deployment.%20Set%20HOST%20only%20if%20non-default.)
## Integrations:
* Local models: Ollama, Oobabooga, LocalAi, etc.
* [ElevenLabs](https://elevenlabs.io/) Voice Synthesis (bring your own voice too) - Settings > Text To Speech
* [Helicone](https://www.helicone.ai/) LLM Observability Platform - Models > OpenAI > Advanced > API Host: 'oai.hconeai.com'
* [Paste.gg](https://paste.gg/) Paste Sharing - Chat Menu > Share via paste.gg
* [Prodia](https://prodia.com/) Image Generation - Settings > Image Generation > Api Key & Model
<br/>
This project is licensed under the MIT License.
@@ -0,0 +1,52 @@
import { createEmptyReadableStream, safeErrorString, serverFetchOrThrow } from '~/server/wire';
import { elevenlabsAccess, elevenlabsVoiceId, ElevenlabsWire, speechInputSchema } from '~/modules/elevenlabs/elevenlabs.router';
/* NOTE: Why does this file even exist?
This file is a workaround for a limitation in tRPC; it does not support ArrayBuffer responses,
and that would force us to use base64 encoding for the audio data, which would be a waste of
bandwidth. So instead, we use this file to make the request to ElevenLabs, and then return the
response as an ArrayBuffer. Unfortunately this means duplicating the code in the server-side
and client-side vs. the tRPC implementation. So at least we recycle the input structures.
*/
const handler = async (req: Request) => {
try {
// construct the upstream request
const {
elevenKey, text, voiceId, nonEnglish,
streaming, streamOptimization,
} = speechInputSchema.parse(await req.json());
const path = `/v1/text-to-speech/${elevenlabsVoiceId(voiceId)}` + (streaming ? `/stream?optimize_streaming_latency=${streamOptimization || 1}` : '');
const { headers, url } = elevenlabsAccess(elevenKey, path);
const body: ElevenlabsWire.TTSRequest = {
text: text,
...(nonEnglish && { model_id: 'eleven_multilingual_v1' }),
};
// elevenlabs POST
const upstreamResponse: Response = await serverFetchOrThrow(url, 'POST', headers, body);
// NOTE: this is disabled, as we pass through what we get upstream for speed; it is not worth
// waiting for the entire audio to download before we send it to the client
// if (!streaming) {
// const audioArrayBuffer = await upstreamResponse.arrayBuffer();
// return new NextResponse(audioArrayBuffer, { status: 200, headers: { 'Content-Type': 'audio/mpeg' } });
// }
// stream the data to the client
const audioReadableStream = upstreamResponse.body || createEmptyReadableStream();
return new Response(audioReadableStream, { status: 200, headers: { 'Content-Type': 'audio/mpeg' } });
} catch (error: any) {
const fetchOrVendorError = safeErrorString(error) + (error?.cause ? ' · ' + error.cause : '');
console.log(`api/elevenlabs/speech: fetch issue: ${fetchOrVendorError}`);
return new Response(`[Issue] elevenlabs: ${fetchOrVendorError}`, { status: 500 });
}
};
export const runtime = 'edge';
export { handler as POST };
@@ -0,0 +1,2 @@
export const runtime = 'edge';
export { openaiStreamingRelayHandler as POST } from '~/modules/llms/transports/server/openai/openai.streaming';
@@ -0,0 +1,19 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouterEdge } from '~/server/api/trpc.router-edge';
import { createTRPCFetchContext } from '~/server/api/trpc.server';
const handlerEdgeRoutes = (req: Request) =>
fetchRequestHandler({
router: appRouterEdge,
endpoint: '/api/trpc-edge',
req,
createContext: createTRPCFetchContext,
onError:
process.env.NODE_ENV === 'development'
? ({ path, error }) => console.error(`❌ tRPC-edge failed on ${path ?? '<no-path>'}:`, error)
: undefined,
});
export const runtime = 'edge';
export { handlerEdgeRoutes as GET, handlerEdgeRoutes as POST };
@@ -0,0 +1,19 @@
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouterNode } from '~/server/api/trpc.router-node';
import { createTRPCFetchContext } from '~/server/api/trpc.server';
const handlerNodeRoutes = (req: Request) =>
fetchRequestHandler({
router: appRouterNode,
endpoint: '/api/trpc-node',
req,
createContext: createTRPCFetchContext,
onError:
process.env.NODE_ENV === 'development'
? ({ path, error }) => console.error(`❌ tRPC-node failed on ${path ?? '<no-path>'}:`, error)
: undefined,
});
export const runtime = 'nodejs';
export { handlerNodeRoutes as GET, handlerNodeRoutes as POST };
@@ -1,3 +1,7 @@
# Very simple docker-compose file to run the app on http://localhost:3000 (or http://127.0.0.1:3000).
#
# For more examples, such as running big-AGI alongside a web browsing service, see the `docs/docker` folder.
version: '3.9'
services:
@@ -0,0 +1,123 @@
## Changelog
This is a high-level changelog that calls out the major features, batched
by release.
- For the live roadmap, please see [the GitHub project](https://github.com/users/enricoros/projects/4/views/2)
### 1.8.0 - Dec 2023
- work in progress: [big-AGI open roadmap](https://github.com/users/enricoros/projects/4/views/2), [help here](https://github.com/users/enricoros/projects/4/views/4)
- milestone: [1.8.0](https://github.com/enricoros/big-agi/milestone/8)
### What's New in 1.7.0 · Dec 10, 2023 · Attachment Theory 🌟
- **Attachments System Overhaul**: Drag, paste, link, snap, text, images, PDFs and more. [#251](https://github.com/enricoros/big-agi/issues/251)
- **Desktop Webcam Capture**: Image capture now available as Labs feature. [#253](https://github.com/enricoros/big-agi/issues/253)
- **Independent Browsing**: Full browsing support with Browserless. [Learn More](https://github.com/enricoros/big-agi/blob/main/docs/config-browse.md)
- **Overheat LLMs**: Push the creativity with higher LLM temperatures. [#256](https://github.com/enricoros/big-agi/issues/256)
- **Model Options Shortcut**: Quick adjust with `Ctrl+Shift+O`
- Optimized Voice Input and Performance
- Latest Ollama and Oobabooga models
- For developers: **Password Protection**: HTTP Basic Auth. [Learn How](https://github.com/enricoros/big-agi/blob/main/docs/deploy-authentication.md)
### What's New in 1.6.0 - Nov 28, 2023 · Surf's Up
- **Web Browsing**: Download web pages within chats - [browsing guide](https://github.com/enricoros/big-agi/blob/main/docs/config-browse.md)
- **Branching Discussions**: Create new conversations from any message
- **Keyboard Navigation**: Swift chat navigation with new shortcuts (e.g. ctrl+alt+left/right)
- **Performance Boost**: Faster rendering for a smoother experience
- **UI Enhancements**: Refined interface based on user feedback
- **New Features**: Anthropic Claude 2.1, `/help` command, and Flattener tool
- **For Developers**: Code quality upgrades and snackbar notifications
### What's New in 1.5.0 - Nov 19, 2023 · Loaded
- **Continued Voice**: Engage with hands-free interaction for a seamless experience
- **Visualization Tool**: Create data representations with our new visualization capabilities
- **Ollama Local Models**: Leverage local models support with our comprehensive guide
- **Text Tools**: Enjoy tools including highlight differences to refine your content
- **Mermaid Diagramming**: Render complex diagrams with our Mermaid language support
- **OpenAI 1106 Chat Models**: Experience the cutting-edge capabilities of the latest OpenAI models
- **SDXL Support**: Enhance your image generation with SDXL support for Prodia
- **Cloudflare OpenAI API Gateway**: Integrate with Cloudflare for a robust API gateway
- **Helicone for Anthropic**: Utilize Helicone's tools for Anthropic models
For Developers:
- Runtime Server-Side configuration: https://github.com/enricoros/big-agi/issues/189. Env vars are
not required to be set at build time anymore. The frontend will roundtrip to the backend at the
first request to get the configuration. See
https://github.com/enricoros/big-agi/blob/main/src/modules/backend/backend.router.ts.
- CloudFlare developers: please change the deployment command to
`rm app/api/trpc-node/[trpc]/route.ts && npx @cloudflare/next-on-pages@1`,
as we transitioned to the App router in NextJS 14. The documentation in
[docs/deploy-cloudflare.md](../docs/deploy-cloudflare.md) is updated
### 1.4.0: Sept/Oct: scale OUT
- **Expanded Model Support**: Azure and [OpenRouter](https://openrouter.ai/docs#models) models, including gpt-4-32k
- **Share and clone** conversations with public links
- Removed the 20 chats hard limit ([Ashesh3](https://github.com/enricoros/big-agi/pull/158))
- Latex Rendering
- Augmented Chat modes (Labs)
### July/Aug: More Better Faster
- **Camera OCR** - real-world AI - take a picture of a text, and chat with it
- **Anthropic models** support, e.g. Claude
- **Backup/Restore** - save chats, and restore them later
- **[Local model support with Oobabooga server](../docs/config-local-oobabooga)** - run your own LLMs!
- **Flatten conversations** - conversations summarizer with 4 modes
- **Fork conversations** - create a new chat, to try with different endings
- New commands: /s to add a System message, and /a for an Assistant message
- New Chat modes: Write-only - just appends the message, without assistant response
- Fix STOP generation - in sync with the Vercel team to fix a long-standing NextJS issue
- Fixes on the HTML block - particularly useful to see error pages
### June: scale UP
- **[New OpenAI Models](https://openai.com/blog/function-calling-and-other-api-updates) support** - 0613 models, including 16k and 32k
- **Cleaner UI** - with rationalized Settings, Modals, and Configurators
- **Dynamic Models Configurator** - easy connection with different model vendors
- **Multiple Model Vendors Support** framework to support many LLM vendors
- **Per-model Options** (temperature, tokens, etc.) for fine-tuning AI behavior to your needs
- Support for GPT-4-32k
- Improved Dialogs and Messages
- Much Enhanced DX: TRPC integration, modularization, pluggable UI, etc
### April / May: more #big-agi-energy
- **[Google Search](../docs/pixels/feature_react_google.png)** active in ReAct - add your keys to Settings > Google
Search
- **[Reason+Act](../docs/pixels/feature_react_turn_on.png)** preview feature - activate with 2-taps on the 'Chat' button
- **[Image Generation](../docs/pixels/feature_imagine_command.png)** using Prodia (BYO Keys) - /imagine - or menu option
- **[Voice Synthesis](../docs/pixels/feature_voice_1.png)** 📣 with ElevenLabs, including selection of custom voices
- **[Precise Token Counter](../docs/pixels/feature_token_counter.png)** 📈 extra-useful to pack the context window
- **[Install Mobile APP](../docs/pixels/feature_pwa.png)** 📲 looks like native (@harlanlewis)
- **[UI language](../docs/pixels/feature_language.png)** with auto-detect, and future app language! (@tbodyston)
- **PDF Summarization** 🧩🤯 - ask questions to a PDF! (@fredliubojin)
- **Code Execution: [Codepen](https://codepen.io/)/[Replit](https://replit.com/)** 💻 (@harlanlewis)
- **[SVG Drawing](../docs/pixels/feature_svg_drawing.png)** - draw with AI 🎨
- Chats: multiple chats, AI titles, Import/Export, Selection mode
- Rendering: Markdown, SVG, improved Code blocks
- Integrations: OpenAI organization ID
- [Cloudflare deployment instructions](../docs/deploy-cloudflare.md),
[awesome-agi](https://github.com/enricoros/awesome-agi)
- [Typing Avatars](../docs/pixels/gif_typing_040123.gif) ⌨️
<!-- p><a href="../docs/pixels/gif_typing_040123.gif"><img src="../docs/pixels/gif_typing_040123.gif" width='700' alt="New Typing Avatars"/></a></p -->
### March: first release
- **[AI Personas](../docs/pixels/feature_purpose_two.png)** - including Code, Science, Corporate, and Chat 🎭
- **Privacy**: user-owned API keys 🔑 and localStorage 🛡️
- **Context** - Attach or [Drag & Drop files](../docs/pixels/feature_drop_target.png) to add them to the prompt 📁
- **Syntax highlighting** - for multiple languages 🌈
- **Code Execution: Sandpack** -
[now on branch](https://github.com/enricoros/big-agi/commit/f678a0d463d5e9cf0733f577e11bd612b7902d89) `variant-code-execution`
- Chat with GPT-4 and 3.5 Turbo 🧠💨
- Real-time streaming of AI responses ⚡
- **Voice Input** 🎙️ - works great on Chrome / Windows
- Integration: **[Paste.gg](../docs/pixels/feature_paste_gg.png)** integration for chat sharing 📥
- Integration: **[Helicone](https://www.helicone.ai/)** integration for API observability 📊
- 🌙 Dark mode - Wide mode ⛶
@@ -0,0 +1,87 @@
# Configuring Azure OpenAI Service with `big-AGI`
The entire procedure takes about 5 minutes and involves creating an Azure account,
setting up the Azure OpenAI service, deploying models, and configuring `big-AGI`
to access these models.
Please note that Azure operates on a 'pay-as-you-go' pricing model and requires
credit card information tied to a 'subscription' to the Azure service.
## Configuring `big-AGI`
If you have an `API Endpoint` and `API Key`, you can configure big-AGI as follows:
1. Launch the `big-AGI` application
2. Go to the **Models** settings
3. Add a Vendor and select **Azure OpenAI**
- Enter the Endpoint (e.g., 'https://your-openai-api-1234.openai.azure.com/')
- Enter the API Key (e.g., 'fd5...........................ba')
The deployed models are now available in the application. If you don't have a configured
Azure OpenAI service instance, continue with the next section.
## Setting Up Azure
### Step 1: Azure Account & Subscription
1. Create an account on [azure.microsoft.com](https://azure.microsoft.com/en-us/)
2. Go to the [Azure Portal](https://portal.azure.com/)
3. Click on **Create a resource** in the top left corner
4. Search for **Subscription** and select **[Create Subscription](https://portal.azure.com/#create/Microsoft.Subscription)**
- Fill in the required fields and click on **Create**
- Note down the **Subscription ID** (e.g., `12345678-1234-1234-1234-123456789012`)
### Step 2: Apply for Azure OpenAI Service
We'll now create "OpenAI"-specific resources on Azure. This requires you to 'apply' for
access, and acceptance should be quick (sometimes just minutes).
1. Visit [Azure OpenAI Service](https://aka.ms/azure-openai)
2. Click on **Apply for access**
- Fill in the required fields (including the subscription ID) and click on **Apply**
Once your application is accepted, you can create OpenAI resources on Azure.
### Step 3: Create Azure OpenAI Resource
For more information, see [Azure: Create and deploy OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)
1. Click on **Create a resource** in the top left corner
2. Search for **OpenAI** and select **[Create OpenAI](https://portal.azure.com/#create/Microsoft.CognitiveServicesOpenAI)**
3. Fill in the necessary fields on the **Create OpenAI** page
![Creating an OpenAI service](pixels/config-azure-openai-create.png)
- Select the subscription
- Select a resource group or create a new one
- Select the region. Note that the region determines the available models.
> For instance, **Canada East** offers GPT-4-32k models. For the full list, see [GPT-4 models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)
- Name the service (e.g., `your-openai-api-1234`)
- Select a pricing tier (e.g., `S0` for standard)
- Select: "All networks, including the internet, can access this resource."
- Click on **Review + create** and then **Create**
After creating the resource, you can access the API Keys and Endpoints. At any point, you can go to
the OpenAI Service instance page to get this information.
- Click on **Go to resource**
- Click on **Develop**
- Copy the `Endpoint`, called "Language API", e.g. 'https://your-openai-api-1234.openai.azure.com/'
- Copy `KEY 1`
### Step 4: Deploy Models
By default, Azure OpenAI resource instances don't have models available. You need to deploy the models you want to use.
1. Click on **Model Deployments > Manage Deployments**
2. Click on **+Create New Deployment**
![Deploying a model](pixels/config-azure-openai-deploy.png)
- Select the model you want to deploy
- Optionally select a version
- Name the deployment, e.g., `gpt4-32k-0613`
Repeat as necessary for each model you want to deploy.
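As a quick sanity check outside of `big-AGI`, you can call a deployment directly with `curl`. This is a sketch: the resource name, deployment name, and API key below reuse the placeholder examples from this guide, so substitute your own values, and note that the `api-version` shown is one of the generally-available versions at the time of writing.

```shell
# Placeholders: substitute your own resource name, deployment name, and key
AZURE_ENDPOINT="https://your-openai-api-1234.openai.azure.com"
DEPLOYMENT="gpt4-32k-0613"
API_KEY="fd5...........................ba"

# Send a minimal chat completion request to the deployed model
curl "$AZURE_ENDPOINT/openai/deployments/$DEPLOYMENT/chat/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: $API_KEY" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

A successful response contains a `choices` array; a 401 usually means a wrong key, while a 404 usually means a wrong deployment name or API version.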
## Resources
- [Azure OpenAI Service Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/)
- [Guide: Create an Azure OpenAI Resource](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource?pivots=web-portal)
- [Azure OpenAI Models](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models)
# Browse Functionality in big-AGI 🌐
The Browse feature allows users to load web pages across various components of `big-AGI`. It is supported by Puppeteer-based
browsing services, which are the most common way to render web pages in a headless environment.
Once configured, the Browsing service provides this functionality:
- **Paste a URL**: Simply paste/drag a URL into the chat, and `big-AGI` will load and attach the page (very effective)
- **Use /browse**: Type `/browse [URL]` in the chat to command `big-AGI` to load the specified web page
- **ReAct**: ReAct will automatically use the `loadURL()` function whenever a URL is encountered
First of all, you need to procure a Puppeteer web browsing service endpoint. `big-AGI` supports services like:
| Service | Working | Type | Location | Special Features |
|--------------------------------------------------------------------------------------|---------|-------------|----------------|---------------------------------------------|
| [BrightData Scraping Browser](https://brightdata.com/products/scraping-browser) | Yes | Proprietary | Cloud | Advanced scraping tools, global IP pool |
| [Cloudflare Browser Rendering](https://developers.cloudflare.com/browser-rendering/) | ? | Proprietary | Cloud | Integrated CDN, optimized browser rendering |
| ⬇️ [Browserless 2.0](#-browserless-20) | Okay | OpenSource | Local (Docker) | Parallelism, debug viewer, advanced APIs |
| ⬇️ [Your Chrome Browser (ALPHA)](#-your-own-chrome-browser) | Alpha | Proprietary | Local (Chrome) | Personal, experimental use (ALPHA!) |
| other Puppeteer-based WSS Services | ? | Varied | Cloud/Local | Service-specific features |
## Configuration
1. **Procure an Endpoint**
- Ensure that your browsing service is running (remote or local) and has a WebSocket endpoint available
- Write down the address: `wss://${auth}@{some host}:{port}`, or ws:// for local services on your machine
2. **Configure `big-AGI`**
- Navigate to **Preferences** > **Tools** > **Browse**
- Enter the 'wss://...' connection string provided by your browsing service
3. **Enable Features**: Choose which browse-related features you want to enable:
- **Attach URLs**: Automatically load and attach a page when pasting a URL into the composer
- **/browse Command**: Use the `/browse` command in the chat to load a web page
- **ReAct**: Enable the `loadURL()` function in ReAct for advanced interactions
### 🌐 Browserless 2.0
[Browserless 2.0](https://github.com/browserless/browserless) is a Docker-based service that provides a headless
browsing experience compatible with `big-AGI`. It is an open-source solution that simplifies
web automation tasks in a scalable manner.
Launch Browserless with:
```bash
docker run -p 9222:3000 browserless/chrome:latest
```
Now you can use the following connection string in `big-AGI`: `ws://127.0.0.1:9222`.
You can also browse to [http://127.0.0.1:9222](http://127.0.0.1:9222) to see the Browserless debug viewer
and configure some options.
Note: if you are using `docker-compose`, please see the
[docker/docker-compose-browserless.yaml](docker/docker-compose-browserless.yaml) file for an example
on how to run `big-AGI` and Browserless simultaneously in a single application.
### 🌐 Your own Chrome browser
***EXPERIMENTAL - UNTESTED*** - You can use your own Chrome browser as a browsing service, by configuring it to expose
a WebSocket endpoint.
- close all the Chrome instances (on Windows, check the Task Manager if still running)
- start Chrome with the following command line options (on Windows, you can edit the shortcut properties):
- `--remote-debugging-port=9222`
- go to http://localhost:9222/json/version and copy the `webSocketDebuggerUrl` value
- it should be something like: `ws://localhost:9222/...`
- paste the value into the Endpoint configuration (see point 2 in the configuration)
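The steps above can also be done from the command line. This is a sketch that assumes `curl` is installed and Chrome is already listening on port 9222:

```shell
# Query Chrome's DevTools metadata and extract the WebSocket debugger URL
curl -s http://localhost:9222/json/version |
  sed -n 's/.*"webSocketDebuggerUrl": *"\([^"]*\)".*/\1/p'
```

The printed `ws://localhost:9222/...` value is what you paste into the Endpoint configuration.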
### Server-Side Configuration
You can set the Puppeteer WebSocket endpoint (`PUPPETEER_WSS_ENDPOINT`) in the deployment before running it.
This is useful for self-hosted instances or when you want to pre-configure the endpoint for all users, and will
allow you to skip points 2 and 3 above.
Always deploy your own user authentication, authorization, and security solution. In particular, the tRPC
route that provides the browsing service should be secured with user authentication and authorization,
to prevent unauthorized access to the browsing service.
## Support
If you encounter any issues or have questions about configuring the browse functionality, join our community on Discord for support and discussions.
[![Official Discord](https://discordapp.com/api/guilds/1098796266906980422/widget.png?style=banner2)](https://discord.gg/MkH4qj2Jp9)
---
Enjoy the enhanced browsing experience within `big-AGI` and explore the web without ever leaving your chat!
# Local LLM integration with `localai`
Integrate local Large Language Models (LLMs) with [LocalAI](https://localai.io).
_Last updated Nov 7, 2023_
## Instructions
### LocalAI installation and configuration
Follow the guide at: https://localai.io/basics/getting_started/
For instance with [Use luna-ai-llama2 with docker compose](https://localai.io/basics/getting_started/#example-use-luna-ai-llama2-model-with-docker-compose):
- clone LocalAI
- get the model
- copy the prompt template
- start docker
- the server will then be listening on `localhost:8080`
- verify it works by opening [http://localhost:8080/v1/models](http://localhost:8080/v1/models) in
  your browser and checking that the model you downloaded is listed
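The same check can be done from the command line. This is a sketch that assumes `curl` and `jq` are installed and LocalAI is running on `localhost:8080`:

```shell
# List the model IDs exposed by LocalAI's OpenAI-compatible endpoint
curl -s http://localhost:8080/v1/models | jq -r '.data[].id'
```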
### Integrating LocalAI with big-AGI
- Go to Models > Add a model source of type: **LocalAI**
- Enter the address: `http://localhost:8080` (default)
- If running remotely, replace localhost with the IP of the machine. Make sure to use the **IP:Port** format
- Load the models
- Select model & Chat
> NOTE: LocalAI does not list details about the models. Every model is assumed to be
> capable of chatting, with a context window of 4096 tokens.
> Please update the [src/modules/llms/transports/server/openai/models.data.ts](../src/modules/llms/transports/server/openai/models.data.ts)
> file with the mapping information between LocalAI model IDs and names/descriptions/tokens, etc.
# Local LLM Integration with `text-web-ui` :llama:
Integrate local Large Language Models (LLMs) with
[oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui),
a specialized interface that includes a custom variant of the OpenAI API for a smooth integration process.
_Last updated on Dec 7, 2023_
### Components
The implementation of local LLMs involves the following components:
* **text-generation-webui**: A Python application with a Gradio web UI for operating Large Language Models.
* **Local Large Language Models "LLMs"**: Use large language models on your personal computer with consumer-grade GPUs or CPUs.
* **big-AGI**: An LLM UI that offers features such as Personas, OCR, Voice Support, Code Execution, AGI functions, and more.
## Instructions
This guide assumes that **big-AGI** is already installed on your system. Note that the text-generation-webui IP address must be accessible from the server running **big-AGI**.
### Text-web-ui Installation & Configuration:
1. Install [text-generation-webui](https://github.com/oobabooga/text-generation-webui#Installation):
- Follow the instructions on the official page (basically clone the repo and run a script) [~10 minutes]
- Stop the Web UI as we need to modify the startup flags to enable the OpenAI API
2. Enable the **openai extension**
- Edit `CMD_FLAGS.txt`
- Make sure that `--listen --api` is present and uncommented
3. Restart text-generation-webui
- Double-click on "start"
- You should see something like:
```
2023-12-07 21:51:21 INFO:Loading the extension "openai"...
2023-12-07 21:51:21 INFO:OpenAI-compatible API URL:
http://0.0.0.0:5000
...
INFO: Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
Running on local URL: http://0.0.0.0:7860
```
- This shows that:
- The Web UI is running on port 7860: http://127.0.0.1:7860
- **The OpenAI API is running on port 5000: http://127.0.0.1:5000**
4. Load your first model
- Open the text-generation-webui at [127.0.0.1:7860](http://127.0.0.1:7860/)
- Switch to the **Model** tab
- Download, for instance, `TheBloke/Llama-2-7B-Chat-GPTQ`
- Select the model once it's loaded
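Before switching over to `big-AGI`, you can confirm that the OpenAI-compatible API is up. This is a sketch that assumes `curl` is installed and the API is listening on port 5000 as shown in the log above:

```shell
# Should return a JSON list of models from the openai extension
curl -s http://127.0.0.1:5000/v1/models
```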
### Integrating text-web-ui with big-AGI:
- Go to Models > Add a model source of type: **Oobabooga**
- Enter the address: `http://127.0.0.1:5000`
- If running remotely, replace 127.0.0.1 with the IP of the machine. Make sure to use the **IP:Port** format
- Load the models
- The active model must be selected and LOADED on the text-generation-webui as it doesn't support model switching or parallel requests.
- Select model & Chat
![config-oobabooga-0.png](pixels/config-oobabooga-0.png)
Enjoy the privacy and flexibility of local LLMs with `big-AGI` and `text-generation-webui`!
# `Ollama` x `big-AGI` :llama:
This guide helps you connect [Ollama](https://ollama.ai) [models](https://ollama.ai/library) to
[big-AGI](https://big-agi.com) for a professional AI/AGI operation and a good UI/Conversational
experience. The integration brings the popular big-AGI features to Ollama, including: voice chats,
editing tools, models switching, personas, and more.
![config-local-ollama-0-example.png](pixels/config-ollama-0-example.png)
## Quick Integration Guide
1. **Ensure Ollama API Server is Running**: Before starting, make sure your Ollama API server is up and running.
2. **Add Ollama as a Model Source**: In `big-AGI`, navigate to the **Models** section, select **Add a model source**, and choose **Ollama**.
3. **Enter Ollama Host URL**: Provide the Ollama Host URL where the API server is accessible (e.g., `http://localhost:11434`).
4. **Refresh Model List**: Once connected, refresh the list of available models to include the Ollama models.
5. **Start Using AI Personas**: Select an Ollama model and begin interacting with AI personas tailored to your needs.
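For step 1, you can check that the Ollama API server is reachable from the command line. This is a sketch that assumes `curl` is installed and Ollama is on its default port:

```shell
# Lists the models pulled locally; an empty "models" array means none are pulled yet
curl -s http://localhost:11434/api/tags
```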
### Ollama: Installation and Setup
For detailed instructions on setting up the Ollama API server, please refer to the
[Ollama download page](https://ollama.ai/download) and [instructions for linux](https://github.com/jmorganca/ollama/blob/main/docs/linux.md).
### Visual Guide
* After adding the `Ollama` model vendor, entering the IP address of an Ollama server, and refreshing models:
<img src="pixels/config-ollama-1-models.png" alt="config-local-ollama-1-models.png" style="max-width: 320px;">
* The `Ollama` admin panel, with the `Pull` button highlighted, after pulling the "Yi" model:
<img src="pixels/config-ollama-2-admin-pull.png" alt="config-local-ollama-2-admin-pull.png" style="max-width: 320px;">
* You can now switch model/persona dynamically and text/voice chat with the models:
<img src="pixels/config-ollama-3-chat.png" alt="config-local-ollama-3-chat.png" style="max-width: 320px;">
### Advanced: Model parameters
For users who wish to delve deeper into advanced settings, `big-AGI` offers additional configuration options, such
as the model temperature, maximum tokens, etc.
### Advanced: Ollama under a reverse proxy
You can elegantly expose your Ollama server to the internet (and thus make it easier to use from your server-side
big-AGI deployments) by serving it from an http/https URL, such as: `https://yourdomain.com/ollama`
On Ubuntu Servers, you will need to install `nginx` and configure it to proxy requests to Ollama.
```bash
sudo apt update
sudo apt install nginx
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com
```
Then, edit the nginx configuration file `/etc/nginx/sites-enabled/default` and add the following block:
```nginx
location /ollama/ {
  proxy_pass http://localhost:11434/; # trailing slash strips the /ollama/ prefix before forwarding
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
# Disable buffering for the streaming responses
proxy_buffering off;
}
```
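After reloading nginx (e.g. `sudo systemctl reload nginx`), you can verify the proxy end-to-end. This is a sketch; `yourdomain.com` is the placeholder domain from the configuration above:

```shell
# Should return the same JSON as querying Ollama directly on port 11434
curl -s https://yourdomain.com/ollama/api/tags
```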
Reach out to our community if you need help with this.
### Community and Support
Join our community to share your experiences, get help, and discuss best practices:
[![Official Discord](https://discordapp.com/api/guilds/1098796266906980422/widget.png?style=banner2)](https://discord.gg/MkH4qj2Jp9)
---
`big-AGI` is committed to providing a powerful, intuitive, and privacy-respecting AI experience.
We are excited for you to explore the possibilities with Ollama models. Happy creating!
# OpenRouter Configuration
[OpenRouter](https://openrouter.ai) is a standalone, premium service
that provides access to [exclusive AI models](https://openrouter.ai/docs#models)
such as GPT-4 32k, Claude, and more. These models are typically not available to the public.
This document details the process of integrating OpenRouter with big-AGI.
### 1. OpenRouter Account Setup and API Key Generation
1. Register for an OpenRouter account at [openrouter.ai](https://openrouter.ai) by clicking on Sign In > Continue with Google.
2. Top up your account (minimum $5) by navigating to [openrouter.ai/account](https://openrouter.ai/account) > Add Credits > Pay with Stripe.
3. Generate an API key at [openrouter.ai/keys](https://openrouter.ai/keys) > API Key > Generate API Key.
- **Remember to copy and securely store your API key** - the key will not be displayed again and will be in the format `sk-or-v1-...`.
- Keep the key confidential as it can be used to expend your credits.
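You can check that your key is valid from the command line. This is a sketch that assumes `curl` is installed; the key shown is a placeholder for your own `sk-or-v1-...` key:

```shell
# List the models your OpenRouter account can access
curl -s https://openrouter.ai/api/v1/models \
  -H "Authorization: Bearer sk-or-v1-your-key-here"
```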
### 2. Integrating OpenRouter with big-AGI
1. Launch big-AGI, and navigate to the AI **Models** settings.
2. Add a Vendor, and select **OpenRouter**.
![feature-openrouter-add.png](pixels/feature-openrouter-add.png)
3. Input the API key into the **OpenRouter API Key** field, and load the Models.
![feature-openrouter-configure.png](pixels/feature-openrouter-configure.png)
4. OpenAI GPT4-32k and other models will now be accessible and selectable in the application.
### Pricing
OpenRouter independently manages its service and pricing and is not affiliated with big-AGI.
For more detailed information, please visit [this page](https://openrouter.ai/docs#models).
Please note that running large models such as GPT-4 32k can be costly and may rapidly consume
credits - a single prompt may cost $1 or more, at the time of writing.
# Authentication
`big-AGI` does not come with built-in authentication. To secure your deployment, you can implement authentication
in one of the following ways:
1. Build `big-AGI` with support for ⬇️ [HTTP Authentication](#http-authentication)
2. Utilize user authentication features provided by your ⬇️ [cloud deployment platform](#cloud-deployments-authentication)
3. Develop a custom authentication solution
<br/>
### HTTP Authentication
[HTTP Basic Authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication) is a simple method
to secure your application.
To enable it in `big-AGI`, you **must manually build the application**:
- Build `big-AGI` with HTTP authentication enabled:
- Clone the repository
- Rename `middleware_BASIC_AUTH.ts` to `middleware.ts`
- Build: usual simple build procedure (e.g. [Deploy manually](../README.md#-deploy-manually) or [Deploying with Docker](deploy-docker.md))
- Configure the following [environment variables](environment-variables.md) before launching `big-AGI`:
```dotenv
HTTP_BASIC_AUTH_USERNAME=<your username>
HTTP_BASIC_AUTH_PASSWORD=<your password>
```
- Start the application 🔒
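Once running, you can verify that the protection is active. This is a sketch assuming `curl` is installed and the app is served on `localhost:3000`; substitute your own credentials:

```shell
# Without credentials: expect a 401 status code
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000/

# With the configured credentials: expect a 200 status code
curl -s -o /dev/null -w '%{http_code}\n' -u "yourusername:yourpassword" http://localhost:3000/
```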
<br/>
### Cloud Deployments Authentication
> This approach allows you to enable authentication without rebuilding the application by using the features
> provided by your cloud platform to manage user accounts and access.
Many cloud deployment platforms offer built-in authentication mechanisms. Refer to the platform's documentation
for setup instructions:
1. [CloudFlare Access / Zero Trust](https://www.cloudflare.com/zero-trust/products/access/)
2. [Vercel Authentication](https://vercel.com/docs/security/deployment-protection/methods-to-protect-deployments/vercel-authentication)
3. [Vercel Password Protection](https://vercel.com/docs/security/deployment-protection/methods-to-protect-deployments/password-protection)
4. Let us know when you test more solutions (Heroku, AWS IAM, Google IAP, etc.)
# Deploying a Next.js App on Cloudflare Pages

> WARNING: Cloudflare Pages does not support traditional NodeJS runtimes, but only Edge Runtime functions.
>
> In this project we use Prisma connected to serverless Postgres, which at the moment cannot run on
> edge functions, so we cannot deploy this project on Cloudflare Pages.
>
> Workaround: Step 3.4 has been added below to DELETE the traditional NodeJS runtime, which means that some
> parts of this application will not work.
> - [Side effects](https://github.com/enricoros/big-agi/blob/main/src/apps/chat/trade/server/trade.router.ts#L19):
>   sharing functionality to DB, import from ChatGPT share, and post to Paste.GG will not work
> - See [Issue 174](https://github.com/enricoros/big-agi/issues/174)
>
> Longer term: follow [prisma/prisma: Support Edge Function deployments](https://github.com/prisma/prisma/issues/21394)
> and convert the Node runtime to Edge runtime once Prisma supports it.

This guide provides steps to deploy your Next.js app on Cloudflare Pages.
It is based on the [official Cloudflare developer documentation](https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/),
with some additional steps.

## Step 1: Repository Forking

Fork the repository to your personal GitHub account.

## Step 2: Linking Cloudflare Pages to Your GitHub Account

1. Navigate to the Cloudflare Pages section and click on the `Create a project` button.
2. Click `Connect To Git` and grant Cloudflare Pages access to either all GitHub account repositories or selected repositories.
   We recommend using selected Repo access and selecting the forked repository from step 1.

## Step 3: Configuring Build and Deployments

1. After selecting the forked GitHub repository, click the **Begin Setup** button
2. On this page, set your **Project name**, **Production branch** (e.g., main), and your Build settings
3. Choose `Next.js` from the **Framework preset** dropdown menu
4. Set a custom **Build Command**:
   - `rm app/api/trpc-node/[trpc]/route.ts && npx @cloudflare/next-on-pages@1`
   - see the tradeoffs for this deletion in the notice at the top
5. Keep the **Build output directory** as default
6. Click the **Save and Deploy** button

## Step 4: Monitor the Deployment Process

Watch the process run to initialize your build environment, clone the GitHub repo, build the application, and deploy to
the Cloudflare Network. Once that is done, proceed to the project you created.

## Step 5: Required: Set the `nodejs_compat` compatibility flag

1. Navigate to the [Settings > Functions](https://dash.cloudflare.com/?to=/:account/pages/view/:pages-project/settings/functions) page of your newly created project
2. Scroll to `Compatibility flags` and enter "`nodejs_compat`" for both **Production** and **Preview** environments.
   It should look like this: ![](pixels/config-deploy-cloudflare-compat2.png)
3. Re-deploy your project for the new flags to take effect

## Step 6: (Optional) Custom Domain Configuration

Use the `Custom domains` tab to set up your domain via CNAME.

## Step 7: (Optional) Access Policy and Web Analytics Configuration

Navigate to the `Settings` page and enable the following settings:

1. Access Policy: Restrict [preview deployments](https://developers.cloudflare.com/pages/platform/preview-deployments/)
   to members of your Cloudflare account via one-time pin and restrict the primary `*.YOURPROJECT.pages.dev` domain.
   Refer to [Cloudflare Pages known issues](https://developers.cloudflare.com/pages/platform/known-issues/#enabling-access-on-your-pagesdev-domain)
   for more details.
2. Enable Web Analytics.

Congratulations! You have successfully deployed your Next.js app on Cloudflare Pages.
# Deploying `big-AGI` with Docker

Utilize Docker containers to deploy the big-AGI application for an efficient and automated deployment process.
Docker ensures faster development cycles, easier collaboration, and seamless environment management.

Docker is a platform for developing, packaging, and deploying applications as lightweight containers, ensuring consistent behavior across environments.

## Build and run your container 🔧

1. **Clone big-AGI**
   ```bash
   git clone https://github.com/enricoros/big-agi.git
   cd big-agi
   ```
2. **Build the Docker Image**: build a local docker image from the provided Dockerfile:
   ```bash
   docker build -t big-agi .
   ```
3. **Run the Docker Container**: start a Docker container from the newly built image,
   and expose its http port 3000 to your `localhost:3000` using:
   ```bash
   docker run -d -p 3000:3000 big-agi
   ```
4. Browse to [http://localhost:3000](http://localhost:3000)

## Documentation

The big-AGI repository includes a Dockerfile and a GitHub Actions workflow for building and publishing a
Docker image of the application.

### Dockerfile

The [`Dockerfile`](../Dockerfile) describes how to create a Docker image. It establishes a Node.js environment,
installs dependencies, and creates a production-ready version of the application as a local container.

### Official container images

The [`.github/workflows/docker-image.yml`](../.github/workflows/docker-image.yml) file automates the
building and publishing of the Docker images to the GitHub Container Registry (ghcr) when changes are
pushed to the `main` branch.

Official pre-built containers: [ghcr.io/enricoros/big-agi](https://github.com/enricoros/big-agi/pkgs/container/big-agi)

Run the official pre-built containers with:
```bash
docker run -d -p 3000:3000 ghcr.io/enricoros/big-agi
```

### Run official containers

In addition, the repository also includes a `docker-compose.yaml` file, configured to run the pre-built
'ghcr' image. This file is used to define the `big-agi` service, the ports to expose, and the command to run.

If you have Docker Compose installed, you can run the Docker container with `docker-compose up`
to pull the Docker image (if it hasn't been pulled already) and start a Docker container. If you want to
update the image to the latest version, run `docker-compose pull` before starting the service.

```bash
docker-compose up -d
```

Leverage Docker's capabilities for a reliable and efficient big-AGI deployment.
# This file is used to run `big-AGI` and `browserless` with Docker Compose.
#
# The two containers are linked together and `big-AGI` is configured to use `browserless`
# as its Puppeteer endpoint (on the containers' internal network, it is available at browserless:3000).
#
# From your host, you can access big-AGI on http://127.0.0.1:3000 and browserless on http://127.0.0.1:9222.
#
# To start the containers, run:
# docker-compose -f docs/docker/docker-compose-browserless.yaml up
version: '3.9'
services:
big-agi:
image: ghcr.io/enricoros/big-agi:main
ports:
- "3000:3000"
env_file:
- .env
environment:
- PUPPETEER_WSS_ENDPOINT=ws://browserless:3000
command: [ "next", "start", "-p", "3000" ]
depends_on:
- browserless
browserless:
image: browserless/chrome:latest
ports:
- "9222:3000" # Map host's port 9222 to container's port 3000
environment:
- MAX_CONCURRENT_SESSIONS=10
# Environment Variables
This document provides an explanation of the environment variables used in the big-AGI application.
**All variables are optional**: _UI options_ take precedence over _backend environment variables_,
which take precedence over _defaults_. This file is kept in sync with [`../src/server/env.mjs`](../src/server/env.mjs).
### Setting Environment Variables
Environment variables can be set by creating a `.env` file in the root directory of the project.
The following is an example `.env` for copy-paste convenience:
```bash
# Database
POSTGRES_PRISMA_URL=
POSTGRES_URL_NON_POOLING=
# LLMs
OPENAI_API_KEY=
OPENAI_API_HOST=
OPENAI_API_ORG_ID=
AZURE_OPENAI_API_ENDPOINT=
AZURE_OPENAI_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_API_HOST=
OLLAMA_API_HOST=
OPENROUTER_API_KEY=
# Model Observability: Helicone
HELICONE_API_KEY=
# Text-To-Speech
ELEVENLABS_API_KEY=
ELEVENLABS_API_HOST=
ELEVENLABS_VOICE_ID=
# Text-To-Image
PRODIA_API_KEY=
# Google Custom Search
GOOGLE_CLOUD_API_KEY=
GOOGLE_CSE_ID=
# Browse
PUPPETEER_WSS_ENDPOINT=
# Backend Analytics
BACKEND_ANALYTICS=
# Backend HTTP Basic Authentication
HTTP_BASIC_AUTH_USERNAME=
HTTP_BASIC_AUTH_PASSWORD=
```
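If you ever need the same variables in a plain shell session (for instance, to test a script against the same configuration), a minimal sketch to export them from the `.env` file is:

```shell
# Export every KEY=VALUE variable defined in .env into the current shell session
if [ -f .env ]; then
  set -a       # auto-export all variables assigned while sourcing
  . ./.env     # source the .env file (simple KEY=VALUE lines only)
  set +a       # stop auto-exporting
fi
```

Note that Next.js itself reads `.env` automatically; this is only for shells and tooling outside the app.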
## Variables Documentation
### Database
To enable features such as Chat Link Sharing, you need to connect the backend to a database. A serverless
Postgres instance is required, which is available on Vercel, Neon, and more.
Also make sure to run `npx prisma db push` to create the initial schema on the database the
first time (or to update it at a later stage).
| Variable | Description |
|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `POSTGRES_PRISMA_URL` | The URL of the Postgres database used by Prisma - example: `postgres://USER:PASS@SOMEHOST.postgres.vercel-storage.com/SOMEDB?pgbouncer=true&connect_timeout=15` |
| `POSTGRES_URL_NON_POOLING` | The URL of the Postgres database without pooling |
### LLMs
The following variables, when set, will enable the corresponding LLMs on the server side, without
requiring the user to enter an API key.
| Variable | Description | Required |
|-----------------------------|-------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------|
| `OPENAI_API_KEY` | API key for OpenAI | Recommended |
| `OPENAI_API_HOST` | Changes the backend host for the OpenAI vendor, to enable platforms such as Helicone and CloudFlare AI Gateway | Optional |
| `OPENAI_API_ORG_ID` | Sets the "OpenAI-Organization" header field to support organization users | Optional |
| `AZURE_OPENAI_API_ENDPOINT` | Azure OpenAI endpoint - host only, without the path | Optional, but if set `AZURE_OPENAI_API_KEY` must also be set |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, to enable platforms such as [config-aws-bedrock.md](config-aws-bedrock.md) | Optional |
| `OLLAMA_API_HOST`           | Changes the backend host for the Ollama vendor. See [config-ollama.md](config-ollama.md)                                      | Optional                                                          |
| `OPENROUTER_API_KEY` | The API key for OpenRouter | Optional |
### Model Observability: Helicone
Helicone provides observability to your LLM calls. It is a paid service, with a generous free tier.
It is currently supported for:
- **Anthropic**: by setting the Helicone API key, Helicone is automatically activated
- **OpenAI**: you also need to set `OPENAI_API_HOST` to `oai.hconeai.com`, to enable routing
| Variable | Description |
|--------------------|--------------------------|
| `HELICONE_API_KEY` | The API key for Helicone |
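A minimal sketch of how these two variables combine for OpenAI routing. The `Helicone-Auth` header follows Helicone's documented convention; the function itself is illustrative and not big-AGI's code:

```typescript
// Illustrative: derive the OpenAI request target and headers from the
// environment. With OPENAI_API_HOST=oai.hconeai.com, calls flow through
// Helicone; HELICONE_API_KEY authenticates them to the Helicone service.
function buildOpenAITarget(env: Record<string, string | undefined>) {
  const host = env.OPENAI_API_HOST || 'api.openai.com';
  const headers: Record<string, string> = {
    'Authorization': `Bearer ${env.OPENAI_API_KEY || ''}`,
  };
  if (env.HELICONE_API_KEY)
    headers['Helicone-Auth'] = `Bearer ${env.HELICONE_API_KEY}`;
  return { url: `https://${host}/v1/chat/completions`, headers };
}
```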
### Specials
Enable the app to Talk, Draw, and Google things up.
| Variable | Description |
|:---------------------------|:------------------------------------------------------------------------------------------------------------------------|
| **Text-To-Speech** | [ElevenLabs](https://elevenlabs.io/) is a high quality speech synthesis service |
| `ELEVENLABS_API_KEY` | ElevenLabs API Key - used for calls, etc. |
| `ELEVENLABS_API_HOST` | Custom host for ElevenLabs |
| `ELEVENLABS_VOICE_ID` | Default voice ID for ElevenLabs |
| **Google Custom Search** | [Google Programmable Search Engine](https://programmablesearchengine.google.com/about/) produces links to pages |
| `GOOGLE_CLOUD_API_KEY` | Google Cloud API Key, used with the '/react' command - [Link to GCP](https://console.cloud.google.com/apis/credentials) |
| `GOOGLE_CSE_ID` | Google Custom/Programmable Search Engine ID - [Link to PSE](https://programmablesearchengine.google.com/) |
| **Text-To-Image** | [Prodia](https://prodia.com/) is a reliable image generation service |
| `PRODIA_API_KEY` | Prodia API Key - used with '/imagine ...' |
| **Browse** | |
| `PUPPETEER_WSS_ENDPOINT` | Puppeteer WebSocket endpoint - used for browsing, etc. |
| **Backend** | |
| `BACKEND_ANALYTICS` | Semicolon-separated list of analytics flags (see backend.analytics.ts). Flags: `domain` logs the responding domain. |
| `HTTP_BASIC_AUTH_USERNAME` | Username for HTTP Basic Authentication. See the [Authentication](deploy-authentication.md) guide. |
| `HTTP_BASIC_AUTH_PASSWORD` | Password for HTTP Basic Authentication. |
---
-45
@@ -1,45 +0,0 @@
# Local LLM Integration with `text-web-ui` :llama:
Integrate local Large Language Models (LLMs) using
[oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui),
a specialized interface that incorporates a custom variant of the OpenAI API for a seamless integration experience.
_Last changed on Aug 8, 2023, using the CMD_FLAGS.txt file_
### Components
Implementation of local LLMs requires the following components:
* **text-generation-webui**: a python application with Gradio web UI for running Large Language Models
* **local Large Language Models "LLMs"**: use large language models on your own computer and with consumer GPUs or CPUs
* **big-AGI**: LLM UI, offering features such as Personas, OCR, Voice Support, Code Execution, AGI functions, and more
## Instructions
This guide presumes that **big-AGI** is already installed on your system - note that the text-generation-webui IP
address must be accessible from the server running **big-AGI**.
1. Install [text-generation-webui](https://github.com/oobabooga/text-generation-webui#Installation)
- Download the one-click installer, extract it, and double-click on "start" - 10 min
- Then close it, as we need to change the startup flags
2. Enable the **openai extension**
- Edit `CMD_FLAGS.txt`
- Update the contents from `--chat` to: `--chat --listen --extensions openai`
3. Restart text-generation-webui
- Double-click on "start"
- You will see something like: `OpenAI compatible API ready at: OPENAI_API_BASE=http://0.0.0.0:5001/v1`
- The OpenAI API is now running on port 5001, on both localhost (127.0.0.1) and your local IP address
4. Load your first model
- Open the text-generation-webui at [127.0.0.1:7860](http://127.0.0.1:7860/)
- Switch to the **Model** tab
- Download for instance `TheBloke/Llama-2-7b-Chat-GPTQ:gptq-4bit-32g-actorder_True` - 4.3 GB
- Select the model once loaded
5. Configure big-AGI:
- Models > Add a model source of type: **Oobabooga**
- Enter the address: `http://127.0.0.1:5001`
- replace 127.0.0.1 with the IP of the machine if running remotely - make sure to use the **IP:Port** format
- Load the models
- the active model must be selected on the text-generation-webui, as it doesn't support model switching or parallel requests
- Select model & Chat
Experience the privacy and flexibility of local LLMs with `big-AGI` and `text-generation-webui`! :tada:
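Once configured, the integration boils down to OpenAI-style requests against the local endpoint. A sketch, assuming the default endpoint from step 3 (this is not big-AGI's internal code):

```typescript
// Build an OpenAI-compatible chat request for a local text-generation-webui
// instance. The active model is whatever is loaded in the web UI, so no
// model field is strictly needed here.
function buildLocalChatRequest(baseUrl: string, userText: string) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    body: {
      messages: [{ role: 'user', content: userText }],
      stream: false,
    },
  };
}

// Usage (requires the server from step 3 to be running):
// const { url, body } = buildLocalChatRequest('http://127.0.0.1:5001', 'Hello');
// const res = await fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```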
Binary image files changed (previews not shown).
-10
@@ -1,10 +0,0 @@
# Scratchpad
Nobody will see this, right?
## Modules
### LLMs
- [ ] How to show server-side-configured OpenAI? - shall it be an auto-conf'd source that can be added?
- Would we allow people to add the key? ideally that conf would be immutable
+59
@@ -0,0 +1,59 @@
/**
* Middleware to protect `big-AGI` with HTTP Basic Authentication
*
* For more information on how to deploy with HTTP Basic Authentication, see:
* - [deploy-authentication.md](docs/deploy-authentication.md)
*/
import type { NextRequest } from 'next/server';
import { NextResponse } from 'next/server';
// noinspection JSUnusedGlobalSymbols
export function middleware(request: NextRequest) {
// Validate deployment configuration
if (!process.env.HTTP_BASIC_AUTH_USERNAME || !process.env.HTTP_BASIC_AUTH_PASSWORD) {
console.warn('HTTP Basic Authentication is enabled but not configured');
return new Response('Unauthorized/Unconfigured', unauthResponse);
}
// Request client authentication if no credentials are provided
const authHeader = request.headers.get('authorization');
if (!authHeader?.startsWith('Basic '))
return new Response('Unauthorized', unauthResponse);
// Request authentication if credentials are invalid
const base64Credentials = authHeader.split(' ')[1];
const credentials = Buffer.from(base64Credentials, 'base64').toString('ascii');
const [username, password] = credentials.split(':');
if (
!username || !password ||
username !== process.env.HTTP_BASIC_AUTH_USERNAME ||
password !== process.env.HTTP_BASIC_AUTH_PASSWORD
)
return new Response('Unauthorized', unauthResponse);
return NextResponse.next();
}
// Response to send when authentication is required
const unauthResponse: ResponseInit = {
status: 401,
headers: {
'WWW-Authenticate': 'Basic realm="Secure big-AGI"',
},
};
export const config = {
matcher: [
// Include root
'/',
// Include pages
'/(call|index|news|personas|link)(.*)',
// Include API routes
'/api(.*)',
// Note: this excludes _next, /images etc..
],
};
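As a companion sketch to the middleware above: how a client encodes credentials, and how the middleware's decode step recovers them. These are illustrative helpers mirroring the Buffer-based logic above, not part of the repository:

```typescript
// Encode credentials the way browsers do for HTTP Basic Authentication,
// and decode them the way the middleware above does.
function encodeBasicAuth(username: string, password: string): string {
  return 'Basic ' + Buffer.from(`${username}:${password}`).toString('base64');
}

function decodeBasicAuth(authHeader: string): [string, string] {
  const base64Credentials = authHeader.split(' ')[1];
  const credentials = Buffer.from(base64Credentials, 'base64').toString('ascii');
  const [username, password] = credentials.split(':');
  return [username, password];
}
```

Note that, like the middleware, the `split(':')` step truncates passwords that themselves contain a colon; RFC 7617 only reserves the colon inside the user-id.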
-30
@@ -1,30 +0,0 @@
/** @type {import('next').NextConfig} */
let nextConfig = {
reactStrictMode: true,
env: {
// defaults to TRUE, unless API Keys are set at build time; this flag is used by the UI
HAS_SERVER_KEYS_GOOGLE_CSE: !!process.env.GOOGLE_CLOUD_API_KEY && !!process.env.GOOGLE_CSE_ID,
HAS_SERVER_KEY_ANTHROPIC: !!process.env.ANTHROPIC_API_KEY,
HAS_SERVER_KEY_ELEVENLABS: !!process.env.ELEVENLABS_API_KEY,
HAS_SERVER_KEY_OPENAI: !!process.env.OPENAI_API_KEY,
HAS_SERVER_KEY_PRODIA: !!process.env.PRODIA_API_KEY,
},
webpack(config, { isServer, dev }) {
// @mui/joy: anything material gets redirected to Joy
config.resolve.alias['@mui/material'] = '@mui/joy';
// @dqbd/tiktoken: enable asynchronous WebAssembly
config.experiments = {
asyncWebAssembly: true,
layers: true,
};
return config;
},
};
// conditionally enable the nextjs bundle analyzer
if (process.env.ANALYZE_BUNDLE)
nextConfig = require('@next/bundle-analyzer')()(nextConfig);
module.exports = nextConfig;
+41
@@ -0,0 +1,41 @@
/** @type {import('next').NextConfig} */
let nextConfig = {
reactStrictMode: true,
// Note: disabled to check whether the project becomes slower with this
// modularizeImports: {
// '@mui/icons-material': {
// transform: '@mui/icons-material/{{member}}',
// },
// },
// [puppeteer] https://github.com/puppeteer/puppeteer/issues/11052
experimental: {
serverComponentsExternalPackages: ['puppeteer-core'],
},
webpack: (config, _options) => {
// @mui/joy: anything material gets redirected to Joy
config.resolve.alias['@mui/material'] = '@mui/joy';
// @dqbd/tiktoken: enable asynchronous WebAssembly
config.experiments = {
asyncWebAssembly: true,
layers: true,
};
return config;
},
};
// Validate environment variables, if set at build time. Will be actually read and used at runtime.
// This is the reason both this file and the server/env.mjs file have this extension.
await import('./src/server/env.mjs');
// conditionally enable the nextjs bundle analyzer
if (process.env.ANALYZE_BUNDLE) {
const { default: withBundleAnalyzer } = await import('@next/bundle-analyzer');
nextConfig = withBundleAnalyzer({ openAnalyzer: true })(nextConfig);
}
export default nextConfig;
+1654 -1110
File diff suppressed because it is too large.
+49 -35
@@ -1,15 +1,16 @@
{
"name": "big-agi",
"version": "1.3.5",
"version": "1.7.0",
"private": true,
"engines": {
"node": "^18.0.0"
},
"scripts": {
"dev": "next dev",
"build": "next build",
"start": "next start",
"lint": "next lint"
"lint": "next lint",
"env:pull": "npx vercel env pull .env.development.local",
"postinstall": "prisma generate",
"db:push": "prisma db push",
"db:studio": "prisma studio"
},
"dependencies": {
"@dqbd/tiktoken": "^1.0.7",
@@ -17,41 +18,54 @@
"@emotion/react": "^11.11.1",
"@emotion/server": "^11.11.0",
"@emotion/styled": "^11.11.0",
"@mui/icons-material": "^5.14.3",
"@mui/joy": "^5.0.0-beta.2",
"@next/bundle-analyzer": "^13.4.16",
"@tanstack/react-query": "4.32.6",
"@trpc/client": "^10.37.1",
"@trpc/next": "^10.37.1",
"@trpc/react-query": "^10.37.1",
"@trpc/server": "^10.37.1",
"@vercel/analytics": "^1.0.2",
"browser-fs-access": "^0.34.1",
"eventsource-parser": "^1.0.0",
"next": "^13.4.16",
"pdfjs-dist": "3.9.179",
"@mui/icons-material": "^5.14.18",
"@mui/joy": "^5.0.0-beta.15",
"@next/bundle-analyzer": "^14.0.3",
"@prisma/client": "^5.6.0",
"@sanity/diff-match-patch": "^3.1.1",
"@t3-oss/env-nextjs": "^0.7.1",
"@tanstack/react-query": "^4.36.1",
"@trpc/client": "^10.44.1",
"@trpc/next": "^10.44.1",
"@trpc/react-query": "^10.44.1",
"@trpc/server": "^10.44.1",
"@vercel/analytics": "^1.1.1",
"browser-fs-access": "^0.35.0",
"eventsource-parser": "^1.1.1",
"idb-keyval": "^6.2.1",
"next": "^14.0.3",
"pdfjs-dist": "4.0.189",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-markdown": "^8.0.7",
"remark-gfm": "^3.0.1",
"superjson": "^1.13.1",
"tesseract.js": "^4.1.1",
"uuid": "^9.0.0",
"zod": "3.21.4",
"zustand": "4.3.9"
"react-katex": "^3.0.1",
"react-markdown": "^9.0.1",
"react-timeago": "^7.2.0",
"remark-gfm": "^4.0.0",
"superjson": "^2.2.1",
"tesseract.js": "^5.0.3",
"uuid": "^9.0.1",
"zod": "^3.22.4",
"zustand": "~4.3.9"
},
"devDependencies": {
"@types/node": "^20.4.10",
"@types/plantuml-encoder": "^1.4.0",
"@types/prismjs": "^1.26.0",
"@types/react": "^18.2.20",
"@types/react-dom": "^18.2.7",
"@types/uuid": "^9.0.2",
"eslint": "^8.47.0",
"eslint-config-next": "^13.4.16",
"prettier": "^3.0.1",
"typescript": "^5.1.6"
"@cloudflare/puppeteer": "^0.0.5",
"@types/node": "^20.10.0",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.3",
"@types/react": "^18.2.38",
"@types/react-dom": "^18.2.17",
"@types/react-katex": "^3.0.3",
"@types/react-timeago": "^4.1.6",
"@types/uuid": "^9.0.7",
"eslint": "^8.54.0",
"eslint-config-next": "^14.0.3",
"prettier": "^3.1.0",
"prisma": "^5.6.0",
"typescript": "^5.3.2"
},
"engines": {
"node": "^20.0.0 || ^18.0.0"
}
}
+29 -42
@@ -1,55 +1,42 @@
import * as React from 'react';
import Head from 'next/head';
import { MyAppProps } from 'next/app';
import { Analytics as VercelAnalytics } from '@vercel/analytics/react';
import { AppProps } from 'next/app';
import { CacheProvider, EmotionCache } from '@emotion/react';
import { CssBaseline, CssVarsProvider } from '@mui/joy';
import { QueryClient, QueryClientProvider } from '@tanstack/react-query';
import { apiQuery } from '~/modules/trpc/trpc.client';
import { Brand } from '~/common/app.config';
import { apiQuery } from '~/common/util/trpc.client';
import '~/common/styles/CodePrism.css'
import 'katex/dist/katex.min.css';
import '~/common/styles/CodePrism.css';
import '~/common/styles/GithubMarkdown.css';
import { Brand } from '~/common/brand';
import { createEmotionCache, theme } from '~/common/theme';
import { ProviderBackend } from '~/common/state/ProviderBackend';
import { ProviderSnacks } from '~/common/state/ProviderSnacks';
import { ProviderTRPCQueryClient } from '~/common/state/ProviderTRPCQueryClient';
import { ProviderTheming } from '~/common/state/ProviderTheming';
// Client-side cache, shared for the whole session of the user in the browser.
const clientSideEmotionCache = createEmotionCache();
const MyApp = ({ Component, emotionCache, pageProps }: MyAppProps) =>
<>
export interface MyAppProps extends AppProps {
emotionCache?: EmotionCache;
}
<Head>
<title>{Brand.Title.Common}</title>
<meta name='viewport' content='minimum-scale=1, initial-scale=1, width=device-width, shrink-to-fit=no' />
</Head>
<ProviderTheming emotionCache={emotionCache}>
<ProviderTRPCQueryClient>
<ProviderSnacks>
<ProviderBackend>
<Component {...pageProps} />
</ProviderBackend>
</ProviderSnacks>
</ProviderTRPCQueryClient>
</ProviderTheming>
function MyApp({ Component, emotionCache = clientSideEmotionCache, pageProps }: MyAppProps) {
const [queryClient] = React.useState(() => new QueryClient({
defaultOptions: {
queries: {
retry: false,
},
mutations: {
retry: false,
},
},
}));
return <>
<CacheProvider value={emotionCache}>
<Head>
<title>{Brand.Title.Common}</title>
<meta name='viewport' content='minimum-scale=1, initial-scale=1, width=device-width, shrink-to-fit=no' />
</Head>
{/* React Query provider */}
<QueryClientProvider client={queryClient}>
<CssVarsProvider defaultMode='light' theme={theme}>
{/* CssBaseline kickstarts an elegant, consistent, and simple baseline to build upon. */}
<CssBaseline />
<Component {...pageProps} />
</CssVarsProvider>
</QueryClientProvider>
</CacheProvider>
<VercelAnalytics debug={false} />
</>;
}
// enables the react-query api invocation
</>;
// enables the React Query API invocation
export default apiQuery.withTRPC(MyApp);
+3 -5
@@ -1,13 +1,11 @@
import * as React from 'react';
import { AppType } from 'next/app';
import { AppType, MyAppProps } from 'next/app';
import { default as Document, DocumentContext, DocumentProps, Head, Html, Main, NextScript } from 'next/document';
import createEmotionServer from '@emotion/server/create-instance';
import { getInitColorSchemeScript } from '@mui/joy/styles';
import { Brand } from '~/common/brand';
import { bodyFontClassName, createEmotionCache } from '~/common/theme';
import { MyAppProps } from './_app';
import { Brand } from '~/common/app.config';
import { bodyFontClassName, createEmotionCache } from '~/common/app.theme';
interface MyDocumentProps extends DocumentProps {
-39
@@ -1,39 +0,0 @@
import { NextRequest, NextResponse } from 'next/server';
import { elevenlabsAccess, elevenlabsVoiceId, ElevenlabsWire, speechInputSchema } from '~/modules/elevenlabs/elevenlabs.router';
/* NOTE: Why does this file even exist?
This file is a workaround for a limitation in tRPC; it does not support ArrayBuffer responses,
and that would force us to use base64 encoding for the audio data, which would be a waste of
bandwidth. So instead, we use this file to make the request to ElevenLabs, and then return the
response as an ArrayBuffer. Unfortunately this means duplicating the code in the server-side
and client-side vs. the tRPC implementation. So at least we recycle the input structures.
*/
export default async function handler(req: NextRequest) {
try {
// construct the upstream request
const { elevenKey, text, voiceId, nonEnglish } = speechInputSchema.parse(await req.json());
const { headers, url } = elevenlabsAccess(elevenKey, `/v1/text-to-speech/${elevenlabsVoiceId(voiceId)}`);
const body: ElevenlabsWire.TTSRequest = {
text: text,
...(nonEnglish && { model_id: 'eleven_multilingual_v1' }),
};
// elevenlabs POST
const response = await fetch(url, { headers, method: 'POST', body: JSON.stringify(body) });
const audioArrayBuffer = await response.arrayBuffer();
// return the audio
return new NextResponse(audioArrayBuffer, { status: 200, headers: { 'Content-Type': 'audio/mpeg' } });
} catch (error) {
console.error('api/elevenlabs/speech error:', error);
return new NextResponse(JSON.stringify(`textToSpeech error: ${error?.toString() || 'Network issue'}`), { status: 500 });
}
}
// noinspection JSUnusedGlobalSymbols
export const runtime = 'edge';
-207
@@ -1,207 +0,0 @@
import { NextRequest, NextResponse } from 'next/server';
import { createParser as createEventsourceParser, EventSourceParser, ParsedEvent, ReconnectInterval } from 'eventsource-parser';
import { AnthropicWire } from '~/modules/llms/anthropic/anthropic.types';
import { OpenAI } from '~/modules/llms/openai/openai.types';
import { anthropicAccess, anthropicCompletionRequest } from '~/modules/llms/anthropic/anthropic.router';
import { chatStreamSchema, openAIAccess, openAIChatCompletionPayload } from '~/modules/llms/openai/openai.router';
/**
* Vendor stream parsers
* - The vendor can decide to terminate the connection (close: true), transmitting anything in 'text' before doing so
* - The vendor can also throw from this function, which will error and terminate the connection
*/
type AIStreamParser = (data: string) => { text: string, close: boolean };
// The peculiarity of our parser is the injection of a JSON structure at the beginning of the stream, to
// communicate parameters before the text starts flowing to the client.
function parseOpenAIStream(): AIStreamParser {
let hasBegun = false;
let hasWarned = false;
return data => {
const json: OpenAI.Wire.ChatCompletion.ResponseStreamingChunk = JSON.parse(data);
// an upstream error will be handled gracefully and transmitted as text (throw to transmit as 'error')
if (json.error)
return { text: `[OpenAI Issue] ${json.error.message || json.error}`, close: true };
if (json.choices.length !== 1)
throw new Error(`[OpenAI Issue] Expected 1 completion, got ${json.choices.length}`);
const index = json.choices[0].index;
if (index !== 0 && index !== undefined /* LocalAI hack/workaround until https://github.com/go-skynet/LocalAI/issues/788 */)
throw new Error(`[OpenAI Issue] Expected completion index 0, got ${index}`);
let text = json.choices[0].delta?.content /*|| json.choices[0]?.text*/ || '';
// hack: prepend the model name to the first packet
if (!hasBegun) {
hasBegun = true;
const firstPacket: OpenAI.API.Chat.StreamingFirstResponse = {
model: json.model,
};
text = JSON.stringify(firstPacket) + text;
}
// if there's a warning, log it once
if (json.warning && !hasWarned) {
hasWarned = true;
console.log('/api/llms/stream: OpenAI stream warning:', json.warning);
}
// workaround: LocalAI doesn't send the [DONE] event, but similarly to OpenAI, it sends a "finish_reason" delta update
const close = !!json.choices[0].finish_reason;
return { text, close };
};
}
// Anthropic event stream parser
function parseAnthropicStream(): AIStreamParser {
let hasBegun = false;
return data => {
const json: AnthropicWire.Complete.Response = JSON.parse(data);
let text = json.completion;
// hack: prepend the model name to the first packet
if (!hasBegun) {
hasBegun = true;
const firstPacket: OpenAI.API.Chat.StreamingFirstResponse = {
model: json.model,
};
text = JSON.stringify(firstPacket) + text;
}
return { text, close: false };
};
}
/**
* Creates a TransformStream that parses events from an EventSource stream using a custom parser.
* @returns {TransformStream<Uint8Array, string>} TransformStream parsing events.
*/
export function createEventStreamTransformer(vendorTextParser: AIStreamParser): TransformStream<Uint8Array, Uint8Array> {
const textDecoder = new TextDecoder();
const textEncoder = new TextEncoder();
let eventSourceParser: EventSourceParser;
return new TransformStream({
start: async (controller): Promise<void> => {
eventSourceParser = createEventsourceParser(
(event: ParsedEvent | ReconnectInterval) => {
// ignore 'reconnect-interval' and events with no data
if (event.type !== 'event' || !('data' in event))
return;
// event stream termination, close our transformed stream
if (event.data === '[DONE]') {
controller.terminate();
return;
}
try {
const { text, close } = vendorTextParser(event.data);
if (text)
controller.enqueue(textEncoder.encode(text));
if (close)
controller.terminate();
} catch (error: any) {
// console.log(`/api/llms/stream: parse issue: ${error?.message || error}`);
controller.enqueue(textEncoder.encode(`[Stream Issue] ${error?.message || error}`));
controller.terminate();
}
},
);
},
// stream=true is set because the data is not guaranteed to be final and un-chunked
transform: (chunk: Uint8Array) => {
eventSourceParser.feed(textDecoder.decode(chunk, { stream: true }));
},
});
}
async function throwResponseNotOk(response: Response) {
if (!response.ok) {
const errorPayload: object | null = await response.json().catch(() => null);
throw new Error(`${response.status} · ${response.statusText}${errorPayload ? ' · ' + JSON.stringify(errorPayload) : ''}`);
}
}
function createEmptyReadableStream(): ReadableStream {
return new ReadableStream({
start: (controller) => controller.close(),
});
}
export default async function handler(req: NextRequest): Promise<Response> {
// inputs - reuse the tRPC schema
const { vendorId, access, model, history } = chatStreamSchema.parse(await req.json());
// begin event streaming from the OpenAI API
let upstreamResponse: Response;
let vendorStreamParser: AIStreamParser;
try {
// prepare the API request data
let headersUrl: { headers: HeadersInit, url: string };
let body: object;
switch (vendorId) {
case 'anthropic':
headersUrl = anthropicAccess(access as any, '/v1/complete');
body = anthropicCompletionRequest(model, history, true);
vendorStreamParser = parseAnthropicStream();
break;
case 'openai':
headersUrl = openAIAccess(access as any, '/v1/chat/completions');
body = openAIChatCompletionPayload(model, history, null, 1, true);
vendorStreamParser = parseOpenAIStream();
break;
}
// POST to our API route
upstreamResponse = await fetch(headersUrl.url, {
method: 'POST',
headers: headersUrl.headers,
body: JSON.stringify(body),
});
await throwResponseNotOk(upstreamResponse);
} catch (error: any) {
const fetchOrVendorError = (error?.message || typeof error === 'string' ? error : JSON.stringify(error)) + (error?.cause ? ' · ' + error.cause : '');
console.log(`/api/llms/stream: fetch issue: ${fetchOrVendorError}`);
return new NextResponse('[OpenAI Issue] ' + fetchOrVendorError, { status: 500 });
}
/* The following code is heavily inspired by the Vercel AI SDK, but simplified to our needs and in full control.
* This replaces the former (custom) implementation that used to return a ReadableStream directly, and upon start,
* it was blindly fetching the upstream response and piping it to the client.
*
* We now use backpressure, as explained on: https://sdk.vercel.ai/docs/concepts/backpressure-and-cancellation
*
* NOTE: we have not benchmarked to see if there is performance impact by using this approach - we do want to have
* a 'healthy' level of inventory (i.e., pre-buffering) on the pipe to the client.
*/
const chatResponseStream = (upstreamResponse.body || createEmptyReadableStream())
.pipeThrough(createEventStreamTransformer(vendorStreamParser));
return new NextResponse(chatResponseStream, {
status: 200,
headers: {
'Content-Type': 'text/event-stream; charset=utf-8',
},
});
}
// noinspection JSUnusedGlobalSymbols
export const runtime = 'edge';
-38
@@ -1,38 +0,0 @@
import { NextRequest } from 'next/server';
import { fetchRequestHandler } from '@trpc/server/adapters/fetch';
import { appRouter } from '~/modules/trpc/trpc.router';
import { createTRPCContext } from '~/modules/trpc/trpc.server';
/*
// NextJS (traditional, non-edge) API handler
import { createNextApiHandler } from '@trpc/server/adapters/next';
import { createTRPCContext } from '~/modules/trpc/trpc.server';
export default createNextApiHandler({
router: appRouter,
createContext: createTRPCContext,
onError:
process.env.NODE_ENV === 'development'
? ({ path, error }) => console.error(`❌ tRPC failed on ${path ?? '<no-path>'}:`, error)
: undefined,
});
*/
export default async function handler(req: NextRequest) {
return fetchRequestHandler({
endpoint: '/api/trpc',
router: appRouter,
req,
createContext: createTRPCContext,
onError:
process.env.NODE_ENV === 'development'
? ({ path, error }) => console.error(`❌ tRPC failed on ${path ?? '<no-path>'}:`, error)
: undefined,
});
}
// noinspection JSUnusedGlobalSymbols
export const runtime = 'edge';
+14
@@ -0,0 +1,14 @@
import * as React from 'react';
import { AppCall } from '../src/apps/call/AppCall';
import { AppLayout } from '~/common/layout/AppLayout';
export default function CallPage() {
return (
<AppLayout>
<AppCall />
</AppLayout>
);
}
+1 -1
@@ -6,7 +6,7 @@ import { useShowNewsOnUpdate } from '../src/apps/news/news.hooks';
import { AppLayout } from '~/common/layout/AppLayout';
export default function HomePage() {
export default function ChatPage() {
// show the News page on updates
useShowNewsOnUpdate();
-14
@@ -1,14 +0,0 @@
import * as React from 'react';
import AppLabs from '../src/apps/labs/AppLabs';
import { AppLayout } from '~/common/layout/AppLayout';
export default function LabsPage() {
return (
<AppLayout suspendAutoModelsSetup>
<AppLabs />
</AppLayout>
);
}
+18
@@ -0,0 +1,18 @@
import * as React from 'react';
import { useRouter } from 'next/router';
import { AppChatLink } from '../../../src/apps/link/AppChatLink';
import { AppLayout } from '~/common/layout/AppLayout';
export default function ChatLinkPage() {
const { query } = useRouter();
const chatLinkId = query?.chatLinkId as string ?? '';
return (
<AppLayout suspendAutoModelsSetup>
<AppChatLink linkId={chatLinkId} />
</AppLayout>
);
}
+141
@@ -0,0 +1,141 @@
import * as React from 'react';
import { useRouter } from 'next/router';
import { Alert, Box, Button, Typography } from '@mui/joy';
import ArrowBackIcon from '@mui/icons-material/ArrowBack';
import { setComposerStartupText } from '../../src/apps/chat/components/composer/store-composer';
import { callBrowseFetchPage } from '~/modules/browse/browse.client';
import { AppLayout } from '~/common/layout/AppLayout';
import { LogoProgress } from '~/common/components/LogoProgress';
import { asValidURL } from '~/common/util/urlUtils';
import { navigateToIndex } from '~/common/app.routes';
/**
* This page will be invoked on mobile when sharing Text/URLs/Files from other APPs
* See the /public/manifest.json for how this is configured. Parameters:
* - text: the text to share
* - url: the URL to share
* - if the URL is a valid URL, it will be downloaded and the content will be shared
* - if the URL is not a valid URL, it will be shared as text
* - title: the title of the shared content
*/
function AppShareTarget() {
// state
const [errorMessage, setErrorMessage] = React.useState<string | null>(null);
const [intentText, setIntentText] = React.useState<string | null>(null);
const [intentURL, setIntentURL] = React.useState<string | null>(null);
const [isDownloading, setIsDownloading] = React.useState(false);
// external state
const { query } = useRouter();
const queueComposerTextAndLaunchApp = React.useCallback((text: string) => {
setComposerStartupText(text);
void navigateToIndex(true);
}, []);
// Detect the share Intent from the query
React.useEffect(() => {
// skip when query is not parsed yet
if (!Object.keys(query).length)
return;
// single item from the query
let queryTextItem: string[] | string | null = query.url || query.text || null;
if (Array.isArray(queryTextItem))
queryTextItem = queryTextItem[0];
// check if the item is a URL
const url = asValidURL(queryTextItem);
if (url)
setIntentURL(url);
else if (queryTextItem)
setIntentText(queryTextItem);
else
setErrorMessage('No text or url. Received: ' + JSON.stringify(query));
}, [query.url, query.text, query]);
// Text -> Composer
React.useEffect(() => {
if (intentText)
queueComposerTextAndLaunchApp(intentText);
}, [intentText, queueComposerTextAndLaunchApp]);
// URL -> download -> Composer
React.useEffect(() => {
if (intentURL) {
setIsDownloading(true);
callBrowseFetchPage(intentURL)
.then(page => {
if (page.stopReason !== 'error')
queueComposerTextAndLaunchApp('\n\n```' + intentURL + '\n' + page.content + '\n```\n');
else
                setErrorMessage('Could not read any data' + (page.error ? ': ' + page.error : ''));
})
.catch(error => setErrorMessage(error?.message || error || 'Unknown error'))
.finally(() => setIsDownloading(false));
}
}, [intentURL, queueComposerTextAndLaunchApp]);
return (
<Box sx={{
backgroundColor: 'background.level2',
display: 'flex', flexDirection: 'column', alignItems: 'center', justifyContent: 'center',
flexGrow: 1,
}}>
{/* Logo with Circular Progress */}
<LogoProgress showProgress={isDownloading} />
{/* Title */}
<Typography level='title-lg' sx={{ mt: 2, mb: 1 }}>
{isDownloading ? 'Loading...' : errorMessage ? '' : intentURL ? 'Done' : 'Receiving...'}
</Typography>
{/* Possible Error */}
{errorMessage && <>
<Alert variant='soft' color='danger' sx={{ my: 1 }}>
<Typography>{errorMessage}</Typography>
</Alert>
<Button
variant='solid' color='danger'
onClick={() => navigateToIndex()}
endDecorator={<ArrowBackIcon />}
sx={{ mt: 2 }}
>
Cancel
</Button>
</>}
{/* URL under analysis */}
<Typography level='body-xs'>
{intentURL}
</Typography>
</Box>
);
}
/**
 * This page is invoked on mobile when sharing text, URLs, or files from other apps
* Example URL: https://localhost:3000/link/share_target?title=This+Title&text=https%3A%2F%2Fexample.com%2Fapp%2Fpath
*/
export default function LaunchPage() {
return (
<AppLayout>
<AppShareTarget />
</AppLayout>
);
}
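The share-intent triage in the page above (prefer `?url=` over `?text=`, take the first element of an array-valued param, then validate as a URL) can be sketched as a pure function. This is a hypothetical helper, not the app's code; `asValidURL` here is a stand-in probe using the `URL` constructor rather than the real `~/common/util/urlUtils` implementation:

```typescript
// Hypothetical stand-in for asValidURL: returns the normalized href, or null if not a URL.
function asValidURL(text: string | null): string | null {
  if (!text) return null;
  try {
    return new URL(text).href;
  } catch {
    return null;
  }
}

type ShareIntent =
  | { kind: 'url'; url: string }
  | { kind: 'text'; text: string }
  | { kind: 'error'; message: string };

// Mirrors the effect above: pick a single item from the query, URL takes priority over text.
function triageShareQuery(query: Record<string, string | string[] | undefined>): ShareIntent {
  let item: string | string[] | null = query.url || query.text || null;
  if (Array.isArray(item))
    item = item[0];
  const url = asValidURL(item);
  if (url)
    return { kind: 'url', url };
  if (item)
    return { kind: 'text', text: item };
  return { kind: 'error', message: 'No text or url. Received: ' + JSON.stringify(query) };
}
```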
+1 -1
@@ -1,6 +1,6 @@
import * as React from 'react';
-import AppNews from '../src/apps/news/AppNews';
+import { AppNews } from '../src/apps/news/AppNews';
import { useMarkNewsAsSeen } from '../src/apps/news/news.hooks';
import { AppLayout } from '~/common/layout/AppLayout';
+1 -1
@@ -5,7 +5,7 @@ import { AppPersonas } from '../src/apps/personas/AppPersonas';
import { AppLayout } from '~/common/layout/AppLayout';
-export default function HomePage() {
+export default function PersonasPage() {
return (
<AppLayout>
<AppPersonas />
-144
@@ -1,144 +0,0 @@
import * as React from 'react';
import Image from 'next/image';
import { useRouter } from 'next/router';
import { Alert, Box, Button, CircularProgress, Typography } from '@mui/joy';
import ArrowBackIcon from '@mui/icons-material/ArrowBack';
import { useComposerStore } from '../src/apps/chat/components/composer/store-composer';
// import { callBrowseFetchSinglePage } from '~/modules/browse/browse.client';
import { AppLayout } from '~/common/layout/AppLayout';
import { asValidURL } from '~/common/util/urlUtils';
const LogoProgress = (props: { showProgress: boolean }) =>
<Box sx={{
width: 64,
height: 64,
position: 'relative',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
}}>
<Box sx={{ position: 'absolute', mt: 0.75 }}>
<Image src='/icons/favicon-32x32.png' alt='App Logo' width={32} height={32} />
</Box>
{props.showProgress && <CircularProgress size='lg' sx={{ position: 'absolute' }} />}
</Box>;
/**
* This page will be invoked on mobile when sharing Text/URLs/Files from other APPs
* Example URL: https://get.big-agi.com/share?title=This+Title&text=https%3A%2F%2Fexample.com%2Fapp%2Fpath
*/
export default function SharePage() {
// state
const [errorMessage, setErrorMessage] = React.useState<string | null>(null);
const [intentText, setIntentText] = React.useState<string | null>(null);
const [intentURL, setIntentURL] = React.useState<string | null>(null);
const [isDownloading, setIsDownloading] = React.useState(false);
// external state
const { query, push: routerPush, replace: routerReplace } = useRouter();
const queueComposerTextAndLaunchApp = React.useCallback((text: string) => {
useComposerStore.getState().setStartupText(text);
routerReplace('/').then(() => null);
}, [routerReplace]);
// Detect the share Intent from the query
React.useEffect(() => {
// skip when query is not parsed yet
if (!Object.keys(query).length)
return;
// single item from the query
let queryTextItem: string[] | string | null = query.url || query.text || null;
if (Array.isArray(queryTextItem))
queryTextItem = queryTextItem[0];
// check if the item is a URL
const url = asValidURL(queryTextItem);
if (url)
setIntentURL(url);
else if (queryTextItem)
setIntentText(queryTextItem);
else
setErrorMessage('No text or url. Received: ' + JSON.stringify(query));
}, [query.url, query.text, query]);
// Text -> Composer
React.useEffect(() => {
if (intentText)
queueComposerTextAndLaunchApp(intentText);
}, [intentText, queueComposerTextAndLaunchApp]);
// URL -> download -> Composer
React.useEffect(() => {
if (intentURL) {
setIsDownloading(true);
// TEMP: until the Browse module is ready, just use the URL, verbatim
queueComposerTextAndLaunchApp(intentURL);
setIsDownloading(false);
/*callBrowseFetchSinglePage(intentURL)
.then(pageContent => {
if (pageContent)
queueComposerTextAndLaunchApp('\n\n```' + intentURL + '\n' + pageContent + '\n```\n');
else
setErrorMessage('Could not read any data');
})
.catch(error => setErrorMessage(error?.message || error || 'Unknown error'))
.finally(() => setIsDownloading(false));*/
}
}, [intentURL, queueComposerTextAndLaunchApp]);
return (
<AppLayout suspendAutoModelsSetup>
<Box sx={{
backgroundColor: 'background.level2',
display: 'flex', flexDirection: 'column', alignItems: 'center', justifyContent: 'center',
flexGrow: 1,
}}>
{/* Logo with Circular Progress */}
<LogoProgress showProgress={isDownloading} />
{/* Title */}
<Typography level='title-lg' sx={{ mt: 2, mb: 1 }}>
{isDownloading ? 'Loading...' : errorMessage ? '' : intentURL ? 'Done' : 'Receiving...'}
</Typography>
{/* Possible Error */}
{errorMessage && <>
<Alert variant='soft' color='danger' sx={{ my: 1 }}>
<Typography>{errorMessage}</Typography>
</Alert>
<Button
variant='solid' color='danger'
onClick={() => routerPush('/')}
endDecorator={<ArrowBackIcon />}
sx={{ mt: 2 }}
>
Cancel
</Button>
</>}
{/* URL under analysis */}
<Typography level='body-xs'>
{intentURL}
</Typography>
</Box>
</AppLayout>
);
}
+63
@@ -0,0 +1,63 @@
// Prisma is the ORM for server-side (API) access to the database
//
// This file defines the schema for the database.
// - make sure to run 'prisma generate' after making changes to this file
// - make sure to run 'prisma db push' to sync the remote database with the schema
//
// Database is optional: when the environment variables are not set, the database is not used at all,
// and the storage of data in Big-AGI is limited to client-side (browser) storage.
//
// The database is used for:
// - the 'sharing' function, to let users share the chats with each other
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("POSTGRES_PRISMA_URL") // uses connection pooling
directUrl = env("POSTGRES_URL_NON_POOLING") // uses a direct connection
}
//
// Storage of Linked Data
//
model LinkStorage {
id String @id @default(uuid())
ownerId String
visibility LinkStorageVisibility
dataType LinkStorageDataType
dataTitle String?
dataSize Int
data Json
upVotes Int @default(0)
downVotes Int @default(0)
flagsCount Int @default(0)
readCount Int @default(0)
writeCount Int @default(1)
// time-based expiration
expiresAt DateTime?
// manual deletion
deletionKey String
isDeleted Boolean @default(false)
deletedAt DateTime?
createdAt DateTime @default(now())
updatedAt DateTime @updatedAt
}
enum LinkStorageVisibility {
PUBLIC
UNLISTED
PRIVATE
}
enum LinkStorageDataType {
CHAT_V1
}
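Rows in the `LinkStorage` model above can become unreadable in two ways: manual deletion (`isDeleted`, guarded by `deletionKey`) and time-based expiration (`expiresAt`). A read guard over those two lifecycle fields might look like the following sketch (a hypothetical helper, not part of the schema or its generated client):

```typescript
// Minimal row shape covering only the two lifecycle fields of LinkStorage.
interface LinkRow {
  isDeleted: boolean;
  expiresAt: Date | null;
}

// A row is readable if it was not manually deleted and has not expired.
function isLinkReadable(row: LinkRow, now: Date = new Date()): boolean {
  if (row.isDeleted)
    return false;
  if (row.expiresAt !== null && row.expiresAt.getTime() <= now.getTime())
    return false;
  return true;
}
```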
+1 -1
@@ -25,7 +25,7 @@
}
],
"share_target": {
"action": "/share",
"action": "/link/share_target",
"method": "GET",
"enctype": "application/x-www-form-urlencoded",
"params": {
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
+1 -2
File diff suppressed because one or more lines are too long
+48
@@ -0,0 +1,48 @@
import * as React from 'react';
import { useRouter } from 'next/router';
import { Container, Sheet } from '@mui/joy';
import { AppCallQueryParams } from '~/common/app.routes';
import { InlineError } from '~/common/components/InlineError';
import { CallUI } from './CallUI';
import { CallWizard } from './CallWizard';
export function AppCall() {
// external state
const { query } = useRouter();
// derived state
const { conversationId, personaId } = query as any as AppCallQueryParams;
const validInput = !!conversationId && !!personaId;
return (
<Sheet variant='solid' color='neutral' invertedColors sx={{
display: 'flex', flexDirection: 'column', justifyContent: 'center',
flexGrow: 1,
overflowY: 'auto',
minHeight: 96,
}}>
<Container maxWidth='sm' sx={{
display: 'flex', flexDirection: 'column',
alignItems: 'center',
minHeight: '80dvh', justifyContent: 'space-evenly',
gap: { xs: 2, md: 4 },
}}>
{!validInput && <InlineError error={`Something went wrong. ${JSON.stringify(query)}`} />}
{validInput && (
<CallWizard conversationId={conversationId}>
<CallUI conversationId={conversationId} personaId={personaId} />
</CallWizard>
)}
</Container>
</Sheet>
);
}
+392
@@ -0,0 +1,392 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
import { useRouter } from 'next/router';
import { Box, Card, ListItemDecorator, MenuItem, Switch, Typography } from '@mui/joy';
import ArrowBackIcon from '@mui/icons-material/ArrowBack';
import CallEndIcon from '@mui/icons-material/CallEnd';
import CallIcon from '@mui/icons-material/Call';
import ChatOutlinedIcon from '@mui/icons-material/ChatOutlined';
import MicIcon from '@mui/icons-material/Mic';
import MicNoneIcon from '@mui/icons-material/MicNone';
import RecordVoiceOverIcon from '@mui/icons-material/RecordVoiceOver';
import { useChatLLMDropdown } from '../chat/components/applayout/useLLMDropdown';
import { EXPERIMENTAL_speakTextStream } from '~/modules/elevenlabs/elevenlabs.client';
import { SystemPurposeId, SystemPurposes } from '../../data';
import { VChatMessageIn } from '~/modules/llms/transports/chatGenerate';
import { streamChat } from '~/modules/llms/transports/streamChat';
import { useElevenLabsVoiceDropdown } from '~/modules/elevenlabs/useElevenLabsVoiceDropdown';
import { Link } from '~/common/components/Link';
import { SpeechResult, useSpeechRecognition } from '~/common/components/useSpeechRecognition';
import { conversationTitle, createDMessage, DMessage, useChatStore } from '~/common/state/store-chats';
import { playSoundUrl, usePlaySoundUrl } from '~/common/util/audioUtils';
import { useLayoutPluggable } from '~/common/layout/store-applayout';
import { CallAvatar } from './components/CallAvatar';
import { CallButton } from './components/CallButton';
import { CallMessage } from './components/CallMessage';
import { CallStatus } from './components/CallStatus';
function CallMenuItems(props: {
pushToTalk: boolean,
setPushToTalk: (pushToTalk: boolean) => void,
override: boolean,
setOverride: (overridePersonaVoice: boolean) => void,
}) {
// external state
const { voicesDropdown } = useElevenLabsVoiceDropdown(false, !props.override);
const handlePushToTalkToggle = () => props.setPushToTalk(!props.pushToTalk);
const handleChangeVoiceToggle = () => props.setOverride(!props.override);
return <>
<MenuItem onClick={handlePushToTalkToggle}>
<ListItemDecorator>{props.pushToTalk ? <MicNoneIcon /> : <MicIcon />}</ListItemDecorator>
Push to talk
<Switch checked={props.pushToTalk} onChange={handlePushToTalkToggle} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem onClick={handleChangeVoiceToggle}>
<ListItemDecorator><RecordVoiceOverIcon /></ListItemDecorator>
Change Voice
<Switch checked={props.override} onChange={handleChangeVoiceToggle} sx={{ ml: 'auto' }} />
</MenuItem>
<MenuItem>
<ListItemDecorator>{' '}</ListItemDecorator>
{voicesDropdown}
</MenuItem>
<MenuItem component={Link} href='https://github.com/enricoros/big-agi/issues/175' target='_blank'>
<ListItemDecorator><ChatOutlinedIcon /></ListItemDecorator>
Voice Calls Feedback
</MenuItem>
</>;
}
export function CallUI(props: {
conversationId: string,
personaId: string,
}) {
// state
const [avatarClickCount, setAvatarClickCount] = React.useState<number>(0);
// const [micMuted, setMicMuted] = React.useState(false);
const [callElapsedTime, setCallElapsedTime] = React.useState<string>('00:00');
const [callMessages, setCallMessages] = React.useState<DMessage[]>([]);
const [overridePersonaVoice, setOverridePersonaVoice] = React.useState<boolean>(false);
const [personaTextInterim, setPersonaTextInterim] = React.useState<string | null>(null);
const [pushToTalk, setPushToTalk] = React.useState(true);
const [stage, setStage] = React.useState<'ring' | 'declined' | 'connected' | 'ended'>('ring');
const responseAbortController = React.useRef<AbortController | null>(null);
// external state
const { push: routerPush } = useRouter();
const { chatLLMId, chatLLMDropdown } = useChatLLMDropdown();
const { chatTitle, messages } = useChatStore(state => {
const conversation = state.conversations.find(conversation => conversation.id === props.conversationId);
return {
chatTitle: conversation ? conversationTitle(conversation) : 'no conversation',
messages: conversation ? conversation.messages : [],
};
}, shallow);
const persona = SystemPurposes[props.personaId as SystemPurposeId] ?? undefined;
const personaCallStarters = persona?.call?.starters ?? undefined;
const personaVoiceId = overridePersonaVoice ? undefined : (persona?.voices?.elevenLabs?.voiceId ?? undefined);
const personaSystemMessage = persona?.systemMessage ?? undefined;
// hooks and speech
const [speechInterim, setSpeechInterim] = React.useState<SpeechResult | null>(null);
const onSpeechResultCallback = React.useCallback((result: SpeechResult) => {
setSpeechInterim(result.done ? null : { ...result });
if (result.done) {
const transcribed = result.transcript.trim();
if (transcribed.length >= 1)
setCallMessages(messages => [...messages, createDMessage('user', transcribed)]);
}
}, []);
const { isSpeechEnabled, isRecording, isRecordingAudio, isRecordingSpeech, startRecording, stopRecording, toggleRecording } = useSpeechRecognition(onSpeechResultCallback, 1000);
// derived state
const isRinging = stage === 'ring';
const isConnected = stage === 'connected';
const isDeclined = stage === 'declined';
const isEnded = stage === 'ended';
/// Sounds
// pickup / hangup
React.useEffect(() => {
!isRinging && playSoundUrl(isConnected ? '/sounds/chat-begin.mp3' : '/sounds/chat-end.mp3');
}, [isRinging, isConnected]);
// ringtone
usePlaySoundUrl(isRinging ? '/sounds/chat-ringtone.mp3' : null, 300, 2800 * 2);
/// CONNECTED
const handleCallStop = () => {
stopRecording();
setStage('ended');
};
// [E] pickup -> seed message and call timer
// FIXME: Overriding the voice will reset the call - not a desired behavior
React.useEffect(() => {
if (!isConnected) return;
// show the call timer
setCallElapsedTime('00:00');
const start = Date.now();
const interval = setInterval(() => {
const elapsedSeconds = Math.floor((Date.now() - start) / 1000);
const minutes = Math.floor(elapsedSeconds / 60);
const seconds = elapsedSeconds % 60;
setCallElapsedTime(`${minutes < 10 ? '0' : ''}${minutes}:${seconds < 10 ? '0' : ''}${seconds}`);
}, 1000);
// seed the first message
const phoneMessages = personaCallStarters || ['Hello?', 'Hey!'];
const firstMessage = phoneMessages[Math.floor(Math.random() * phoneMessages.length)];
setCallMessages([createDMessage('assistant', firstMessage)]);
// fire/forget
void EXPERIMENTAL_speakTextStream(firstMessage, personaVoiceId);
return () => clearInterval(interval);
}, [isConnected, personaCallStarters, personaVoiceId]);
// [E] persona streaming response - upon new user message
React.useEffect(() => {
// only act when we have a new user message
if (!isConnected || callMessages.length < 1 || callMessages[callMessages.length - 1].role !== 'user')
return;
switch (callMessages[callMessages.length - 1].text) {
// do not respond
case 'Stop.':
return;
// command: close the call
case 'Goodbye.':
setStage('ended');
setTimeout(() => {
void routerPush('/');
}, 2000);
return;
// command: regenerate answer
case 'Retry.':
case 'Try again.':
setCallMessages(messages => messages.slice(0, messages.length - 2));
return;
// command: restart chat
case 'Restart.':
setCallMessages([]);
return;
}
// bail if no llm selected
if (!chatLLMId) return;
// temp fix: when the chat has no messages, only assume a single system message
const chatMessages: { role: VChatMessageIn['role'], text: string }[] = messages.length > 0
? messages
: personaSystemMessage
? [{ role: 'system', text: personaSystemMessage }]
: [];
// 'prompt' for a "telephone call"
// FIXME: can easily run out of tokens - if this gets traction, we'll fix it
const callPrompt: VChatMessageIn[] = [
{ role: 'system', content: 'You are having a phone call. Your response style is brief and to the point, and according to your personality, defined below.' },
...chatMessages.map(message => ({ role: message.role, content: message.text })),
{ role: 'system', content: 'You are now on the phone call related to the chat above. Respect your personality and answer with short, friendly and accurate thoughtful lines.' },
...callMessages.map(message => ({ role: message.role, content: message.text })),
];
// perform completion
responseAbortController.current = new AbortController();
let finalText = '';
let error: any | null = null;
streamChat(chatLLMId, callPrompt, responseAbortController.current.signal, (updatedMessage: Partial<DMessage>) => {
const text = updatedMessage.text?.trim();
if (text) {
finalText = text;
setPersonaTextInterim(text);
}
}).catch((err: DOMException) => {
if (err?.name !== 'AbortError')
error = err;
}).finally(() => {
setPersonaTextInterim(null);
setCallMessages(messages => [...messages, createDMessage('assistant', finalText + (error ? ` (ERROR: ${error.message || error.toString()})` : ''))]);
// fire/forget
void EXPERIMENTAL_speakTextStream(finalText, personaVoiceId);
});
return () => {
responseAbortController.current?.abort();
responseAbortController.current = null;
};
}, [isConnected, callMessages, chatLLMId, messages, personaVoiceId, personaSystemMessage, routerPush]);
// [E] Message interrupter
const abortTrigger = isConnected && isRecordingSpeech;
React.useEffect(() => {
if (abortTrigger && responseAbortController.current) {
responseAbortController.current.abort();
responseAbortController.current = null;
}
// TODO: abort current speech
}, [abortTrigger]);
// [E] continuous speech recognition (reload)
const shouldStartRecording = isConnected && !pushToTalk && speechInterim === null && !isRecordingAudio;
React.useEffect(() => {
if (shouldStartRecording)
startRecording();
}, [shouldStartRecording, startRecording]);
// more derived state
const personaName = persona?.title ?? 'Unknown';
const isMicEnabled = isSpeechEnabled;
const isTTSEnabled = true;
const isEnabled = isMicEnabled && isTTSEnabled;
// pluggable UI
const menuItems = React.useMemo(() =>
<CallMenuItems
pushToTalk={pushToTalk} setPushToTalk={setPushToTalk}
override={overridePersonaVoice} setOverride={setOverridePersonaVoice} />
, [overridePersonaVoice, pushToTalk],
);
useLayoutPluggable(chatLLMDropdown, null, menuItems);
return <>
<Typography
level='h1'
sx={{
fontSize: { xs: '2.5rem', md: '3rem' },
textAlign: 'center',
mx: 2,
}}
>
{isConnected ? personaName : 'Hello'}
</Typography>
<CallAvatar
symbol={persona?.symbol || '?'}
imageUrl={persona?.imageUri}
isRinging={isRinging}
onClick={() => setAvatarClickCount(avatarClickCount + 1)}
/>
<CallStatus
callerName={isConnected ? undefined : personaName}
statusText={isRinging ? 'is calling you' : isDeclined ? 'call declined' : isEnded ? 'call ended' : callElapsedTime}
regardingText={chatTitle}
micError={!isMicEnabled} speakError={!isTTSEnabled}
/>
{/* Live Transcript, w/ streaming messages, audio indication, etc. */}
{(isConnected || isEnded) && (
<Card variant='soft' sx={{
flexGrow: 1,
minHeight: '15dvh', maxHeight: '24dvh',
overflow: 'auto',
width: '100%',
borderRadius: 'lg',
flexDirection: 'column-reverse',
}}>
{/* Messages in reverse order, for auto-scroll from the bottom */}
<Box sx={{ display: 'flex', flexDirection: 'column-reverse', gap: 1 }}>
{/* Listening... */}
{isRecording && (
<CallMessage
text={<>{speechInterim?.transcript ? speechInterim.transcript + ' ' : ''}<i>{speechInterim?.interimTranscript}</i></>}
variant={isRecordingSpeech ? 'solid' : 'outlined'}
role='user'
/>
)}
{/* Persona streaming text... */}
{!!personaTextInterim && (
<CallMessage
text={personaTextInterim}
variant='solid' color='neutral'
role='assistant'
/>
)}
{/* Messages (last 6 messages, in reverse order) */}
{callMessages.slice(-6).reverse().map((message) =>
<CallMessage
key={message.id}
text={message.text}
variant={message.role === 'assistant' ? 'solid' : 'soft'} color='neutral'
role={message.role} />,
)}
</Box>
</Card>
)}
{/* Call Buttons */}
<Box sx={{ width: '100%', display: 'flex', justifyContent: 'space-evenly' }}>
{/* [ringing] Decline / Accept */}
{isRinging && <CallButton Icon={CallEndIcon} text='Decline' color='danger' onClick={() => setStage('declined')} />}
{isRinging && isEnabled && <CallButton Icon={CallIcon} text='Accept' color='success' variant='soft' onClick={() => setStage('connected')} />}
{/* [Calling] Hang / PTT (mute not enabled yet) */}
{isConnected && <CallButton Icon={CallEndIcon} text='Hang up' color='danger' onClick={handleCallStop} />}
{isConnected && (pushToTalk
? <CallButton Icon={MicIcon} onClick={toggleRecording}
text={isRecordingSpeech ? 'Listening...' : isRecording ? 'Listening' : 'Push To Talk'}
variant={isRecordingSpeech ? 'solid' : isRecording ? 'soft' : 'outlined'} />
: null
// <CallButton disabled={true} Icon={MicOffIcon} onClick={() => setMicMuted(muted => !muted)}
// text={micMuted ? 'Muted' : 'Mute'}
// color={micMuted ? 'warning' : undefined} variant={micMuted ? 'solid' : 'outlined'} />
)}
{/* [ended] Back / Call Again */}
{(isEnded || isDeclined) && <Link noLinkStyle href='/'><CallButton Icon={ArrowBackIcon} text='Back' variant='soft' /></Link>}
{(isEnded || isDeclined) && <CallButton Icon={CallIcon} text='Call Again' color='success' variant='soft' onClick={() => setStage('connected')} />}
</Box>
{/* DEBUG state */}
{avatarClickCount > 10 && (avatarClickCount % 2 === 0) && (
<Card variant='outlined' sx={{ maxHeight: '25dvh', overflow: 'auto', whiteSpace: 'pre', py: 0, width: '100%' }}>
Special commands: Stop, Retry, Try Again, Restart, Goodbye.
{JSON.stringify({ isSpeechEnabled, isRecordingAudio, speechInterim }, null, 2)}
</Card>
)}
{/*{isEnded && <Card variant='solid' size='lg' color='primary'>*/}
{/* <CardContent>*/}
{/* <Typography>*/}
{/* Please rate the call quality, 1 to 5 - Just a Joke*/}
{/* </Typography>*/}
{/* </CardContent>*/}
{/*</Card>}*/}
</>;
}
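The call timer inside `CallUI` above formats the elapsed seconds as zero-padded `mm:ss` inline in the interval callback. Extracted as a standalone pure helper (hypothetical name, same logic), it reads:

```typescript
// Formats a duration in whole seconds as 'mm:ss', zero-padded, matching the call timer above.
function formatCallElapsed(elapsedSeconds: number): string {
  const minutes = Math.floor(elapsedSeconds / 60);
  const seconds = elapsedSeconds % 60;
  return `${minutes < 10 ? '0' : ''}${minutes}:${seconds < 10 ? '0' : ''}${seconds}`;
}
```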
+211
@@ -0,0 +1,211 @@
import * as React from 'react';
import { keyframes } from '@emotion/react';
import { Box, Button, Card, CardContent, IconButton, ListItemDecorator, Typography } from '@mui/joy';
import ArrowForwardIcon from '@mui/icons-material/ArrowForward';
import ChatIcon from '@mui/icons-material/Chat';
import CheckIcon from '@mui/icons-material/Check';
import CloseIcon from '@mui/icons-material/Close';
import MicIcon from '@mui/icons-material/Mic';
import RecordVoiceOverIcon from '@mui/icons-material/RecordVoiceOver';
import WarningIcon from '@mui/icons-material/Warning';
import { navigateBack } from '~/common/app.routes';
import { openLayoutPreferences } from '~/common/layout/store-applayout';
import { useCapabilityBrowserSpeechRecognition, useCapabilityElevenLabs } from '~/common/components/useCapabilities';
import { useChatStore } from '~/common/state/store-chats';
import { useUICounter } from '~/common/state/store-ui';
const cssRainbowBackgroundKeyframes = keyframes`
100%, 0% {
background-color: rgb(128, 0, 0);
}
8% {
background-color: rgb(102, 51, 0);
}
16% {
background-color: rgb(64, 64, 0);
}
25% {
background-color: rgb(38, 76, 0);
}
33% {
background-color: rgb(0, 89, 0);
}
41% {
background-color: rgb(0, 76, 41);
}
50% {
background-color: rgb(0, 64, 64);
}
58% {
background-color: rgb(0, 51, 102);
}
66% {
background-color: rgb(0, 0, 128);
}
75% {
background-color: rgb(63, 0, 128);
}
83% {
background-color: rgb(76, 0, 76);
}
91% {
background-color: rgb(102, 0, 51);
}`;
function StatusCard(props: { icon: React.JSX.Element, hasIssue: boolean, text: string, button?: React.JSX.Element }) {
return (
<Card sx={{ width: '100%' }}>
<CardContent sx={{ flexDirection: 'row' }}>
<ListItemDecorator>
{props.icon}
</ListItemDecorator>
<Typography level='title-md' color={props.hasIssue ? 'warning' : undefined} sx={{ flexGrow: 1 }}>
{props.text}
{props.button}
</Typography>
<ListItemDecorator>
{props.hasIssue ? <WarningIcon color='warning' /> : <CheckIcon color='success' />}
</ListItemDecorator>
</CardContent>
</Card>
);
}
export function CallWizard(props: { strict?: boolean, conversationId: string, children: React.ReactNode }) {
// state
const [chatEmptyOverride, setChatEmptyOverride] = React.useState(false);
const [recognitionOverride, setRecognitionOverride] = React.useState(false);
// external state
const recognition = useCapabilityBrowserSpeechRecognition();
const synthesis = useCapabilityElevenLabs();
const chatIsEmpty = useChatStore(state => {
const conversation = state.conversations.find(conversation => conversation.id === props.conversationId);
return !(conversation?.messages?.length);
});
const { novel, touch } = useUICounter('call-wizard');
// derived state
const overriddenEmptyChat = chatEmptyOverride || !chatIsEmpty;
const overriddenRecognition = recognitionOverride || recognition.mayWork;
const allGood = overriddenEmptyChat && overriddenRecognition && synthesis.mayWork;
const fatalGood = overriddenRecognition && synthesis.mayWork;
if (!novel && fatalGood)
return props.children;
const handleOverrideChatEmpty = () => setChatEmptyOverride(true);
const handleOverrideRecognition = () => setRecognitionOverride(true);
const handleConfigureElevenLabs = () => {
openLayoutPreferences(3);
};
const handleFinishButton = () => {
if (!allGood)
return navigateBack();
touch();
};
return <>
<Box sx={{ flexGrow: 0.5 }} />
<Typography level='title-lg' sx={{ fontSize: '3rem', fontWeight: 200, lineHeight: '1.5em', textAlign: 'center' }}>
Welcome to<br />
<Typography
component='span'
sx={{
backgroundColor: 'primary.solidActiveBg', mx: -0.5, px: 0.5,
animation: `${cssRainbowBackgroundKeyframes} 15s linear infinite`,
}}>
your first call
</Typography>
</Typography>
<Box sx={{ flexGrow: 0.5 }} />
<Typography level='body-lg'>
{/*Before you receive your first call, */}
Let&apos;s get you all set up.
</Typography>
{/* Chat Empty status */}
<StatusCard
icon={<ChatIcon />}
hasIssue={!overriddenEmptyChat}
text={overriddenEmptyChat ? 'Great! Your chat has messages.' : 'The chat is empty. Calls are effective when the caller has context.'}
button={overriddenEmptyChat ? undefined : (
<Button variant='outlined' onClick={handleOverrideChatEmpty} sx={{ mx: 1 }}>
Ignore
</Button>
)}
/>
{/* Add the speech to text feature status */}
<StatusCard
icon={<MicIcon />}
text={
((overriddenRecognition && !recognition.warnings.length) ? 'Speech recognition should be good to go.' : 'There might be a speech recognition issue.')
+ (recognition.isApiAvailable ? '' : ' Your browser does not support the speech recognition API.')
+ (recognition.isDeviceNotSupported ? ' Your device does not provide this feature.' : '')
+ (recognition.warnings.length ? ' ⚠️ ' + recognition.warnings.join(' · ') : '')
}
button={overriddenRecognition ? undefined : (
<Button variant='outlined' onClick={handleOverrideRecognition} sx={{ mx: 1 }}>
Ignore
</Button>
)}
hasIssue={!overriddenRecognition}
/>
{/* Text to Speech status */}
<StatusCard
icon={<RecordVoiceOverIcon />}
text={
(synthesis.mayWork ? 'Voice synthesis should be ready.' : 'There might be an issue with ElevenLabs voice synthesis.')
+ (synthesis.isConfiguredServerSide ? '' : (synthesis.isConfiguredClientSide ? '' : ' Please add your API key in the settings.'))
}
button={synthesis.mayWork ? undefined : (
<Button variant='outlined' onClick={handleConfigureElevenLabs} sx={{ mx: 1 }}>
Configure
</Button>
)}
hasIssue={!synthesis.mayWork}
/>
{/*<Typography>*/}
{/* 1. To start a call, click the "Accept" button when you receive an incoming call.*/}
{/* 2. If your mic is enabled, you'll see a "Push to Talk" button. Press and hold it to speak, then release it to stop speaking.*/}
{/* 3. If your mic is disabled, you can still type your messages in the chat and the assistant will respond.*/}
{/* 4. During the call, you can control the voice synthesis settings from the menu in the top right corner.*/}
{/* 5. To end the call, click the "Hang up" button.*/}
{/*</Typography>*/}
<Box sx={{ flexGrow: 2 }} />
{/* bottom: text & button */}
<Box sx={{ display: 'flex', justifyContent: 'space-around', alignItems: 'center', width: '100%', gap: 2, px: 0.5 }}>
<Typography level='body-lg'>
{allGood ? 'Ready, Set, Call' : 'Please resolve the issues above before proceeding with the call'}
</Typography>
<IconButton
size='lg' variant={allGood ? 'soft' : 'solid'} color={allGood ? 'success' : 'danger'}
onClick={handleFinishButton} sx={{ borderRadius: '50px' }}
>
{allGood ? <ArrowForwardIcon sx={{ fontSize: '1.5em' }} /> : <CloseIcon sx={{ fontSize: '1.5em' }} />}
</IconButton>
</Box>
<Box sx={{ flexGrow: 0.5 }} />
</>;
}
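`CallWizard` above gates entry on three checks, two of which the user can override with "Ignore"; only the empty-chat check is non-fatal (returning users skip the wizard when `fatalGood` holds). The derived booleans can be written as a pure function, with hypothetical names mirroring the component state:

```typescript
interface WizardChecks {
  chatIsEmpty: boolean;
  chatEmptyOverride: boolean;
  recognitionMayWork: boolean;
  recognitionOverride: boolean;
  synthesisMayWork: boolean;
}

// allGood enables the forward button; fatalGood alone lets non-first-time users skip the wizard.
function wizardGates(c: WizardChecks): { allGood: boolean; fatalGood: boolean } {
  const overriddenEmptyChat = c.chatEmptyOverride || !c.chatIsEmpty;
  const overriddenRecognition = c.recognitionOverride || c.recognitionMayWork;
  return {
    allGood: overriddenEmptyChat && overriddenRecognition && c.synthesisMayWork,
    fatalGood: overriddenRecognition && c.synthesisMayWork,
  };
}
```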
+48
@@ -0,0 +1,48 @@
import * as React from 'react';
import { keyframes } from '@emotion/react';
import { Avatar, Box } from '@mui/joy';
const cssScaleKeyframes = keyframes`
0% {
transform: scale(1);
}
50% {
transform: scale(1.2);
}
100% {
transform: scale(1);
}`;
export function CallAvatar(props: { symbol: string, imageUrl?: string, isRinging: boolean, onClick: () => void }) {
return (
<Avatar
variant='soft' color='neutral'
onClick={props.onClick}
src={props.imageUrl}
sx={{
'--Avatar-size': { xs: '160px', md: '200px' },
'--variant-borderWidth': '4px',
boxShadow: !props.imageUrl ? 'md' : null,
fontSize: { xs: '100px', md: '120px' },
}}
>
{/* As fallback, show the large Persona Symbol */}
{!props.imageUrl && (
<Box
sx={{
...(props.isRinging
? { animation: `${cssScaleKeyframes} 1.4s ease-in-out infinite` }
: {}),
}}
>
{props.symbol}
</Box>
)}
</Avatar>
);
}
+43
@@ -0,0 +1,43 @@
import * as React from 'react';
import { Box, ColorPaletteProp, IconButton, Typography, VariantProp } from '@mui/joy';
/**
* Large button to operate the call, e.g.
* --------
* | 🎤 |
* | Mute |
* --------
*/
export function CallButton(props: {
Icon: React.FC, text: string,
variant?: VariantProp, color?: ColorPaletteProp, disabled?: boolean,
onClick?: () => void,
}) {
return (
<Box
onClick={() => !props.disabled && props.onClick?.()}
sx={{
display: 'flex', flexDirection: 'column', alignItems: 'center',
gap: { xs: 1, md: 2 },
}}
>
<IconButton
disabled={props.disabled} variant={props.variant || 'solid'} color={props.color}
sx={{
'--IconButton-size': { xs: '4.2rem', md: '5rem' },
borderRadius: '50%',
// boxShadow: 'lg',
}}>
<props.Icon />
</IconButton>
<Typography level='title-md' variant={props.disabled ? 'soft' : undefined}>
{props.text}
</Typography>
</Box>
);
}
+33
@@ -0,0 +1,33 @@
import * as React from 'react';
import { Chip, ColorPaletteProp, VariantProp } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
import { VChatMessageIn } from '~/modules/llms/transports/chatGenerate';
export function CallMessage(props: {
text?: string | React.JSX.Element,
variant?: VariantProp, color?: ColorPaletteProp,
role: VChatMessageIn['role'],
sx?: SxProps,
}) {
return (
<Chip
color={props.color} variant={props.variant}
sx={{
alignSelf: props.role === 'user' ? 'end' : 'start',
whiteSpace: 'break-spaces',
borderRadius: 'lg',
mt: 'auto',
// boxShadow: 'md',
py: 1,
...(props.sx || {}),
}}
>
{props.text}
</Chip>
);
}
+47
@@ -0,0 +1,47 @@
import * as React from 'react';
import { Box, Typography } from '@mui/joy';
import { InlineError } from '~/common/components/InlineError';
/**
* A status message for the call, such as:
*
* $Name
* "Connecting..." or "Call ended",
* re: $Regarding
*/
export function CallStatus(props: {
callerName?: string,
statusText: string,
regardingText?: string,
micError: boolean, speakError: boolean,
// llmComponent?: React.JSX.Element,
}) {
return (
<Box sx={{ display: 'flex', flexDirection: 'column' }}>
{!!props.callerName && <Typography level='h3' sx={{ textAlign: 'center' }}>
<b>{props.callerName}</b>
</Typography>}
{/*{props.llmComponent}*/}
<Typography level='body-md' sx={{ textAlign: 'center' }}>
{props.statusText}
</Typography>
{!!props.regardingText && <Typography level='body-md' sx={{ textAlign: 'center', mt: 0 }}>
re: {props.regardingText}
</Typography>}
{props.micError && <InlineError
severity='danger' error='But this browser does not support speech recognition... 🤦‍♀️ - Try Chrome on Windows?' />}
{props.speakError && <InlineError
severity='danger' error='And text-to-speech is not configured... 🤦‍♀️ - Configure it in Settings?' />}
</Box>
);
}
+344 -143
@@ -1,96 +1,132 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
import { Box } from '@mui/joy';
import ForkRightIcon from '@mui/icons-material/ForkRight';
import { CmdRunBrowse } from '~/modules/browse/browse.client';
import { CmdRunProdia } from '~/modules/prodia/prodia.client';
import { CmdRunReact } from '~/modules/aifn/react/react';
import { DiagramConfig, DiagramsModal } from '~/modules/aifn/digrams/DiagramsModal';
import { FlattenerModal } from '~/modules/aifn/flatten/FlattenerModal';
import { TradeConfig, TradeModal } from '~/modules/trade/TradeModal';
import { imaginePromptFromText } from '~/modules/aifn/imagine/imaginePromptFromText';
import { useModelsStore } from '~/modules/llms/store-llms';
import { speakText } from '~/modules/elevenlabs/elevenlabs.client';
import { useBrowseStore } from '~/modules/browse/store-module-browsing';
import { useChatLLM, useModelsStore } from '~/modules/llms/store-llms';
import { ConfirmationModal } from '~/common/components/ConfirmationModal';
import { createDMessage, DMessage, useChatStore } from '~/common/state/store-chats';
import { useLayoutPluggable } from '~/common/layout/store-applayout';
import { GlobalShortcutItem, ShortcutKeyName, useGlobalShortcuts } from '~/common/components/useGlobalShortcut';
import { addSnackbar, removeSnackbar } from '~/common/components/useSnackbarsStore';
import { createDMessage, DConversationId, DMessage, getConversation, useConversation } from '~/common/state/store-chats';
import { openLayoutLLMOptions, useLayoutPluggable } from '~/common/layout/store-applayout';
import { useUXLabsStore } from '~/common/state/store-ux-labs';
import { ChatDrawerItems } from './components/applayout/ChatDrawerItems';
import type { ComposerOutputMultiPart } from './components/composer/composer.types';
import { ChatDrawerItemsMemo } from './components/applayout/ChatDrawerItems';
import { ChatDropdowns } from './components/applayout/ChatDropdowns';
import { ChatMenuItems } from './components/applayout/ChatMenuItems';
import { ChatMessageList } from './components/ChatMessageList';
import { CmdAddRoleMessage, extractCommands } from './commands';
import { CmdAddRoleMessage, CmdHelp, createCommandsHelpMessage, extractCommands } from './editors/commands';
import { Composer } from './components/composer/Composer';
import { Ephemerals } from './components/Ephemerals';
import { TradeConfig, TradeModal } from './trade/TradeModal';
import { usePanesManager } from './components/usePanesManager';
import { runAssistantUpdatingState } from './editors/chat-stream';
import { runBrowseUpdatingState } from './editors/browse-load';
import { runImageGenerationUpdatingState } from './editors/image-generate';
import { runReActUpdatingState } from './editors/react-tangent';
const SPECIAL_ID_ALL_CHATS = 'all-chats';
/**
* Mode: how to treat the input from the Composer
*/
export type ChatModeId = 'immediate' | 'write-user' | 'react' | 'draw-imagine' | 'draw-imagine-plus';
// definition of chat modes
export type ChatModeId = 'immediate' | 'immediate-follow-up' | 'react' | 'write-user';
export const ChatModeItems: { [key in ChatModeId]: { label: string; description: string | React.JSX.Element; experimental?: boolean } } = {
'immediate': {
label: 'Chat',
description: 'AI-powered responses',
},
'immediate-follow-up': {
label: 'Chat & Follow-up',
description: 'Chat with follow-up questions',
experimental: true,
},
'react': {
label: 'Reason+Act',
description: 'Answer your questions with ReAct and search',
},
'write-user': {
label: 'Write',
description: 'No AI responses',
},
};
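The `ChatModeItems` record above keys UI metadata by the `ChatModeId` string-literal union, so the compiler guarantees every mode has a label and description. A standalone sketch (assumed shapes, not the app's actual types) shows how a mode picker might enumerate it while hiding experimental entries:

```typescript
// Sketch only: a mapped record keyed by a string-literal union forces an
// entry per mode; names here are illustrative, not the app's exports.
type SketchChatModeId = 'immediate' | 'immediate-follow-up' | 'react' | 'write-user';

interface SketchModeInfo {
  label: string;
  description: string;
  experimental?: boolean;
}

const sketchModes: { [key in SketchChatModeId]: SketchModeInfo } = {
  'immediate': { label: 'Chat', description: 'AI-powered responses' },
  'immediate-follow-up': { label: 'Chat & Follow-up', description: 'Chat with follow-up questions', experimental: true },
  'react': { label: 'Reason+Act', description: 'Answer your questions with ReAct and search' },
  'write-user': { label: 'Write', description: 'No AI responses' },
};

// A dropdown could list only the stable (non-experimental) modes.
function listStableModes(): { id: SketchChatModeId; label: string }[] {
  return (Object.keys(sketchModes) as SketchChatModeId[])
    .filter(id => !sketchModes[id].experimental)
    .map(id => ({ id, label: sketchModes[id].label }));
}
```

Adding a new member to the union without a matching record entry becomes a compile error, which is the main benefit of this pattern over a plain object.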
const SPECIAL_ID_WIPE_ALL: DConversationId = 'wipe-chats';
export function AppChat() {
// state
const [chatModeId, setChatModeId] = React.useState<ChatModeId>('immediate');
const [isMessageSelectionMode, setIsMessageSelectionMode] = React.useState(false);
const [diagramConfig, setDiagramConfig] = React.useState<DiagramConfig | null>(null);
const [tradeConfig, setTradeConfig] = React.useState<TradeConfig | null>(null);
const [clearConfirmationId, setClearConfirmationId] = React.useState<string | null>(null);
const [deleteConfirmationId, setDeleteConfirmationId] = React.useState<string | null>(null);
const [flattenConversationId, setFlattenConversationId] = React.useState<string | null>(null);
const [clearConversationId, setClearConversationId] = React.useState<DConversationId | null>(null);
const [deleteConversationId, setDeleteConversationId] = React.useState<DConversationId | null>(null);
const [flattenConversationId, setFlattenConversationId] = React.useState<DConversationId | null>(null);
const showNextTitle = React.useRef(false);
const composerTextAreaRef = React.useRef<HTMLTextAreaElement>(null);
// external state
const { activeConversationId, isConversationEmpty, duplicateConversation, deleteAllConversations, setMessages, systemPurposeId, setAutoTitle } = useChatStore(state => {
const conversation = state.conversations.find(conversation => conversation.id === state.activeConversationId);
return {
activeConversationId: state.activeConversationId,
isConversationEmpty: conversation ? !conversation.messages.length : true,
// conversationsCount: state.conversations.length,
duplicateConversation: state.duplicateConversation,
deleteAllConversations: state.deleteAllConversations,
setMessages: state.setMessages,
systemPurposeId: conversation?.systemPurposeId ?? null,
setAutoTitle: state.setAutoTitle,
};
}, shallow);
const { chatLLM } = useChatLLM();
const {
chatPanes,
focusedConversationId,
navigateHistoryInFocusedPane,
openConversationInFocusedPane,
openConversationInSplitPane,
setFocusedPaneIndex,
} = usePanesManager();
const {
title: focusedChatTitle,
chatIdx: focusedChatNumber,
isChatEmpty: isFocusedChatEmpty,
areChatsEmpty,
newConversationId,
_remove_systemPurposeId: focusedSystemPurposeId,
prependNewConversation,
branchConversation,
deleteConversation,
wipeAllConversations,
setMessages,
} = useConversation(focusedConversationId);
// Window actions
const chatPaneIDs = chatPanes.length > 0 ? chatPanes.map(pane => pane.conversationId) : [null];
const setActivePaneIndex = React.useCallback((idx: number) => {
setFocusedPaneIndex(idx);
}, [setFocusedPaneIndex]);
const setFocusedConversationId = React.useCallback((conversationId: DConversationId | null) => {
conversationId && openConversationInFocusedPane(conversationId);
}, [openConversationInFocusedPane]);
const openSplitConversationId = React.useCallback((conversationId: DConversationId | null) => {
conversationId && openConversationInSplitPane(conversationId);
}, [openConversationInSplitPane]);
const handleNavigateHistory = React.useCallback((direction: 'back' | 'forward') => {
if (navigateHistoryInFocusedPane(direction))
showNextTitle.current = true;
}, [navigateHistoryInFocusedPane]);
// [0 to 1] create a conversation if there's none active
React.useEffect(() => {
if (!activeConversationId)
useChatStore.getState().conversations.length === 0 && useChatStore.getState().createConversation();
}, [activeConversationId]);
if (showNextTitle.current) {
showNextTitle.current = false;
const title = (focusedChatNumber >= 0 ? `#${focusedChatNumber + 1} · ` : '') + (focusedChatTitle || 'New Chat');
const id = addSnackbar({ key: 'focused-title', message: title, type: 'title' });
return () => removeSnackbar(id);
}
}, [focusedChatNumber, focusedChatTitle]);
const handleExecuteConversation = async (chatModeId: ChatModeId, conversationId: string, history: DMessage[]) => {
// Execution
const _handleExecute = React.useCallback(async (chatModeId: ChatModeId, conversationId: DConversationId, history: DMessage[]) => {
const { chatLLMId } = useModelsStore.getState();
if (!conversationId || !chatLLMId) return;
if (!chatModeId || !conversationId || !chatLLMId) return;
// /command: overrides the chat mode
// "/command ...": overrides the chat mode
const lastMessage = history.length > 0 ? history[history.length - 1] : null;
if (lastMessage?.role === 'user') {
const pieces = extractCommands(lastMessage.text);
if (pieces.length == 2 && pieces[0].type === 'cmd' && pieces[1].type === 'text') {
const command = pieces[0].value;
const prompt = pieces[1].value;
const [command, prompt] = [pieces[0].value, pieces[1].value];
if (CmdRunProdia.includes(command)) {
setMessages(conversationId, history);
return await runImageGenerationUpdatingState(conversationId, prompt);
@@ -99,147 +135,302 @@ export function AppChat() {
setMessages(conversationId, history);
return await runReActUpdatingState(conversationId, prompt, chatLLMId);
}
if (CmdRunBrowse.includes(command) && prompt?.trim() && useBrowseStore.getState().enableCommandBrowse) {
setMessages(conversationId, history);
return await runBrowseUpdatingState(conversationId, prompt);
}
if (CmdAddRoleMessage.includes(command)) {
lastMessage.role = command.startsWith('/s') ? 'system' : command.startsWith('/a') ? 'assistant' : 'user';
lastMessage.sender = 'Bot';
lastMessage.text = prompt;
return setMessages(conversationId, history);
}
if (CmdHelp.includes(command)) {
return setMessages(conversationId, [...history, createCommandsHelpMessage()]);
}
}
}
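The branch above dispatches when the last user message starts with a slash command, splitting it into a command token and the remaining prompt. A hypothetical helper (not the app's actual `extractCommands`) illustrates the split:

```typescript
// Hypothetical sketch of the "/command prompt" split performed above;
// the real extractCommands returns typed pieces, this returns a pair or null.
function sketchExtractCommand(text: string): { command: string; prompt: string } | null {
  // require a leading "/word" followed by at least one non-empty prompt token
  const match = text.match(/^(\/\w+)\s+([\s\S]+)$/);
  if (!match) return null;
  return { command: match[1], prompt: match[2].trim() };
}
```

A bare command with no prompt (e.g. `/help`) yields `null` here, whereas the real code handles that case separately.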
// synchronous long-duration tasks, which update the state as they go
if (chatModeId && chatLLMId && systemPurposeId) {
if (chatLLMId && focusedSystemPurposeId) {
switch (chatModeId) {
case 'immediate':
case 'immediate-follow-up':
return await runAssistantUpdatingState(conversationId, history, chatLLMId, systemPurposeId, true, chatModeId === 'immediate-follow-up');
return await runAssistantUpdatingState(conversationId, history, chatLLMId, focusedSystemPurposeId);
case 'write-user':
return setMessages(conversationId, history);
case 'react':
if (!lastMessage?.text)
break;
setMessages(conversationId, history);
return await runReActUpdatingState(conversationId, lastMessage.text, chatLLMId);
case 'write-user':
setMessages(conversationId, history);
return;
case 'draw-imagine':
case 'draw-imagine-plus':
if (!lastMessage?.text)
break;
const imagePrompt = chatModeId == 'draw-imagine-plus'
? await imaginePromptFromText(lastMessage.text) || 'An error sign.'
: lastMessage.text;
setMessages(conversationId, history.map(message => message.id !== lastMessage.id ? message : {
...message,
text: `${CmdRunProdia[0]} ${imagePrompt}`,
}));
return await runImageGenerationUpdatingState(conversationId, imagePrompt);
}
}
// ISSUE: if we're here, it means we couldn't do the job, at least sync the history
console.log('handleExecuteConversation: issue running', conversationId, lastMessage);
console.log('handleExecuteConversation: issue running', chatModeId, conversationId, lastMessage);
setMessages(conversationId, history);
}, [focusedSystemPurposeId, setMessages]);
const handleComposerAction = (chatModeId: ChatModeId, conversationId: DConversationId, multiPartMessage: ComposerOutputMultiPart): boolean => {
// validate inputs
if (multiPartMessage.length !== 1 || multiPartMessage[0].type !== 'text-block') {
addSnackbar({
key: 'chat-composer-action-invalid',
message: 'Only a single text part is supported for now.',
type: 'issue',
overrides: {
autoHideDuration: 2000,
},
});
return false;
}
const userText = multiPartMessage[0].text;
// find conversation
const conversation = getConversation(conversationId);
if (!conversation)
return false;
// start execution (async)
void _handleExecute(chatModeId, conversationId, [
...conversation.messages,
createDMessage('user', userText),
]);
return true;
};
const _findConversation = (conversationId: string) =>
conversationId ? useChatStore.getState().conversations.find(c => c.id === conversationId) ?? null : null;
const handleConversationExecuteHistory = async (conversationId: DConversationId, history: DMessage[]) =>
await _handleExecute('immediate', conversationId, history);
const handleSendUserMessage = async (conversationId: string, userText: string) => {
const conversation = _findConversation(conversationId);
const handleMessageRegenerateLast = React.useCallback(async () => {
const focusedConversation = getConversation(focusedConversationId);
if (focusedConversation?.messages?.length) {
const lastMessage = focusedConversation.messages[focusedConversation.messages.length - 1];
return await _handleExecute('immediate', focusedConversation.id, lastMessage.role === 'assistant'
? focusedConversation.messages.slice(0, -1)
: [...focusedConversation.messages],
);
}
}, [focusedConversationId, _handleExecute]);
const handleTextDiagram = async (diagramConfig: DiagramConfig | null) => setDiagramConfig(diagramConfig);
const handleTextImaginePlus = async (conversationId: DConversationId, messageText: string) => {
const conversation = getConversation(conversationId);
if (conversation)
return await handleExecuteConversation(chatModeId, conversationId, [...conversation.messages, createDMessage('user', userText)]);
return await _handleExecute('draw-imagine-plus', conversationId, [
...conversation.messages,
createDMessage('user', messageText),
]);
};
const handleExecuteChatHistory = async (conversationId: string, history: DMessage[]) =>
await handleExecuteConversation(chatModeId, conversationId, history);
const handleTextSpeak = async (text: string) => {
await speakText(text);
};
const handleImagineFromText = async (conversationId: string, messageText: string) => {
const conversation = _findConversation(conversationId);
if (conversation) {
const prompt = await imaginePromptFromText(messageText);
if (prompt)
return await handleExecuteConversation('immediate', conversationId, [...conversation.messages, createDMessage('user', `${CmdRunProdia[0]} ${prompt}`)]);
// Chat actions
const handleConversationNew = React.useCallback(() => {
// activate an existing new conversation if present, or create another
setFocusedConversationId(newConversationId
? newConversationId
: prependNewConversation(focusedSystemPurposeId ?? undefined),
);
composerTextAreaRef.current?.focus();
}, [focusedSystemPurposeId, newConversationId, prependNewConversation, setFocusedConversationId]);
const handleConversationImportDialog = () => setTradeConfig({ dir: 'import' });
const handleConversationExport = (conversationId: DConversationId | null) => setTradeConfig({ dir: 'export', conversationId });
const handleConversationBranch = React.useCallback((conversationId: DConversationId, messageId: string | null): DConversationId | null => {
showNextTitle.current = true;
const branchedConversationId = branchConversation(conversationId, messageId);
addSnackbar({
key: 'branch-conversation',
message: 'Branch started.',
type: 'success',
overrides: {
autoHideDuration: 3000,
startDecorator: <ForkRightIcon />,
},
});
const branchInAltPanel = useUXLabsStore.getState().labsSplitBranching;
if (branchInAltPanel)
openSplitConversationId(branchedConversationId);
else
setFocusedConversationId(branchedConversationId);
return branchedConversationId;
}, [branchConversation, openSplitConversationId, setFocusedConversationId]);
const handleConversationFlatten = (conversationId: DConversationId) => setFlattenConversationId(conversationId);
const handleConfirmedClearConversation = React.useCallback(() => {
if (clearConversationId) {
setMessages(clearConversationId, []);
setClearConversationId(null);
}
};
}, [clearConversationId, setMessages]);
const handleConversationClear = (conversationId: DConversationId) => setClearConversationId(conversationId);
const handleClearConversation = (conversationId: string) => setClearConfirmationId(conversationId);
const handleConfirmedClearConversation = () => {
if (clearConfirmationId) {
setMessages(clearConfirmationId, []);
setAutoTitle(clearConfirmationId, '');
setClearConfirmationId(null);
}
};
const handleDeleteAllConversations = () => setDeleteConfirmationId(SPECIAL_ID_ALL_CHATS);
const handleConfirmedDeleteConversation = () => {
if (deleteConfirmationId) {
if (deleteConfirmationId === SPECIAL_ID_ALL_CHATS) {
deleteAllConversations();
}// else
// deleteConversation(deleteConfirmationId);
setDeleteConfirmationId(null);
if (deleteConversationId) {
let nextConversationId: DConversationId | null;
if (deleteConversationId === SPECIAL_ID_WIPE_ALL)
nextConversationId = wipeAllConversations(focusedSystemPurposeId ?? undefined);
else
nextConversationId = deleteConversation(deleteConversationId);
setFocusedConversationId(nextConversationId);
setDeleteConversationId(null);
}
};
const handleConversationsDeleteAll = () => setDeleteConversationId(SPECIAL_ID_WIPE_ALL);
const handleImportConversation = () => setTradeConfig({ dir: 'import' });
const handleConversationDelete = React.useCallback((conversationId: DConversationId, bypassConfirmation: boolean) => {
if (bypassConfirmation)
setFocusedConversationId(deleteConversation(conversationId));
else
setDeleteConversationId(conversationId);
}, [deleteConversation, setFocusedConversationId]);
const handleExportConversation = (conversationId: string | null) => setTradeConfig({ dir: 'export', conversationId });
const handleFlattenConversation = (conversationId: string) => setFlattenConversationId(conversationId);
// Shortcuts
const handleOpenChatLlmOptions = React.useCallback(() => {
const { chatLLMId } = useModelsStore.getState();
if (!chatLLMId) return;
openLayoutLLMOptions(chatLLMId);
}, []);
const shortcuts = React.useMemo((): GlobalShortcutItem[] => [
['o', true, true, false, handleOpenChatLlmOptions],
['r', true, true, false, handleMessageRegenerateLast],
['n', true, false, true, handleConversationNew],
['b', true, false, true, () => isFocusedChatEmpty || focusedConversationId && handleConversationBranch(focusedConversationId, null)],
['x', true, false, true, () => isFocusedChatEmpty || focusedConversationId && handleConversationClear(focusedConversationId)],
['d', true, false, true, () => focusedConversationId && handleConversationDelete(focusedConversationId, false)],
[ShortcutKeyName.Left, true, false, true, () => handleNavigateHistory('back')],
[ShortcutKeyName.Right, true, false, true, () => handleNavigateHistory('forward')],
], [focusedConversationId, handleConversationBranch, handleConversationDelete, handleConversationNew, handleMessageRegenerateLast, handleNavigateHistory, handleOpenChatLlmOptions, isFocusedChatEmpty]);
useGlobalShortcuts(shortcuts);
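The shortcut table above encodes each binding as a tuple of key plus modifier flags plus handler. A hypothetical reduction of that shape (the real `GlobalShortcutItem` layout may differ; the flag order here is an assumption) shows how such a table can be matched against a key event:

```typescript
// Assumed tuple layout: [key, ctrl, shift, alt, action] — illustrative only.
type SketchShortcut = [key: string, ctrl: boolean, shift: boolean, alt: boolean, action: () => void];

// Run the first shortcut whose key and modifiers all match the event.
function dispatchShortcut(
  shortcuts: SketchShortcut[],
  ev: { key: string; ctrlKey: boolean; shiftKey: boolean; altKey: boolean },
): boolean {
  for (const [key, ctrl, shift, alt, action] of shortcuts) {
    if (ev.key.toLowerCase() === key.toLowerCase()
      && ev.ctrlKey === ctrl && ev.shiftKey === shift && ev.altKey === alt) {
      action();
      return true;
    }
  }
  return false;
}
```

Requiring exact modifier equality (rather than "at least these modifiers") keeps Ctrl+Shift+R from also firing a plain Ctrl+R binding.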
// Pluggable ApplicationBar components
const centerItems = React.useMemo(() =>
<ChatDropdowns conversationId={activeConversationId} />,
[activeConversationId],
<ChatDropdowns conversationId={focusedConversationId} />,
[focusedConversationId],
);
const drawerItems = React.useMemo(() =>
<ChatDrawerItems
conversationId={activeConversationId}
onImportConversation={handleImportConversation}
onDeleteAllConversations={handleDeleteAllConversations}
<ChatDrawerItemsMemo
activeConversationId={focusedConversationId}
disableNewButton={isFocusedChatEmpty}
onConversationActivate={setFocusedConversationId}
onConversationDelete={handleConversationDelete}
onConversationImportDialog={handleConversationImportDialog}
onConversationNew={handleConversationNew}
onConversationsDeleteAll={handleConversationsDeleteAll}
/>,
[activeConversationId],
[focusedConversationId, handleConversationDelete, handleConversationNew, isFocusedChatEmpty, setFocusedConversationId],
);
const menuItems = React.useMemo(() =>
<ChatMenuItems
conversationId={activeConversationId} isConversationEmpty={isConversationEmpty}
isMessageSelectionMode={isMessageSelectionMode} setIsMessageSelectionMode={setIsMessageSelectionMode}
onClearConversation={handleClearConversation}
onDuplicateConversation={duplicateConversation}
onExportConversation={handleExportConversation}
onFlattenConversation={handleFlattenConversation}
conversationId={focusedConversationId}
hasConversations={!areChatsEmpty}
isConversationEmpty={isFocusedChatEmpty}
isMessageSelectionMode={isMessageSelectionMode}
setIsMessageSelectionMode={setIsMessageSelectionMode}
onConversationBranch={handleConversationBranch}
onConversationClear={handleConversationClear}
onConversationExport={handleConversationExport}
onConversationFlatten={handleConversationFlatten}
/>,
[activeConversationId, duplicateConversation, isConversationEmpty, isMessageSelectionMode],
[areChatsEmpty, focusedConversationId, handleConversationBranch, isFocusedChatEmpty, isMessageSelectionMode],
);
useLayoutPluggable(centerItems, drawerItems, menuItems);
return <>
<ChatMessageList
conversationId={activeConversationId}
isMessageSelectionMode={isMessageSelectionMode} setIsMessageSelectionMode={setIsMessageSelectionMode}
onExecuteChatHistory={handleExecuteChatHistory}
onImagineFromText={handleImagineFromText}
sx={{
flexGrow: 1,
backgroundColor: 'background.level1',
overflowY: 'auto', // overflowY: 'hidden'
minHeight: 96,
}} />
<Box sx={{
flexGrow: 1,
display: 'flex', flexDirection: { xs: 'column', md: 'row' },
overflow: 'clip',
}}>
<Ephemerals
conversationId={activeConversationId}
sx={{
// flexGrow: 0.1,
flexShrink: 0.5,
overflowY: 'auto',
minHeight: 64,
}} />
{chatPaneIDs.map((_conversationId, idx) => (
<Box key={'chat-pane-' + idx} onClick={() => setActivePaneIndex(idx)} sx={{
flexGrow: 1, flexBasis: 1,
display: 'flex', flexDirection: 'column',
overflow: 'clip',
}}>
<ChatMessageList
conversationId={_conversationId}
chatLLMContextTokens={chatLLM?.contextTokens}
isMessageSelectionMode={isMessageSelectionMode}
setIsMessageSelectionMode={setIsMessageSelectionMode}
onConversationBranch={handleConversationBranch}
onConversationExecuteHistory={handleConversationExecuteHistory}
onTextDiagram={handleTextDiagram}
onTextImagine={handleTextImaginePlus}
onTextSpeak={handleTextSpeak}
sx={{
flexGrow: 1,
backgroundColor: 'background.level1',
overflowY: 'auto',
minHeight: 96,
// outline the current focused pane
...(chatPaneIDs.length < 2 ? {}
: (_conversationId === focusedConversationId)
? {
border: '2px solid',
borderColor: 'primary.solidBg',
} : {
padding: '2px',
}),
}}
/>
<Ephemerals
conversationId={_conversationId}
sx={{
// flexGrow: 0.1,
flexShrink: 0.5,
overflowY: 'auto',
minHeight: 64,
}} />
</Box>
))}
</Box>
<Composer
conversationId={activeConversationId} messageId={null}
chatModeId={chatModeId} setChatModeId={setChatModeId}
isDeveloperMode={systemPurposeId === 'Developer'}
onSendMessage={handleSendUserMessage}
chatLLM={chatLLM}
composerTextAreaRef={composerTextAreaRef}
conversationId={focusedConversationId}
isDeveloperMode={focusedSystemPurposeId === 'Developer'}
onAction={handleComposerAction}
sx={{
zIndex: 21, // position: 'sticky', bottom: 0,
backgroundColor: 'background.surface',
@@ -249,25 +440,35 @@ export function AppChat() {
}} />
{/* Import / Export */}
{!!tradeConfig && <TradeModal config={tradeConfig} onClose={() => setTradeConfig(null)} />}
{/* Diagrams */}
{!!diagramConfig && <DiagramsModal config={diagramConfig} onClose={() => setDiagramConfig(null)} />}
{/* Flatten */}
{!!flattenConversationId && <FlattenerModal conversationId={flattenConversationId} onClose={() => setFlattenConversationId(null)} />}
{!!flattenConversationId && (
<FlattenerModal
conversationId={flattenConversationId}
onConversationBranch={handleConversationBranch}
onClose={() => setFlattenConversationId(null)}
/>
)}
{/* Import / Export */}
{!!tradeConfig && <TradeModal config={tradeConfig} onConversationActivate={setFocusedConversationId} onClose={() => setTradeConfig(null)} />}
{/* [confirmation] Reset Conversation */}
{!!clearConfirmationId && <ConfirmationModal
open onClose={() => setClearConfirmationId(null)} onPositive={handleConfirmedClearConversation}
confirmationText={'Are you sure you want to discard all the messages?'} positiveActionText={'Clear conversation'}
{!!clearConversationId && <ConfirmationModal
open onClose={() => setClearConversationId(null)} onPositive={handleConfirmedClearConversation}
confirmationText={'Are you sure you want to discard all messages?'} positiveActionText={'Clear conversation'}
/>}
{/* [confirmation] Delete All */}
{!!deleteConfirmationId && <ConfirmationModal
open onClose={() => setDeleteConfirmationId(null)} onPositive={handleConfirmedDeleteConversation}
confirmationText={deleteConfirmationId === SPECIAL_ID_ALL_CHATS
{!!deleteConversationId && <ConfirmationModal
open onClose={() => setDeleteConversationId(null)} onPositive={handleConfirmedDeleteConversation}
confirmationText={deleteConversationId === SPECIAL_ID_WIPE_ALL
? 'Are you absolutely sure you want to delete ALL conversations? This action cannot be undone.'
: 'Are you sure you want to delete this conversation?'}
positiveActionText={deleteConfirmationId === SPECIAL_ID_ALL_CHATS
positiveActionText={deleteConversationId === SPECIAL_ID_WIPE_ALL
? 'Yes, delete all'
: 'Delete conversation'}
/>}
+146 -75
@@ -4,110 +4,173 @@ import { shallow } from 'zustand/shallow';
import { Box, List } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
import { useChatLLM } from '~/modules/llms/store-llms';
import type { DiagramConfig } from '~/modules/aifn/digrams/DiagramsModal';
import { createDMessage, DMessage, useChatStore } from '~/common/state/store-chats';
import { useUIPreferencesStore } from '~/common/state/store-ui';
import { ShortcutKeyName, useGlobalShortcut } from '~/common/components/useGlobalShortcut';
import { InlineError } from '~/common/components/InlineError';
import { createDMessage, DConversationId, DMessage, getConversation, useChatStore } from '~/common/state/store-chats';
import { openLayoutPreferences } from '~/common/layout/store-applayout';
import { useCapabilityElevenLabs, useCapabilityProdia } from '~/common/components/useCapabilities';
import { ChatMessage } from './message/ChatMessage';
import { ChatMessageMemo } from './message/ChatMessage';
import { CleanerMessage, MessagesSelectionHeader } from './message/CleanerMessage';
import { PersonaSelector } from './persona-selector/PersonaSelector';
import { useChatShowSystemMessages } from '../store-app-chat';
/**
* A list of ChatMessages
*/
export function ChatMessageList(props: {
conversationId: string | null,
conversationId: DConversationId | null,
chatLLMContextTokens?: number,
isMessageSelectionMode: boolean, setIsMessageSelectionMode: (isMessageSelectionMode: boolean) => void,
onExecuteChatHistory: (conversationId: string, history: DMessage[]) => void,
onImagineFromText: (conversationId: string, userText: string) => void,
sx?: SxProps
onConversationBranch: (conversationId: DConversationId, messageId: string) => void,
onConversationExecuteHistory: (conversationId: DConversationId, history: DMessage[]) => void,
onTextDiagram: (diagramConfig: DiagramConfig | null) => Promise<any>,
onTextImagine: (conversationId: DConversationId, selectedText: string) => Promise<any>,
onTextSpeak: (selectedText: string) => Promise<any>,
sx?: SxProps,
}) {
// state
const [isImagining, setIsImagining] = React.useState(false);
const [isSpeaking, setIsSpeaking] = React.useState(false);
const [selectedMessages, setSelectedMessages] = React.useState<Set<string>>(new Set());
// external state
const showSystemMessages = useUIPreferencesStore(state => state.showSystemMessages);
const { messages, editMessage, deleteMessage, historyTokenCount } = useChatStore(state => {
const [showSystemMessages] = useChatShowSystemMessages();
const { conversationMessages, historyTokenCount, editMessage, deleteMessage, setMessages } = useChatStore(state => {
const conversation = state.conversations.find(conversation => conversation.id === props.conversationId);
return {
messages: conversation ? conversation.messages : [],
editMessage: state.editMessage, deleteMessage: state.deleteMessage,
conversationMessages: conversation ? conversation.messages : [],
historyTokenCount: conversation ? conversation.tokenCount : 0,
deleteMessage: state.deleteMessage,
editMessage: state.editMessage,
setMessages: state.setMessages,
};
}, shallow);
const { chatLLM } = useChatLLM();
const { mayWork: isImaginable } = useCapabilityProdia();
const { mayWork: isSpeakable } = useCapabilityElevenLabs();
const handleMessageDelete = (messageId: string) =>
props.conversationId && deleteMessage(props.conversationId, messageId);
// derived state
const { conversationId, onConversationBranch, onConversationExecuteHistory, onTextDiagram, onTextImagine, onTextSpeak } = props;
const handleMessageEdit = (messageId: string, newText: string) =>
props.conversationId && editMessage(props.conversationId, messageId, { text: newText }, true);
const handleImagineFromText = (messageText: string) =>
props.conversationId && props.onImagineFromText(props.conversationId, messageText);
const handleRestartFromMessage = (messageId: string, offset: number) => {
const truncatedHistory = messages.slice(0, messages.findIndex(m => m.id === messageId) + offset + 1);
props.conversationId && props.onExecuteChatHistory(props.conversationId, truncatedHistory);
};
// text actions
const handleRunExample = (text: string) =>
props.conversationId && props.onExecuteChatHistory(props.conversationId, [...messages, createDMessage('user', text)]);
conversationId && onConversationExecuteHistory(conversationId, [...conversationMessages, createDMessage('user', text)]);
// hide system messages if the user chooses so
// NOTE: reverse is because we'll use flexDirection: 'column-reverse' to auto-snap-to-bottom
const filteredMessages = messages.filter(m => m.role !== 'system' || showSystemMessages).reverse();
// message menu methods proxy
// when there are no messages, show the purpose selector
if (!filteredMessages.length)
return props.conversationId ? (
<Box sx={props.sx || {}}>
<PersonaSelector conversationId={props.conversationId} runExample={handleRunExample} />
</Box>
) : null;
const handleConversationBranch = React.useCallback((messageId: string) => {
conversationId && onConversationBranch(conversationId, messageId);
}, [conversationId, onConversationBranch]);
const handleConversationRestartFrom = React.useCallback((messageId: string, offset: number) => {
const messages = getConversation(conversationId)?.messages;
if (messages) {
const truncatedHistory = messages.slice(0, messages.findIndex(m => m.id === messageId) + offset + 1);
conversationId && onConversationExecuteHistory(conversationId, truncatedHistory);
}
}, [conversationId, onConversationExecuteHistory]);
const handleConversationTruncate = React.useCallback((messageId: string) => {
const messages = getConversation(conversationId)?.messages;
if (conversationId && messages) {
const truncatedHistory = messages.slice(0, messages.findIndex(m => m.id === messageId) + 1);
setMessages(conversationId, truncatedHistory);
}
}, [conversationId, setMessages]);
const handleMessageDelete = React.useCallback((messageId: string) => {
conversationId && deleteMessage(conversationId, messageId);
}, [conversationId, deleteMessage]);
const handleMessageEdit = React.useCallback((messageId: string, newText: string) => {
conversationId && editMessage(conversationId, messageId, { text: newText }, true);
}, [conversationId, editMessage]);
const handleTextDiagram = React.useCallback(async (messageId: string, text: string) => {
conversationId && await onTextDiagram({ conversationId: conversationId, messageId, text });
}, [conversationId, onTextDiagram]);
const handleTextImagine = React.useCallback(async (text: string) => {
if (!isImaginable)
return openLayoutPreferences(2);
if (conversationId) {
setIsImagining(true);
await onTextImagine(conversationId, text);
setIsImagining(false);
}
}, [conversationId, isImaginable, onTextImagine]);
const handleTextSpeak = React.useCallback(async (text: string) => {
if (!isSpeakable)
return openLayoutPreferences(3);
setIsSpeaking(true);
await onTextSpeak(text);
setIsSpeaking(false);
}, [isSpeakable, onTextSpeak]);
- const handleToggleSelected = (messageId: string, selected: boolean) => {
+ // operate on the local selection set
+ const handleSelectAll = (selected: boolean) => {
+ const newSelected = new Set<string>();
+ if (selected)
+ for (const message of conversationMessages)
+ newSelected.add(message.id);
+ setSelectedMessages(newSelected);
+ };
+ const handleSelectMessage = (messageId: string, selected: boolean) => {
const newSelected = new Set(selectedMessages);
selected ? newSelected.add(messageId) : newSelected.delete(messageId);
setSelectedMessages(newSelected);
};
- const handleSelectAllMessages = (selected: boolean) => {
- const newSelected = new Set<string>();
- if (selected)
- for (const message of messages)
- newSelected.add(message.id);
- setSelectedMessages(newSelected);
- };
- const handleDeleteSelectedMessages = () => {
- if (props.conversationId)
+ const handleSelectionDelete = () => {
+ if (conversationId)
for (const selectedMessage of selectedMessages)
- deleteMessage(props.conversationId, selectedMessage);
+ deleteMessage(conversationId, selectedMessage);
setSelectedMessages(new Set());
};
useGlobalShortcut(props.isMessageSelectionMode && ShortcutKeyName.Esc, false, false, false, () => {
props.setIsMessageSelectionMode(false);
});
// scrollbar style
// const scrollbarStyle: SxProps = {
// '&::-webkit-scrollbar': {
// md: {
// width: 8,
// background: theme.palette.neutral.plainHoverBg,
// },
// },
// '&::-webkit-scrollbar-thumb': {
// background: theme.palette.neutral.solidBg,
// borderRadius: 6,
// },
// '&::-webkit-scrollbar-thumb:hover': {
// background: theme.palette.neutral.solidHoverBg,
// },
// };
// text-diff functionality, find the messages to diff with
const { diffMessage, diffText } = React.useMemo(() => {
const [msgB, msgA] = conversationMessages.filter(m => m.role === 'assistant').reverse();
if (msgB?.text && msgA?.text && !msgB?.typing) {
const textA = msgA.text, textB = msgB.text;
const lenA = textA.length, lenB = textB.length;
if (lenA > 80 && lenB > 80 && lenA > lenB / 3 && lenB > lenA / 3)
return { diffMessage: msgB, diffText: textA };
}
return { diffMessage: undefined, diffText: undefined };
}, [conversationMessages]);
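The memo above only pairs the last two assistant messages for diffing when both texts are substantial and of comparable length (each more than a third of the other). A framework-free sketch of that length heuristic, with a function name of our choosing rather than one from the source:

```typescript
// Sketch of the diff-pairing gate above: only diff two texts when both
// exceed 80 chars and neither is more than ~3x longer than the other.
function shouldDiff(textA: string, textB: string): boolean {
  const lenA = textA.length, lenB = textB.length;
  return lenA > 80 && lenB > 80 && lenA > lenB / 3 && lenB > lenA / 3;
}
```

The asymmetry guard keeps the diff readable: comparing a short reply against a long one produces mostly noise.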
// no content: show the persona selector
const filteredMessages = conversationMessages
.filter(m => m.role !== 'system' || showSystemMessages) // hide the System message if the user chooses to
.reverse(); // 'reverse' is because flexDirection: 'column-reverse' to auto-snap-to-bottom
if (!filteredMessages.length)
return (
<Box sx={{ ...props.sx }}>
{conversationId
? <PersonaSelector conversationId={conversationId} runExample={handleRunExample} />
: <InlineError severity='info' error='Select a conversation' sx={{ m: 2 }} />}
</Box>
);
return (
<List sx={{
@@ -115,27 +178,35 @@ export function ChatMessageList(props: {
// this makes sure that the window is scrolled to the bottom (column-reverse)
display: 'flex', flexDirection: 'column-reverse',
// fix for the double-border on the last message (one by the composer, one to the bottom of the message)
- marginBottom: '-1px',
+ // marginBottom: '-1px',
}}>
{filteredMessages.map((message, idx) =>
props.isMessageSelectionMode ? (
<CleanerMessage
- key={'sel-' + message.id} message={message}
- isBottom={idx === 0} remainingTokens={(chatLLM ? chatLLM.contextTokens : 0) - historyTokenCount}
- selected={selectedMessages.has(message.id)} onToggleSelected={handleToggleSelected}
+ key={'sel-' + message.id}
+ message={message}
+ isBottom={idx === 0} remainingTokens={(props.chatLLMContextTokens || 0) - historyTokenCount}
+ selected={selectedMessages.has(message.id)} onToggleSelected={handleSelectMessage}
/>
) : (
- <ChatMessage
- key={'msg-' + message.id} message={message}
+ <ChatMessageMemo
+ key={'msg-' + message.id}
+ message={message}
+ diffPreviousText={message === diffMessage ? diffText : undefined}
isBottom={idx === 0}
- onMessageDelete={() => handleMessageDelete(message.id)}
- onMessageEdit={newText => handleMessageEdit(message.id, newText)}
- onMessageRunFrom={(offset: number) => handleRestartFromMessage(message.id, offset)}
- onImagine={handleImagineFromText}
isImagining={isImagining} isSpeaking={isSpeaking}
+ onConversationBranch={handleConversationBranch}
+ onConversationRestartFrom={handleConversationRestartFrom}
+ onConversationTruncate={handleConversationTruncate}
+ onMessageDelete={handleMessageDelete}
+ onMessageEdit={handleMessageEdit}
+ onTextDiagram={handleTextDiagram}
+ onTextImagine={handleTextImagine}
+ onTextSpeak={handleTextSpeak}
/>
),
@@ -148,8 +219,8 @@ export function ChatMessageList(props: {
isBottom={filteredMessages.length === 0}
sumTokens={historyTokenCount}
onClose={() => props.setIsMessageSelectionMode(false)}
- onSelectAll={handleSelectAllMessages}
- onDeleteMessages={handleDeleteSelectedMessages}
+ onSelectAll={handleSelectAll}
+ onDeleteMessages={handleSelectionDelete}
/>
)}
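The selection handlers in the hunks above always copy the `Set` before changing it, so React state setters receive a new reference and re-render. A minimal sketch of that update logic as pure functions (names are ours, the immutable-copy pattern is from the diff):

```typescript
// Copy-on-write update of a selection set, as in handleSelectMessage:
// never mutate the Set held in state, return a fresh one.
function selectMessage(selected: Set<string>, messageId: string, on: boolean): Set<string> {
  const next = new Set(selected);
  on ? next.add(messageId) : next.delete(messageId);
  return next;
}

// Select-all / clear-all, as in handleSelectAll: rebuild from the ids.
function selectAll(messageIds: string[], on: boolean): Set<string> {
  return on ? new Set(messageIds) : new Set();
}
```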
@@ -5,7 +5,7 @@ import { Box, Grid, IconButton, Sheet, Stack, styled, Typography, useTheme } fro
import { SxProps } from '@mui/joy/styles/types';
import CloseIcon from '@mui/icons-material/Close';
- import { DEphemeral, useChatStore } from '~/common/state/store-chats';
+ import { DConversationId, DEphemeral, useChatStore } from '~/common/state/store-chats';
const StateLine = styled(Typography)(({ theme }) => ({
@@ -32,8 +32,6 @@ function PrimitiveRender({ name, value }: { name: string, value: string | number
return <StateLine><b>{name}</b>: <b>{value}</b></StateLine>;
else if (typeof value === 'boolean')
return <StateLine><b>{name}</b>: <b>{value ? 'true' : 'false'}</b></StateLine>;
- else if (typeof value === 'symbol')
- return <StateLine><b>{name}</b>: <b>{value.toString()}</b></StateLine>;
else
return <StateLine><b>{name}</b>: unknown?</StateLine>;
}
@@ -126,7 +124,7 @@ function EphemeralItem({ conversationId, ephemeral }: { conversationId: string,
}
- export function Ephemerals(props: { conversationId: string | null, sx?: SxProps }) {
+ export function Ephemerals(props: { conversationId: DConversationId | null, sx?: SxProps }) {
// global state
const theme = useTheme();
const ephemerals = useChatStore(state => {
@@ -1,77 +1,69 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
- import { Box, ListDivider, ListItemDecorator, MenuItem, Tooltip, Typography } from '@mui/joy';
+ import { Box, ListDivider, ListItemDecorator, MenuItem, Typography } from '@mui/joy';
import AddIcon from '@mui/icons-material/Add';
import DeleteOutlineIcon from '@mui/icons-material/DeleteOutline';
import FileUploadIcon from '@mui/icons-material/FileUpload';
- import { MAX_CONVERSATIONS, useChatStore } from '~/common/state/store-chats';
- import { setLayoutDrawerAnchor } from '~/common/layout/store-applayout';
+ import { DConversationId, useChatStore } from '~/common/state/store-chats';
+ import { OpenAIIcon } from '~/common/components/icons/OpenAIIcon';
+ import { closeLayoutDrawer } from '~/common/layout/store-applayout';
import { useUIPreferencesStore } from '~/common/state/store-ui';
+ import { useUXLabsStore } from '~/common/state/store-ux-labs';
- import { ConversationItem } from './ConversationItem';
- import { OpenAIIcon } from '~/modules/llms/openai/OpenAIIcon';
+ import { ChatNavigationItemMemo } from './ChatNavigationItem';
- type ListGrouping = 'off' | 'persona';
+ // type ListGrouping = 'off' | 'persona';
- export function ChatDrawerItems(props: {
- conversationId: string | null
- onDeleteAllConversations: () => void,
- onImportConversation: () => void,
+ export const ChatDrawerItemsMemo = React.memo(ChatDrawerItems);
+ function ChatDrawerItems(props: {
+ activeConversationId: DConversationId | null,
+ disableNewButton: boolean,
+ onConversationActivate: (conversationId: DConversationId) => void,
+ onConversationDelete: (conversationId: DConversationId, bypassConfirmation: boolean) => void,
+ onConversationImportDialog: () => void,
+ onConversationNew: () => void,
+ onConversationsDeleteAll: () => void,
}) {
// local state
- const [grouping] = React.useState<ListGrouping>('off');
+ const { onConversationDelete, onConversationNew, onConversationActivate } = props;
+ // const [grouping] = React.useState<ListGrouping>('off');
// external state
- const conversationIDs = useChatStore(state => state.conversations.map(
- conversation => conversation.id,
- ), shallow);
- const { topNewConversationId, maxChatMessages, setActiveConversationId, createConversation, deleteConversation } = useChatStore(state => ({
- topNewConversationId: state.conversations.length ? state.conversations[0].messages.length === 0 ? state.conversations[0].id : null : null,
- maxChatMessages: state.conversations.reduce((longest, conversation) => Math.max(longest, conversation.messages.length), 0),
- setActiveConversationId: state.setActiveConversationId,
- createConversation: state.createConversation,
- deleteConversation: state.deleteConversation,
- }), shallow);
- const { experimentalLabs, showSymbols } = useUIPreferencesStore(state => ({
- experimentalLabs: state.experimentalLabs,
- showSymbols: state.zenMode !== 'cleaner',
- }), shallow);
+ const conversations = useChatStore(state => state.conversations, shallow);
+ const showSymbols = useUIPreferencesStore(state => state.zenMode !== 'cleaner');
+ const labsEnhancedUI = useUXLabsStore(state => state.labsEnhancedUI);
// derived state
+ const maxChatMessages = conversations.reduce((longest, _c) => Math.max(longest, _c.messages.length), 1);
+ const totalConversations = conversations.length;
+ const hasChats = totalConversations > 0;
+ const singleChat = totalConversations === 1;
+ const softMaxReached = totalConversations >= 50;
- const hasChats = conversationIDs.length > 0;
- const singleChat = conversationIDs.length === 1;
- const maxReached = conversationIDs.length >= MAX_CONVERSATIONS;
+ const handleButtonNew = React.useCallback(() => {
+ onConversationNew();
+ closeLayoutDrawer();
+ }, [onConversationNew]);
- const closeDrawerMenu = () => setLayoutDrawerAnchor(null);
- const handleNew = () => {
- // if the first in the stack is a new conversation, just activate it
- if (topNewConversationId)
- setActiveConversationId(topNewConversationId);
- else
- createConversation();
- closeDrawerMenu();
- };
- const handleConversationActivate = React.useCallback((conversationId: string, closeMenu: boolean) => {
- setActiveConversationId(conversationId);
+ const handleConversationActivate = React.useCallback((conversationId: DConversationId, closeMenu: boolean) => {
+ onConversationActivate(conversationId);
if (closeMenu)
- closeDrawerMenu();
- }, [setActiveConversationId]);
+ closeLayoutDrawer();
+ }, [onConversationActivate]);
- const handleConversationDelete = React.useCallback((conversationId: string) => {
- if (!singleChat && conversationId)
- deleteConversation(conversationId);
- }, [deleteConversation, singleChat]);
+ const handleConversationDelete = React.useCallback((conversationId: DConversationId) => {
+ !singleChat && conversationId && onConversationDelete(conversationId, true);
+ }, [onConversationDelete, singleChat]);
- const NewPrefix = maxReached && <Tooltip title={`Maximum limit: ${MAX_CONVERSATIONS} chats. Proceeding will remove the oldest chat.`}><Box sx={{ mr: 2 }}></Box></Tooltip>;
// grouping
- let sortedIds = conversationIDs;
+ /*let sortedIds = conversationIDs;
if (grouping === 'persona') {
const conversations = useChatStore.getState().conversations;
@@ -88,7 +80,7 @@ export function ChatDrawerItems(props: {
// flatten grouped conversations
sortedIds = Object.values(groupedConversations).flat();
- }
+ }*/
return <>
@@ -98,9 +90,12 @@ export function ChatDrawerItems(props: {
{/* </Typography>*/}
{/*</ListItem>*/}
- <MenuItem disabled={maxReached || (!!topNewConversationId && topNewConversationId === props.conversationId)} onClick={handleNew}>
+ <MenuItem disabled={props.disableNewButton} onClick={handleButtonNew}>
<ListItemDecorator><AddIcon /></ListItemDecorator>
- {NewPrefix}New
+ <Box sx={{ flexGrow: 1, display: 'flex', justifyContent: 'space-between', gap: 1 }}>
+ New
+ {/*<KeyStroke combo='Ctrl + Alt + N' />*/}
+ </Box>
</MenuItem>
<ListDivider sx={{ mb: 0 }} />
@@ -120,22 +115,22 @@ export function ChatDrawerItems(props: {
{/* </ToggleButtonGroup>*/}
{/*</ListItem>*/}
- {sortedIds.map(conversationId =>
- <ConversationItem
- key={'c-id-' + conversationId}
- conversationId={conversationId}
- isActive={conversationId === props.conversationId}
- isSingle={singleChat}
+ {conversations.map(conversation =>
+ <ChatNavigationItemMemo
+ key={'nav-' + conversation.id}
+ conversation={conversation}
+ isActive={conversation.id === props.activeConversationId}
+ isLonely={singleChat}
+ maxChatMessages={(labsEnhancedUI || softMaxReached) ? maxChatMessages : 0}
showSymbols={showSymbols}
- maxChatMessages={experimentalLabs ? maxChatMessages : 0}
- conversationActivate={handleConversationActivate}
- conversationDelete={handleConversationDelete}
+ onConversationActivate={handleConversationActivate}
+ onConversationDelete={handleConversationDelete}
/>)}
</Box>
<ListDivider sx={{ mt: 0 }} />
- <MenuItem onClick={props.onImportConversation}>
+ <MenuItem onClick={props.onConversationImportDialog}>
<ListItemDecorator>
<FileUploadIcon />
</ListItemDecorator>
@@ -143,24 +138,12 @@ export function ChatDrawerItems(props: {
<OpenAIIcon sx={{ fontSize: 'xl', ml: 'auto' }} />
</MenuItem>
- <MenuItem disabled={!hasChats} onClick={props.onDeleteAllConversations}>
+ <MenuItem disabled={!hasChats} onClick={props.onConversationsDeleteAll}>
<ListItemDecorator><DeleteOutlineIcon /></ListItemDecorator>
<Typography>
- Delete all
+ Delete {totalConversations >= 2 ? `all ${totalConversations} chats` : 'chat'}
</Typography>
</MenuItem>
- {/*<ListItem>*/}
- {/* <Typography level='body-sm'>*/}
- {/* Scratchpad*/}
- {/* </Typography>*/}
- {/*</ListItem>*/}
- {/*<MenuItem>*/}
- {/* <ListItemDecorator />*/}
- {/* <Typography sx={{ opacity: 0.5 }}>*/}
- {/* Feature <Link href={`${Brand.URIs.OpenRepo}/issues/17`} target='_blank'>#17</Link>*/}
- {/* </Typography>*/}
- {/*</MenuItem>*/}
</>;
}
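The drawer refactor above replaces per-field store selectors with a single `conversations` subscription plus locally derived values. A hypothetical sketch of that derivation (function and interface names are ours; the floor of 1 for `maxChatMessages` and the soft limit of 50 follow the diff):

```typescript
// Derive the drawer's display state from the conversations array in one
// pass, as the refactored ChatDrawerItems does after subscribing to the
// whole list instead of many narrow selectors.
interface ConvLike { messages: unknown[] }

function deriveDrawerState(conversations: ConvLike[]) {
  // longest message count across chats, floored at 1 as in the diff
  const maxChatMessages = conversations.reduce((longest, c) => Math.max(longest, c.messages.length), 1);
  return {
    maxChatMessages,
    hasChats: conversations.length > 0,
    singleChat: conversations.length === 1,
    softMaxReached: conversations.length >= 50, // soft cap replaces MAX_CONVERSATIONS
  };
}
```

Deriving on every render is cheap here and avoids keeping redundant state in the store.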
@@ -1,92 +1,26 @@
import * as React from 'react';
- import { shallow } from 'zustand/shallow';
- import { ListItemButton, ListItemDecorator, Typography } from '@mui/joy';
- import BuildCircleIcon from '@mui/icons-material/BuildCircle';
- import SettingsIcon from '@mui/icons-material/Settings';
+ import type { DConversationId } from '~/common/state/store-chats';
- import { DLLMId, DModelSourceId } from '~/modules/llms/llm.types';
- import { SystemPurposeId, SystemPurposes } from '../../../../data';
- import { useModelsStore } from '~/modules/llms/store-llms';
- import { AppBarDropdown, DropdownItems } from '~/common/layout/AppBarDropdown';
- import { useChatStore } from '~/common/state/store-chats';
- import { useUIPreferencesStore, useUIStateStore } from '~/common/state/store-ui';
+ import { useChatLLMDropdown } from './useLLMDropdown';
+ import { usePersonaIdDropdown } from './usePersonaDropdown';
export function ChatDropdowns(props: {
- conversationId: string | null
+ conversationId: DConversationId | null
}) {
- // external state
- const { chatLLMId, setChatLLMId, llms } = useModelsStore(state => ({
- chatLLMId: state.chatLLMId,
- setChatLLMId: state.setChatLLMId,
- llms: state.llms,
- }), shallow);
- const { zenMode } = useUIPreferencesStore(state => ({ zenMode: state.zenMode }), shallow);
- const { systemPurposeValue, setSystemPurposeId } = useChatStore(state => {
- const conversation = state.conversations.find(conversation => conversation.id === props.conversationId);
- return {
- systemPurposeValue: conversation?.systemPurposeId ?? null,
- setSystemPurposeId: state.setSystemPurposeId,
- };
- }, shallow);
- const { openLLMOptions, openModelsSetup } = useUIStateStore(state => ({
- openLLMOptions: state.openLLMOptions, openModelsSetup: state.openModelsSetup,
- }), shallow);
- const handleChatModelChange = (event: any, value: DLLMId | null) =>
- value && props.conversationId && setChatLLMId(value);
- const handleSystemPurposeChange = (event: any, value: SystemPurposeId | null) =>
- value && props.conversationId && setSystemPurposeId(props.conversationId, value);
- const handleOpenLLMOptions = () => chatLLMId && openLLMOptions(chatLLMId);
- // build model menu items, filtering-out hidden models, and add Source separators
- const llmItems: DropdownItems = {};
- let prevSourceId: DModelSourceId | null = null;
- for (const llm of llms) {
- if (!llm.hidden || llm.id === chatLLMId) {
- if (!prevSourceId || llm.sId !== prevSourceId) {
- if (prevSourceId)
- llmItems[`sep-${llm.id}`] = { type: 'separator', title: llm.sId };
- prevSourceId = llm.sId;
- }
- llmItems[llm.id] = { title: llm.label };
- }
- }
+ // state
+ const { chatLLMDropdown } = useChatLLMDropdown();
+ const { personaDropdown } = usePersonaIdDropdown(props.conversationId);
return <>
- {/* Model selector */}
- <AppBarDropdown
- items={llmItems}
- value={chatLLMId} onChange={handleChatModelChange}
- placeholder='Models …'
- appendOption={<>
- {chatLLMId && (
- <ListItemButton key='menu-opt' onClick={handleOpenLLMOptions}>
- <ListItemDecorator><SettingsIcon color='success' /></ListItemDecorator><Typography>Options</Typography>
- </ListItemButton>
- )}
- <ListItemButton key='menu-llms' onClick={openModelsSetup}>
- <ListItemDecorator><BuildCircleIcon color='success' /></ListItemDecorator><Typography>Models</Typography>
- </ListItemButton>
- </>}
- />
+ {chatLLMDropdown}
- {/* Persona selector */}
- <AppBarDropdown
- items={SystemPurposes} showSymbols={zenMode !== 'cleaner'}
- value={systemPurposeValue} onChange={handleSystemPurposeChange}
- placeholder='Personas …'
- />
+ {personaDropdown}
</>;
}
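The hunk above removes the inline `llmItems` construction in favor of the `useChatLLMDropdown` hook, but the grouping logic it deletes is a useful pattern: walk a flat model list and emit a separator whenever the source changes. A framework-free sketch (the `LLMLike`/`Item` shapes are our assumptions, the loop mirrors the deleted code):

```typescript
// Build dropdown items from a flat model list, skipping hidden models
// (unless currently selected) and inserting a separator at each change
// of source id, as the removed ChatDropdowns loop did.
interface LLMLike { id: string; sId: string; label: string; hidden?: boolean }
type Item = { type?: 'separator'; title: string };

function buildLlmItems(llms: LLMLike[], chatLLMId: string | null): Record<string, Item> {
  const items: Record<string, Item> = {};
  let prevSourceId: string | null = null;
  for (const llm of llms) {
    if (!llm.hidden || llm.id === chatLLMId) {
      if (!prevSourceId || llm.sId !== prevSourceId) {
        if (prevSourceId) // no separator before the first group
          items[`sep-${llm.id}`] = { type: 'separator', title: llm.sId };
        prevSourceId = llm.sId;
      }
      items[llm.id] = { title: llm.label };
    }
  }
  return items;
}
```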
@@ -1,7 +1,6 @@
import * as React from 'react';
import { shallow } from 'zustand/shallow';
- import { ListDivider, ListItemDecorator, MenuItem, Switch } from '@mui/joy';
+ import { Box, ListDivider, ListItemDecorator, MenuItem, Switch } from '@mui/joy';
import CheckBoxOutlineBlankOutlinedIcon from '@mui/icons-material/CheckBoxOutlineBlankOutlined';
import CheckBoxOutlinedIcon from '@mui/icons-material/CheckBoxOutlined';
import ClearIcon from '@mui/icons-material/Clear';
@@ -10,59 +9,67 @@ import FileDownloadIcon from '@mui/icons-material/FileDownload';
import ForkRightIcon from '@mui/icons-material/ForkRight';
import SettingsSuggestIcon from '@mui/icons-material/SettingsSuggest';
- import { setLayoutMenuAnchor } from '~/common/layout/store-applayout';
- import { useUIPreferencesStore } from '~/common/state/store-ui';
+ import type { DConversationId } from '~/common/state/store-chats';
+ import { KeyStroke } from '~/common/components/KeyStroke';
+ import { closeLayoutMenu } from '~/common/layout/store-applayout';
+ import { useUICounter } from '~/common/state/store-ui';
+ import { useChatShowSystemMessages } from '../../store-app-chat';
export function ChatMenuItems(props: {
- conversationId: string | null, isConversationEmpty: boolean,
- isMessageSelectionMode: boolean, setIsMessageSelectionMode: (isMessageSelectionMode: boolean) => void,
- onClearConversation: (conversationId: string) => void,
- onDuplicateConversation: (conversationId: string) => void,
- onExportConversation: (conversationId: string | null) => void,
- onFlattenConversation: (conversationId: string) => void,
+ conversationId: DConversationId | null,
+ hasConversations: boolean,
+ isConversationEmpty: boolean,
+ isMessageSelectionMode: boolean,
+ setIsMessageSelectionMode: (isMessageSelectionMode: boolean) => void,
+ onConversationBranch: (conversationId: DConversationId, messageId: string | null) => void,
+ onConversationClear: (conversationId: DConversationId) => void,
+ onConversationExport: (conversationId: DConversationId | null) => void,
+ onConversationFlatten: (conversationId: DConversationId) => void,
}) {
// external state
- const { showSystemMessages, setShowSystemMessages } = useUIPreferencesStore(state => ({
- showSystemMessages: state.showSystemMessages, setShowSystemMessages: state.setShowSystemMessages,
- }), shallow);
+ const { touch: shareTouch } = useUICounter('export-share');
+ const [showSystemMessages, setShowSystemMessages] = useChatShowSystemMessages();
// derived state
const disabled = !props.conversationId || props.isConversationEmpty;
- const closeContextMenu = () => setLayoutMenuAnchor(null);
- const handleSystemMessagesToggle = () => setShowSystemMessages(!showSystemMessages);
- const handleConversationExport = (e: React.MouseEvent<HTMLDivElement>) => {
- e.stopPropagation();
- closeContextMenu();
- props.onExportConversation(!disabled ? props.conversationId : null);
+ const closeMenu = (event: React.MouseEvent) => {
+ event.stopPropagation();
+ closeLayoutMenu();
};
- const handleConversationDuplicate = (e: React.MouseEvent<HTMLDivElement>) => {
- e.stopPropagation();
- closeContextMenu();
- props.conversationId && props.onDuplicateConversation(props.conversationId);
+ const handleConversationClear = (event: React.MouseEvent<HTMLDivElement>) => {
+ closeMenu(event);
+ props.conversationId && props.onConversationClear(props.conversationId);
};
- const handleConversationFlatten = (e: React.MouseEvent<HTMLDivElement>) => {
- e.stopPropagation();
- closeContextMenu();
- props.conversationId && props.onFlattenConversation(props.conversationId);
+ const handleConversationBranch = (event: React.MouseEvent<HTMLDivElement>) => {
+ closeMenu(event);
+ props.conversationId && props.onConversationBranch(props.conversationId, null);
};
- const handleToggleMessageSelectionMode = (e: React.MouseEvent) => {
- e.stopPropagation();
- closeContextMenu();
+ const handleConversationExport = (event: React.MouseEvent<HTMLDivElement>) => {
+ closeMenu(event);
+ props.onConversationExport(!disabled ? props.conversationId : null);
+ shareTouch();
+ };
+ const handleConversationFlatten = (event: React.MouseEvent<HTMLDivElement>) => {
+ closeMenu(event);
+ props.conversationId && props.onConversationFlatten(props.conversationId);
+ };
+ const handleToggleMessageSelectionMode = (event: React.MouseEvent) => {
+ closeMenu(event);
props.setIsMessageSelectionMode(!props.isMessageSelectionMode);
};
- const handleConversationClear = (e: React.MouseEvent<HTMLDivElement>) => {
- e.stopPropagation();
- props.conversationId && props.onClearConversation(props.conversationId);
- };
+ const handleToggleSystemMessages = () => setShowSystemMessages(!showSystemMessages);
return <>
@@ -72,29 +79,21 @@ export function ChatMenuItems(props: {
{/* </Typography>*/}
{/*</ListItem>*/}
- <MenuItem onClick={handleSystemMessagesToggle}>
+ <MenuItem onClick={handleToggleSystemMessages}>
<ListItemDecorator><SettingsSuggestIcon /></ListItemDecorator>
System message
- <Switch checked={showSystemMessages} onChange={handleSystemMessagesToggle} sx={{ ml: 'auto' }} />
+ <Switch checked={showSystemMessages} onChange={handleToggleSystemMessages} sx={{ ml: 'auto' }} />
</MenuItem>
<ListDivider inset='startContent' />
- <MenuItem disabled={disabled} onClick={handleConversationDuplicate}>
- <ListItemDecorator>
- {/*<Badge size='sm' color='success'>*/}
- <ForkRightIcon color='success' />
- {/*</Badge>*/}
- </ListItemDecorator>
- Duplicate
+ <MenuItem disabled={disabled} onClick={handleConversationBranch}>
+ <ListItemDecorator><ForkRightIcon /></ListItemDecorator>
+ Branch
</MenuItem>
<MenuItem disabled={disabled} onClick={handleConversationFlatten}>
- <ListItemDecorator>
- {/*<Badge size='sm' color='success'>*/}
- <CompressIcon color='success' />
- {/*</Badge>*/}
- </ListItemDecorator>
+ <ListItemDecorator><CompressIcon color='success' /></ListItemDecorator>
Flatten
</MenuItem>
@@ -107,16 +106,19 @@ export function ChatMenuItems(props: {
</span>
</MenuItem>
- <MenuItem onClick={handleConversationExport}>
+ <MenuItem disabled={!props.hasConversations} onClick={handleConversationExport}>
<ListItemDecorator>
<FileDownloadIcon />
</ListItemDecorator>
- Export
+ Share / Export ...
</MenuItem>
<MenuItem disabled={disabled} onClick={handleConversationClear}>
<ListItemDecorator><ClearIcon /></ListItemDecorator>
- Reset
+ <Box sx={{ flexGrow: 1, display: 'flex', justifyContent: 'space-between', gap: 1 }}>
+ Reset
+ {!disabled && <KeyStroke combo='Ctrl + Alt + X' />}
+ </Box>
</MenuItem>
</>;
@@ -1,107 +1,107 @@
import * as React from 'react';
- import { shallow } from 'zustand/shallow';
import { Avatar, Box, IconButton, ListItemDecorator, MenuItem, Typography } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
import CloseIcon from '@mui/icons-material/Close';
import DeleteOutlineIcon from '@mui/icons-material/DeleteOutline';
- import { DConversation, useChatStore } from '~/common/state/store-chats';
- import { InlineTextarea } from '~/common/components/InlineTextarea';
- import { useUIPreferencesStore } from '~/common/state/store-ui';
import { SystemPurposes } from '../../../../data';
+ import { InlineTextarea } from '~/common/components/InlineTextarea';
+ import { conversationTitle, DConversation, DConversationId, useChatStore } from '~/common/state/store-chats';
+ import { useUIPreferencesStore } from '~/common/state/store-ui';
const DEBUG_CONVERSATION_IDs = false;
- const conversationTitle = (conversation: DConversation): string =>
- conversation.userTitle || conversation.autoTitle || 'new conversation'; // 👋💬🗨️
+ export const ChatNavigationItemMemo = React.memo(ChatNavigationItem);
- export function ConversationItem(props: {
- conversationId: string,
- isActive: boolean, isSingle: boolean, showSymbols: boolean, maxChatMessages: number,
- conversationActivate: (conversationId: string, closeMenu: boolean) => void,
- conversationDelete: (conversationId: string) => void,
+ function ChatNavigationItem(props: {
+ conversation: DConversation,
+ isActive: boolean,
+ isLonely: boolean,
+ maxChatMessages: number,
+ showSymbols: boolean,
+ onConversationActivate: (conversationId: DConversationId, closeMenu: boolean) => void,
+ onConversationDelete: (conversationId: DConversationId) => void,
}) {
+ const { conversation, isActive } = props;
// state
const [isEditingTitle, setIsEditingTitle] = React.useState(false);
const [deleteArmed, setDeleteArmed] = React.useState(false);
// external state
const doubleClickToEdit = useUIPreferencesStore(state => state.doubleClickToEdit);
- // bind to conversation
- const cState = useChatStore(state => {
- const conversation = state.conversations.find(conversation => conversation.id === props.conversationId);
- return conversation && {
- isNew: conversation.messages.length === 0,
- messageCount: conversation.messages.length,
- assistantTyping: !!conversation.abortController,
- systemPurposeId: conversation.systemPurposeId,
- title: conversationTitle(conversation),
- setUserTitle: state.setUserTitle,
- };
- }, shallow);
+ // derived state
+ const { id: conversationId } = conversation;
+ const isNew = conversation.messages.length === 0;
+ const messageCount = conversation.messages.length;
+ const assistantTyping = !!conversation.abortController;
+ const systemPurposeId = conversation.systemPurposeId;
+ const title = conversationTitle(conversation, 'new conversation');
+ // const setUserTitle = state.setUserTitle;
// auto-close the arming menu when clicking away
// NOTE: there currently is a bug (race condition) where the menu closes on a new item right after opening
// because the isActive prop is not yet updated
React.useEffect(() => {
- if (deleteArmed && !props.isActive)
+ if (deleteArmed && !isActive)
setDeleteArmed(false);
- }, [deleteArmed, props.isActive]);
+ }, [deleteArmed, isActive]);
- // sanity check: shouldn't happen, but just in case
- if (!cState) return null;
- const { isNew, messageCount, assistantTyping, setUserTitle, systemPurposeId, title } = cState;
- const handleActivate = () => props.conversationActivate(props.conversationId, true);
+ const handleConversationActivate = () => props.onConversationActivate(conversationId, true);
- const handleEditBegin = () => setIsEditingTitle(true);
+ const handleTitleEdit = () => setIsEditingTitle(true);
- const handleEdited = (text: string) => {
+ const handleTitleEdited = (text: string) => {
setIsEditingTitle(false);
- setUserTitle(props.conversationId, text);
+ useChatStore.getState().setUserTitle(conversationId, text);
};
- const handleDeleteBegin = (e: React.MouseEvent) => {
- e.stopPropagation();
- if (!props.isActive)
- props.conversationActivate(props.conversationId, false);
+ const handleDeleteButtonShow = (event: React.MouseEvent) => {
+ event.stopPropagation();
+ if (!isActive)
+ props.onConversationActivate(conversationId, false);
else
setDeleteArmed(true);
};
- const handleDeleteConfirm = (e: React.MouseEvent) => {
+ const handleDeleteButtonHide = () => setDeleteArmed(false);
+ const handleConversationDelete = (event: React.MouseEvent) => {
if (deleteArmed) {
setDeleteArmed(false);
- e.stopPropagation();
- props.conversationDelete(props.conversationId);
+ event.stopPropagation();
+ props.onConversationDelete(conversationId);
}
};
- const handleDeleteCancel = () => setDeleteArmed(false);
- const textSymbol = (systemPurposeId && SystemPurposes[systemPurposeId]?.symbol) || '❓';
- const buttonSx: SxProps = { ml: 1, ...(props.isActive ? { color: 'white' } : {}) };
+ const textSymbol = SystemPurposes[systemPurposeId]?.symbol || '❓';
+ const buttonSx: SxProps = { ml: 1, ...(isActive ? { color: 'white' } : {}) };
const progress = props.maxChatMessages ? 100 * messageCount / props.maxChatMessages : 0;
return (
<MenuItem
- variant={props.isActive ? 'solid' : 'plain'} color='neutral'
- selected={props.isActive}
- onClick={handleActivate}
+ variant={isActive ? 'solid' : 'plain'} color='neutral'
+ selected={isActive}
+ onClick={handleConversationActivate}
sx={{
// py: 0,
position: 'relative',
border: 'none', // note, there's a default border of 1px and invisible.. hmm
'&:hover > button': { opacity: 1 },
...(isActive ? { bgcolor: 'red' } : {}),
}}
>
- {/* Optional prgoress bar */}
+ {/* Optional progress bar, underlay */}
{progress > 0 && (
<Box sx={{
backgroundColor: 'neutral.softActiveBg',
@@ -132,13 +132,13 @@ export function ConversationItem(props: {
{/* Text */}
{!isEditingTitle ? (
- <Box onDoubleClick={() => doubleClickToEdit ? handleEditBegin() : null} sx={{ flexGrow: 1 }}>
- {DEBUG_CONVERSATION_IDs ? props.conversationId.slice(0, 10) : title}{assistantTyping && '...'}
+ <Box onDoubleClick={() => doubleClickToEdit ? handleTitleEdit() : null} sx={{ flexGrow: 1 }}>
+ {DEBUG_CONVERSATION_IDs ? conversationId.slice(0, 10) : title}{assistantTyping && '...'}
</Box>
) : (
- <InlineTextarea initialText={title} onEdit={handleEdited} sx={{ ml: -1.5, mr: -0.5, flexGrow: 1 }} />
+ <InlineTextarea initialText={title} onEdit={handleTitleEdited} sx={{ ml: -1.5, mr: -0.5, flexGrow: 1 }} />
)}
@@ -154,21 +154,21 @@ export function ConversationItem(props: {
{/*</IconButton>*/}
{/* Delete Arming */}
- {!props.isSingle && !deleteArmed && (
+ {!props.isLonely && !deleteArmed && (
<IconButton
- variant={props.isActive ? 'solid' : 'outlined'} color='neutral'
+ variant={isActive ? 'solid' : 'outlined'} color='neutral'
size='sm' sx={{ opacity: { xs: 1, sm: 0 }, transition: 'opacity 0.3s', ...buttonSx }}
- onClick={handleDeleteBegin}>
+ onClick={handleDeleteButtonShow}>
<DeleteOutlineIcon />
</IconButton>
)}
{/* Delete / Cancel buttons */}
- {!props.isSingle && deleteArmed && <>
- <IconButton size='sm' variant='solid' color='danger' sx={buttonSx} onClick={handleDeleteConfirm}>
+ {!props.isLonely && deleteArmed && <>
+ <IconButton size='sm' variant='solid' color='danger' sx={buttonSx} onClick={handleConversationDelete}>
<DeleteOutlineIcon />
</IconButton>
- <IconButton size='sm' variant='solid' color='neutral' sx={buttonSx} onClick={handleDeleteCancel}>
+ <IconButton size='sm' variant='solid' color='neutral' sx={buttonSx} onClick={handleDeleteButtonHide}>
<CloseIcon />
</IconButton>
</>}
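The delete controls above implement a two-step "arm, then confirm" flow: the first click on the active item only arms the delete button, and a second click (or Cancel) resolves it. A minimal sketch of that interaction as a pure state transition (names and the `DeleteState` shape are ours, not from the source):

```typescript
// Two-step delete, mirroring handleDeleteButtonShow /
// handleConversationDelete above as a pure state machine.
type DeleteState = { armed: boolean; deleted: boolean };

function pressDelete(state: DeleteState, isActive: boolean): DeleteState {
  if (!state.armed)
    // first press: arm only on the active item (the real component
    // activates an inactive item instead of arming it)
    return isActive ? { ...state, armed: true } : state;
  // second press: confirm the deletion and disarm
  return { armed: false, deleted: true };
}

function cancelDelete(state: DeleteState): DeleteState {
  return { ...state, armed: false };
}
```

Keeping the armed flag local to the item is what makes the auto-disarm effect in the diff necessary when the item loses focus.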