Compare commits


18 Commits

Author SHA1 Message Date
Enrico Ros 6ae440d252 1.7.3: Patch release for Mistral support 2023-12-12 17:01:40 -08:00
Enrico Ros c0c724afc1 Mistral Platform: full support (Closes #273) 2023-12-12 16:39:06 -08:00
Enrico Ros a265112ce1 Mistral Platform: backend-configurable support (#273) 2023-12-12 16:39:06 -08:00
Enrico Ros 75605ed408 Dropdown: support model vendor icons 2023-12-12 16:39:06 -08:00
Enrico Ros ad38ff4157 LLMs: safer and smarter access 2023-12-12 16:39:06 -08:00
Enrico Ros 08c60e53b1 LLMs: reorder template params 2023-12-12 16:39:06 -08:00
Enrico Ros d0dcb2ac02 LLMs: getTransportAccess 2023-12-12 16:39:06 -08:00
Enrico Ros fbeb604b26 Update README.md 2023-12-12 03:42:05 -08:00
Enrico Ros c4f3b1df77 Update README.md 2023-12-12 03:40:44 -08:00
Enrico Ros 5a1f9caaac Roll rest 2023-12-12 03:16:35 -08:00
Enrico Ros 2fc70d5e95 Roll other dev deps 2023-12-12 03:12:43 -08:00
Enrico Ros 43adadef78 Roll Material/Joy/Next 2023-12-12 03:11:14 -08:00
Enrico Ros 96f6e7628b Roll Prisma 2023-12-12 03:08:10 -08:00
Enrico Ros 32ad82bcee Drag/Drop: do not remove the text from the source 2023-12-12 03:07:31 -08:00
Enrico Ros 3d72aec369 Roll pdfjs-dist 2023-12-12 02:58:06 -08:00
Enrico Ros d244ee2cca Update Docker image workflow: assume vX.Y.Z is the latest release (and carries the latest tag), so the 'stable' tag is removed; the 'main' branch keeps the development tag. 2023-12-12 01:38:57 -08:00
Enrico Ros cc8a235ae3 Bits 2023-12-12 01:21:43 -08:00
Enrico Ros ae348812de OpenRouter: improve showing of discounted models 2023-12-12 01:14:33 -08:00
47 changed files with 923 additions and 386 deletions
+1 -1
@@ -13,7 +13,7 @@ on:
push:
branches:
- main
- main-stable # Trigger on pushes to the main-stable branch
#- main-stable # Disabled as the v* tag is used for stable releases
tags:
- 'v*' # Trigger on version tags (e.g., v1.7.0)
+6 -5
@@ -1,8 +1,8 @@
# BIG-AGI 🧠✨
Welcome to big-AGI 👋, the GPT application for professionals that need form, function,
simplicity, and speed. Powered by the latest models from 7 vendors, including
open-source, `big-AGI` offers best-in-class Voice and Chat with AI Personas,
Welcome to big-AGI 👋, the GPT application for professionals that need function, form,
simplicity, and speed. Powered by the latest models from 7 vendors and
open-source model servers, `big-AGI` offers best-in-class Voice and Chat with AI Personas,
visualizations, coding, drawing, calling, and quite more -- all in a polished UX.
Pros use big-AGI. 🚀 Developers love big-AGI. 🤖
@@ -21,7 +21,7 @@ shows the current developments and future ideas.
- Got a suggestion? [_Add your roadmap ideas_](https://github.com/enricoros/big-agi/issues/new?&template=roadmap-request.md)
- Want to contribute? [_Pick up a task!_](https://github.com/users/enricoros/projects/4/views/4) - _easy_ to _pro_
### What's New in 1.7.2 · Dec 12, 2023 · Attachment Theory 🌟
### What's New in 1.7.3 · Dec 13, 2023 · Attachment Theory 🌟
- **Attachments System Overhaul**: Drag, paste, link, snap, text, images, PDFs and more. [#251](https://github.com/enricoros/big-agi/issues/251)
- **Desktop Webcam Capture**: Image capture now available as Labs feature. [#253](https://github.com/enricoros/big-agi/issues/253)
@@ -32,7 +32,8 @@ shows the current developments and future ideas.
- Latest Ollama and Oobabooga models
- For developers: **Password Protection**: HTTP Basic Auth. [Learn How](https://github.com/enricoros/big-agi/blob/main/docs/deploy-authentication.md)
- [1.7.1]: Improved Ollama chats. [#270](https://github.com/enricoros/big-agi/issues/270)
- [1.7.2]: Updated OpenRouter models (incl. Mixtral 8x7B)
- [1.7.2]: OpenRouter login & free models 🎁
- [1.7.3]: Mistral Platform support. [#273](https://github.com/enricoros/big-agi/issues/273)
### What's New in 1.6.0 - Nov 28, 2023
+3 -2
@@ -10,7 +10,7 @@ by release.
- work in progress: [big-AGI open roadmap](https://github.com/users/enricoros/projects/4/views/2), [help here](https://github.com/users/enricoros/projects/4/views/4)
- milestone: [1.8.0](https://github.com/enricoros/big-agi/milestone/8)
### What's New in 1.7.2 · Dec 11, 2023 · Attachment Theory 🌟
### What's New in 1.7.3 · Dec 13, 2023 · Attachment Theory 🌟
- **Attachments System Overhaul**: Drag, paste, link, snap, text, images, PDFs and more. [#251](https://github.com/enricoros/big-agi/issues/251)
- **Desktop Webcam Capture**: Image capture now available as Labs feature. [#253](https://github.com/enricoros/big-agi/issues/253)
@@ -21,7 +21,8 @@ by release.
- Latest Ollama and Oobabooga models
- For developers: **Password Protection**: HTTP Basic Auth. [Learn How](https://github.com/enricoros/big-agi/blob/main/docs/deploy-authentication.md)
- [1.7.1]: Improved Ollama chats. [#270](https://github.com/enricoros/big-agi/issues/270)
- [1.7.2]: Updated OpenRouter models (incl. Mixtral 8x7B)
- [1.7.2]: OpenRouter login & free models 🎁
- [1.7.3]: Mistral Platform support. [#273](https://github.com/enricoros/big-agi/issues/273)
### What's New in 1.6.0 - Nov 28, 2023 · Surf's Up
+2
@@ -24,6 +24,7 @@ AZURE_OPENAI_API_ENDPOINT=
AZURE_OPENAI_API_KEY=
ANTHROPIC_API_KEY=
ANTHROPIC_API_HOST=
MISTRAL_API_KEY=
OLLAMA_API_HOST=
OPENROUTER_API_KEY=
@@ -79,6 +80,7 @@ requiring the user to enter an API key
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key, see [config-azure-openai.md](config-azure-openai.md) | Optional, but if set `AZURE_OPENAI_API_ENDPOINT` must also be set |
| `ANTHROPIC_API_KEY` | The API key for Anthropic | Optional |
| `ANTHROPIC_API_HOST` | Changes the backend host for the Anthropic vendor, to enable platforms such as [config-aws-bedrock.md](config-aws-bedrock.md) | Optional |
| `MISTRAL_API_KEY` | The API key for Mistral | Optional |
| `OLLAMA_API_HOST` | Changes the backend host for the Ollama vendor. See [config-ollama.md](config-ollama.md) | |
| `OPENROUTER_API_KEY` | The API key for OpenRouter | Optional |
+485 -251
File diff suppressed because it is too large
+15 -15
@@ -1,6 +1,6 @@
{
"name": "big-agi",
"version": "1.7.2",
"version": "1.7.3",
"private": true,
"scripts": {
"dev": "next dev",
@@ -18,10 +18,10 @@
"@emotion/react": "^11.11.1",
"@emotion/server": "^11.11.0",
"@emotion/styled": "^11.11.0",
"@mui/icons-material": "^5.14.18",
"@mui/joy": "^5.0.0-beta.15",
"@next/bundle-analyzer": "^14.0.3",
"@prisma/client": "^5.6.0",
"@mui/icons-material": "^5.14.19",
"@mui/joy": "^5.0.0-beta.17",
"@next/bundle-analyzer": "^14.0.4",
"@prisma/client": "^5.7.0",
"@sanity/diff-match-patch": "^3.1.1",
"@t3-oss/env-nextjs": "^0.7.1",
"@tanstack/react-query": "^4.36.1",
@@ -33,8 +33,8 @@
"browser-fs-access": "^0.35.0",
"eventsource-parser": "^1.1.1",
"idb-keyval": "^6.2.1",
"next": "^14.0.3",
"pdfjs-dist": "4.0.189",
"next": "^14.0.4",
"pdfjs-dist": "4.0.269",
"plantuml-encoder": "^1.4.0",
"prismjs": "^1.29.0",
"react": "^18.2.0",
@@ -51,19 +51,19 @@
},
"devDependencies": {
"@cloudflare/puppeteer": "^0.0.5",
"@types/node": "^20.10.0",
"@types/node": "^20.10.4",
"@types/plantuml-encoder": "^1.4.2",
"@types/prismjs": "^1.26.3",
"@types/react": "^18.2.38",
"@types/react": "^18.2.43",
"@types/react-dom": "^18.2.17",
"@types/react-katex": "^3.0.3",
"@types/react-katex": "^3.0.4",
"@types/react-timeago": "^4.1.6",
"@types/uuid": "^9.0.7",
"eslint": "^8.54.0",
"eslint-config-next": "^14.0.3",
"prettier": "^3.1.0",
"prisma": "^5.6.0",
"typescript": "^5.3.2"
"eslint": "^8.55.0",
"eslint-config-next": "^14.0.4",
"prettier": "^3.1.1",
"prisma": "^5.7.0",
"typescript": "^5.3.3"
},
"engines": {
"node": "^20.0.0 || ^18.0.0"
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -23,14 +23,26 @@ function AppBarLLMDropdown(props: {
const llmItems: DropdownItems = {};
let prevSourceId: DModelSourceId | null = null;
for (const llm of props.llms) {
if (!llm.hidden || llm.id === props.chatLlmId) {
if (!prevSourceId || llm.sId !== prevSourceId) {
if (prevSourceId)
llmItems[`sep-${llm.id}`] = { type: 'separator', title: llm.sId };
prevSourceId = llm.sId;
}
llmItems[llm.id] = { title: llm.label };
// filter-out hidden models
if (!(!llm.hidden || llm.id === props.chatLlmId))
continue;
// add separators when changing sources
if (!prevSourceId || llm.sId !== prevSourceId) {
if (prevSourceId)
llmItems[`sep-${llm.id}`] = {
type: 'separator',
title: llm.sId,
};
prevSourceId = llm.sId;
}
// add the model item
llmItems[llm.id] = {
title: llm.label,
// icon: llm.id.startsWith('some vendor') ? <VendorIcon /> : undefined,
};
}
const handleChatLLMChange = (_event: any, value: DLLMId | null) => value && props.setChatLlmId(value);
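The refactored loop above filters hidden models, inserts a separator whenever the model source changes, and then appends the item. A standalone sketch of the same pattern (types and names are simplified stand-ins, not the app's actual ones):

```typescript
// Sketch of the "separator between groups" pattern used above.
interface Model { id: string; sId: string; label: string; hidden?: boolean; }
type Item = { title: string } | { type: 'separator'; title: string };

function buildItems(models: Model[], activeId: string | null): Record<string, Item> {
  const items: Record<string, Item> = {};
  let prevSourceId: string | null = null;
  for (const m of models) {
    // filter-out hidden models, unless currently selected
    if (m.hidden && m.id !== activeId) continue;
    // add a separator when the source changes (but not before the first group)
    if (m.sId !== prevSourceId) {
      if (prevSourceId !== null)
        items[`sep-${m.id}`] = { type: 'separator', title: m.sId };
      prevSourceId = m.sId;
    }
    // add the model item
    items[m.id] = { title: m.label };
  }
  return items;
}
```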
@@ -331,7 +331,8 @@ export function Composer(props: {
const handleOverlayDragOver = React.useCallback((e: React.DragEvent) => {
eatDragEvent(e);
// e.dataTransfer.dropEffect = 'copy';
// this makes sure we don't "transfer" (or move) the attachment, but we tell the sender we'll copy it
e.dataTransfer.dropEffect = 'copy';
}, [eatDragEvent]);
const handleOverlayDrop = React.useCallback(async (event: React.DragEvent) => {
+1 -1
@@ -7,7 +7,7 @@ import VisibilityIcon from '@mui/icons-material/Visibility';
import VisibilityOffIcon from '@mui/icons-material/VisibilityOff';
import { DLLMId, useModelsStore } from '~/modules/llms/store-llms';
import { findVendorById } from '~/modules/llms/vendors/vendor.registry';
import { findVendorById } from '~/modules/llms/vendors/vendors.registry';
import { FormLabelStart } from '~/common/components/forms/FormLabelStart';
import { GoodModal } from '~/common/components/GoodModal';
+2 -1
@@ -7,7 +7,7 @@ import VisibilityOffOutlinedIcon from '@mui/icons-material/VisibilityOffOutlined
import { DLLM, DModelSourceId, useModelsStore } from '~/modules/llms/store-llms';
import { IModelVendor } from '~/modules/llms/vendors/IModelVendor';
import { findVendorById } from '~/modules/llms/vendors/vendor.registry';
import { findVendorById } from '~/modules/llms/vendors/vendors.registry';
import { GoodTooltip } from '~/common/components/GoodTooltip';
import { openLayoutLLMOptions } from '~/common/layout/store-applayout';
@@ -109,6 +109,7 @@ export function ModelsList(props: {
<List variant='soft' size='sm' sx={{
borderRadius: 'sm',
pl: { xs: 0, md: 1 },
overflowY: 'auto',
}}>
{items}
</List>
+1 -1
@@ -4,7 +4,7 @@ import { shallow } from 'zustand/shallow';
import { Box, Checkbox, Divider } from '@mui/joy';
import { DModelSource, DModelSourceId, useModelsStore } from '~/modules/llms/store-llms';
import { createModelSourceForDefaultVendor, findVendorById } from '~/modules/llms/vendors/vendor.registry';
import { createModelSourceForDefaultVendor, findVendorById } from '~/modules/llms/vendors/vendors.registry';
import { GoodModal } from '~/common/components/GoodModal';
import { closeLayoutModelsSetup, openLayoutModelsSetup, useLayoutModelsSetup } from '~/common/layout/store-applayout';
@@ -7,7 +7,7 @@ import DeleteOutlineIcon from '@mui/icons-material/DeleteOutline';
import { type DModelSourceId, useModelsStore } from '~/modules/llms/store-llms';
import { type IModelVendor, type ModelVendorId } from '~/modules/llms/vendors/IModelVendor';
import { createModelSourceForVendor, findAllVendors, findVendorById } from '~/modules/llms/vendors/vendor.registry';
import { createModelSourceForVendor, findAllVendors, findVendorById } from '~/modules/llms/vendors/vendors.registry';
import { CloseableMenu } from '~/common/components/CloseableMenu';
import { ConfirmationModal } from '~/common/components/ConfirmationModal';
@@ -29,7 +29,7 @@ function vendorIcon(vendor: IModelVendor | null, greenMark: boolean) {
icon = <vendor.Icon />;
}
return (greenMark && icon)
? <Badge color='primary' size='sm' badgeContent=''>{icon}</Badge>
? <Badge color='success' size='sm' badgeContent=''>{icon}</Badge>
: icon;
}
@@ -92,7 +92,11 @@ export function ModelsSourceSelector(props: {
<ListItemDecorator>
{vendorIcon(vendor, !!vendor.hasBackendCap && vendor.hasBackendCap())}
</ListItemDecorator>
{vendor.name}{/*{sourceCount > 0 && ` (added)`}*/}
{vendor.name}
{/*{sourceCount > 0 && ` (added)`}*/}
{!!vendor.hasFreeModels && ` 🎁`}
{/*{!!vendor.instanceLimit && ` (${sourceCount}/${vendor.instanceLimit})`}*/}
{vendor.location === 'local' && <span style={{ opacity: 0.5 }}>local</span>}
</MenuItem>
),
};
+3 -2
@@ -67,7 +67,7 @@ export const NewsItems: NewsItem[] = [
],
},*/
{
versionCode: '1.7.2',
versionCode: '1.7.3',
versionName: 'Attachment Theory',
versionDate: new Date('2023-12-11T06:00:00Z'), // new Date().toISOString()
// versionDate: new Date('2023-12-10T12:00:00Z'), // 1.7.0
@@ -81,7 +81,8 @@ export const NewsItems: NewsItem[] = [
{ text: <>Optimized voice input and performance</> },
{ text: <>Latest Ollama and Oobabooga models</> },
{ text: <>1.7.1: Improved <B href={RIssues + '/270'}>Ollama chats</B></> },
{ text: <>1.7.2: Updated OpenRouter models</> },
{ text: <>1.7.2: Updated OpenRouter models 🎁</> },
{ text: <>1.7.3: <B href={RIssues + '/273'}>Mistral Platform</B> support</> },
],
},
{
+1 -1
@@ -23,7 +23,7 @@ export function GoodModal(props: {
const showBottomClose = !!props.onClose && props.hideBottomClose !== true;
return (
<Modal open={props.open} onClose={props.onClose}>
<ModalOverflow>
<ModalOverflow sx={{p:1}}>
<ModalDialog
sx={{
minWidth: { xs: 360, sm: 500, md: 600, lg: 700 },
@@ -0,0 +1,10 @@
import * as React from 'react';
import { SvgIcon } from '@mui/joy';
import { SxProps } from '@mui/joy/styles/types';
export function MistralIcon(props: { sx?: SxProps }) {
return <SvgIcon viewBox='0 0 24 24' width='24' height='24' strokeWidth={0} stroke='none' fill='currentColor' strokeLinecap='butt' strokeLinejoin='miter' {...props}>
<path d='m 2,2 v 4 4 V 14 v 4 4 h 4 v -4 -4 h 4 v 4 h 4 v -4 h 4 v 4 4 h 4 v -4 -4 -4 -4 V 2 h -4 v 4 h -4 v 4 h -4 v -4 H 6 V 2 Z' />
</SvgIcon>;
}
+14 -8
@@ -9,6 +9,7 @@ export type DropdownItems = Record<string, {
title: string,
symbol?: string,
type?: 'separator'
icon?: React.ReactNode,
}>;
@@ -71,20 +72,25 @@ export function AppBarDropdown<TValue extends string>(props: {
{!!props.prependOption && Object.keys(props.items).length >= 1 && <Divider />}
<Box sx={{ overflowY: 'auto' }}>
{Object.keys(props.items).map((key: string, idx: number) => <React.Fragment key={'key-' + idx}>
{props.items[key].type === 'separator'
? <ListDivider />
: <Option value={key} sx={{ whiteSpace: 'nowrap' }}>
{props.showSymbols && <ListItemDecorator sx={{ fontSize: 'xl' }}>{props.items[key]?.symbol + ' '}</ListItemDecorator>}
{props.items[key].title}
{Object.keys(props.items).map((key: string, idx: number) => {
const item = props.items[key];
if (item.type === 'separator')
return <ListDivider key={'key-' + idx} />;
return (
<Option key={'key-' + idx} value={key} sx={{ whiteSpace: 'nowrap' }}>
{props.showSymbols && <ListItemDecorator sx={{ fontSize: 'xl' }}>{item?.symbol + ' '}</ListItemDecorator>}
{props.showSymbols && !!item.icon && <ListItemDecorator>{item?.icon}</ListItemDecorator>}
{item.title}
{/*{key === props.value && (*/}
{/* <IconButton variant='soft' onClick={() => alert('aa')} sx={{ ml: 'auto' }}>*/}
{/* <SettingsIcon color='success' />*/}
{/* </IconButton>*/}
{/*)}*/}
</Option>
}
</React.Fragment>)}
);
})}
</Box>
{!!props.appendOption && Object.keys(props.items).length >= 1 && <ListDivider />}
+1 -1
@@ -14,7 +14,7 @@ export async function pdfToText(pdfBuffer: ArrayBuffer): Promise<string> {
const { getDocument, GlobalWorkerOptions } = await import('pdfjs-dist');
// Set the worker script path
GlobalWorkerOptions.workerSrc = '/workers/pdf.worker.min.js';
GlobalWorkerOptions.workerSrc = '/workers/pdf.worker.min.mjs';
const pdf = await getDocument(pdfBuffer).promise;
const textPages: string[] = []; // Initialize an array to hold text from all pages
+4 -1
@@ -1,5 +1,7 @@
import { z } from 'zod';
import type { BackendCapabilities } from '~/modules/backend/state-backend';
import { createTRPCRouter, publicProcedure } from '~/server/api/trpc.server';
import { env } from '~/server/env.mjs';
import { fetchJsonOrTRPCError } from '~/server/api/trpc.serverutils';
@@ -26,11 +28,12 @@ export const backendRouter = createTRPCRouter({
hasImagingProdia: !!env.PRODIA_API_KEY,
hasLlmAnthropic: !!env.ANTHROPIC_API_KEY,
hasLlmAzureOpenAI: !!env.AZURE_OPENAI_API_KEY && !!env.AZURE_OPENAI_API_ENDPOINT,
hasLlmMistral: !!env.MISTRAL_API_KEY,
hasLlmOllama: !!env.OLLAMA_API_HOST,
hasLlmOpenAI: !!env.OPENAI_API_KEY || !!env.OPENAI_API_HOST,
hasLlmOpenRouter: !!env.OPENROUTER_API_KEY,
hasVoiceElevenLabs: !!env.ELEVENLABS_API_KEY,
};
} satisfies BackendCapabilities;
}),
+2
@@ -9,6 +9,7 @@ export interface BackendCapabilities {
hasImagingProdia: boolean;
hasLlmAnthropic: boolean;
hasLlmAzureOpenAI: boolean;
hasLlmMistral: boolean;
hasLlmOllama: boolean;
hasLlmOpenAI: boolean;
hasLlmOpenRouter: boolean;
@@ -30,6 +31,7 @@ const useBackendStore = create<BackendStore>()(
hasImagingProdia: false,
hasLlmAnthropic: false,
hasLlmAzureOpenAI: false,
hasLlmMistral: false,
hasLlmOllama: false,
hasLlmOpenAI: false,
hasLlmOpenRouter: false,
+14 -6
@@ -2,7 +2,7 @@ import { create } from 'zustand';
import { shallow } from 'zustand/shallow';
import { persist } from 'zustand/middleware';
import type { ModelVendorId } from './vendors/IModelVendor';
import type { IModelVendor, ModelVendorId } from './vendors/IModelVendor';
import type { SourceSetupOpenRouter } from './vendors/openrouter/openrouter.vendor';
@@ -272,16 +272,24 @@ export function useChatLLM() {
/**
* Source-specific read/write - great time saver
*/
export function useSourceSetup<TSourceSetup, TAccess>(sourceId: DModelSourceId, getAccess: (partialSetup?: Partial<TSourceSetup>) => TAccess) {
// invalidate when the setup changes
export function useSourceSetup<TSourceSetup, TAccess>(sourceId: DModelSourceId, vendor: IModelVendor<TSourceSetup, TAccess>) {
// invalidates only when the setup changes
const { updateSourceSetup, ...rest } = useModelsStore(state => {
const source: DModelSource<TSourceSetup> | null = state.sources.find(source => source.id === sourceId) ?? null;
// find the source (or null)
const source: DModelSource<TSourceSetup> | null = state.sources.find(source => source.id === sourceId) as DModelSource<TSourceSetup> ?? null;
// (safe) source-derived properties
const sourceSetupValid = (source?.setup && vendor?.validateSetup) ? vendor.validateSetup(source.setup as TSourceSetup) : false;
const sourceLLMs = source ? state.llms.filter(llm => llm._source === source) : [];
const access = vendor.getTransportAccess(source?.setup);
return {
source,
sourceLLMs,
access,
sourceHasLLMs: !!sourceLLMs.length,
access: getAccess(source?.setup),
sourceSetupValid,
updateSourceSetup: state.updateSourceSetup,
};
}, shallow);
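The hook now receives the whole vendor object instead of a bare `getAccess` function, letting it derive setup validity and transport access in one place. A minimal sketch of that derivation outside React/zustand (interfaces simplified; an assumption, not the app's exact types):

```typescript
// Simplified vendor contract, mirroring the IModelVendor change in this release.
interface Vendor<TSetup, TAccess> {
  validateSetup?: (setup: TSetup) => boolean;
  getTransportAccess: (setup?: Partial<TSetup>) => TAccess;
}

// Derive the source-level state the hook returns.
function deriveSourceState<TSetup, TAccess>(
  setup: TSetup | undefined,
  vendor: Vendor<TSetup, TAccess>,
) {
  // (safe) source-derived properties: valid only if a setup exists and passes validation
  const sourceSetupValid = (setup && vendor.validateSetup) ? vendor.validateSetup(setup) : false;
  const access = vendor.getTransportAccess(setup);
  return { sourceSetupValid, access };
}
```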
+1 -1
@@ -1,6 +1,6 @@
import type { DLLMId } from '../store-llms';
import type { OpenAIWire } from './server/openai/openai.wiretypes';
import { findVendorForLlmOrThrow } from '../vendors/vendor.registry';
import { findVendorForLlmOrThrow } from '../vendors/vendors.registry';
export interface VChatMessageIn {
@@ -0,0 +1,33 @@
import { z } from 'zod';
// [Mistral] Models List API - Response
export const wireMistralModelsListOutputSchema = z.object({
id: z.string(),
object: z.literal('model'),
created: z.number(),
owned_by: z.string(),
root: z.null().optional(),
parent: z.null().optional(),
// permission: z.array(wireMistralModelsListPermissionsSchema)
});
// export type WireMistralModelsListOutput = z.infer<typeof wireMistralModelsListOutputSchema>;
/*
const wireMistralModelsListPermissionsSchema = z.object({
id: z.string(),
object: z.literal('model_permission'),
created: z.number(),
allow_create_engine: z.boolean(),
allow_sampling: z.boolean(),
allow_logprobs: z.boolean(),
allow_search_indices: z.boolean(),
allow_view: z.boolean(),
allow_fine_tuning: z.boolean(),
organization: z.string(),
group: z.null().optional(),
is_blocking: z.boolean()
});
*/
@@ -1,7 +1,10 @@
import type { ModelDescriptionSchema } from '../server.schemas';
import { LLM_IF_OAI_Chat, LLM_IF_OAI_Complete, LLM_IF_OAI_Fn, LLM_IF_OAI_Vision } from '../../../store-llms';
import { SERVER_DEBUG_WIRE } from '~/server/wire';
import { LLM_IF_OAI_Chat, LLM_IF_OAI_Complete, LLM_IF_OAI_Fn, LLM_IF_OAI_Vision } from '../../../store-llms';
import type { ModelDescriptionSchema } from '../server.schemas';
import { wireMistralModelsListOutputSchema } from './mistral.wiretypes';
// [Azure] / [OpenAI]
const _knownOpenAIChatModels: ManualMappings = [
@@ -204,6 +207,63 @@ export function localAIModelToModelDescription(modelId: string): ModelDescriptio
}
// [Mistral]
const _knownMistralChatModels: ManualMappings = [
{
idPrefix: 'mistral-medium',
label: 'Mistral Medium',
description: 'Mistral internal prototype model.',
contextWindow: 32768,
interfaces: [LLM_IF_OAI_Chat],
},
{
idPrefix: 'mistral-small',
label: 'Mistral Small',
description: 'Higher reasoning capabilities and more capabilities (English, French, German, Italian, Spanish, and Code)',
contextWindow: 32768,
interfaces: [LLM_IF_OAI_Chat],
},
{
idPrefix: 'mistral-tiny',
label: 'Mistral Tiny',
description: 'Used for large batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial',
contextWindow: 32768,
interfaces: [LLM_IF_OAI_Chat],
},
{
idPrefix: 'mistral-embed',
label: 'Mistral Embed',
description: 'Mistral Medium on Mistral',
// output: 1024 dimensions
maxCompletionTokens: 1024, // HACK - it's 1024 dimensions, but those are not 'completion tokens'
contextWindow: 32768, // actually unknown, assumed from the other models
interfaces: [],
hidden: true,
},
];
export function mistralModelToModelDescription(_model: unknown): ModelDescriptionSchema {
const model = wireMistralModelsListOutputSchema.parse(_model);
return fromManualMapping(_knownMistralChatModels, model.id, model.created, undefined, {
idPrefix: model.id,
label: model.id.replaceAll(/[_-]/g, ' '),
description: 'New Mistral Model',
contextWindow: 32768,
interfaces: [LLM_IF_OAI_Chat], // assume..
hidden: true,
});
}
export function mistralModelsSort(a: ModelDescriptionSchema, b: ModelDescriptionSchema): number {
if (a.hidden && !b.hidden)
return 1;
if (!a.hidden && b.hidden)
return -1;
return a.id.localeCompare(b.id);
}
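The `mistralModelsSort` comparator above pushes hidden models to the end and orders the rest alphabetically by id. A usage sketch, with `ModelDescriptionSchema` reduced to the two fields the comparator reads:

```typescript
interface Desc { id: string; hidden?: boolean; }

// Same ordering rule as mistralModelsSort: visible models first, then by id.
function sortHiddenLast(a: Desc, b: Desc): number {
  if (a.hidden && !b.hidden) return 1;
  if (!a.hidden && b.hidden) return -1;
  return a.id.localeCompare(b.id);
}

const sorted: Desc[] = [
  { id: 'mistral-embed', hidden: true },
  { id: 'mistral-tiny' },
  { id: 'mistral-medium' },
].sort(sortHiddenLast);
// sorted ids: mistral-medium, mistral-tiny, mistral-embed
```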
// [Oobabooga]
const _knownOobaboogaChatModels: ManualMappings = [];
@@ -346,7 +406,7 @@ export function openRouterModelToModelDescription(modelId: string, created: numb
const orModel = orModelMap[modelId] ?? null;
let label = orModel?.name || modelId.replace('/', ' · ');
if (orModel?.cp === 0 && orModel?.cc === 0)
label += ' · 🎁 Free';
label += ' · 🎁'; // Free? Discounted?
if (SERVER_DEBUG_WIRE && !orModel)
console.log(' - openRouterModelToModelDescription: non-mapped model id:', modelId);
@@ -9,12 +9,12 @@ import { Brand } from '~/common/app.config';
import type { OpenAIWire } from './openai.wiretypes';
import { listModelsOutputSchema, ModelDescriptionSchema } from '../server.schemas';
import { localAIModelToModelDescription, oobaboogaModelToModelDescription, openAIModelToModelDescription, openRouterModelFamilySortFn, openRouterModelToModelDescription } from './models.data';
import { localAIModelToModelDescription, mistralModelsSort, mistralModelToModelDescription, oobaboogaModelToModelDescription, openAIModelToModelDescription, openRouterModelFamilySortFn, openRouterModelToModelDescription } from './models.data';
// Input Schemas
const openAIDialects = z.enum(['azure', 'localai', 'oobabooga', 'openai', 'openrouter']);
const openAIDialects = z.enum(['azure', 'localai', 'mistral', 'oobabooga', 'openai', 'openrouter']);
export const openAIAccessSchema = z.object({
dialect: openAIDialects,
@@ -186,12 +186,18 @@ export const llmOpenAIRouter = createTRPCRouter({
.map((model): ModelDescriptionSchema => openAIModelToModelDescription(model.id, model.created));
break;
case 'mistral':
models = openAIModels
.map(mistralModelToModelDescription)
.sort(mistralModelsSort);
break;
case 'openrouter':
models = openAIModels
.sort(openRouterModelFamilySortFn)
.map(model => openRouterModelToModelDescription(model.id, model.created, (model as any)?.['context_length']));
break;
}
return { models };
@@ -267,9 +273,10 @@ async function openaiPOST<TOut extends object, TPostBody extends object>(access:
}
const DEFAULT_HELICONE_OPENAI_HOST = 'oai.hconeai.com';
const DEFAULT_MISTRAL_HOST = 'https://api.mistral.ai';
const DEFAULT_OPENAI_HOST = 'api.openai.com';
const DEFAULT_OPENROUTER_HOST = 'https://openrouter.ai/api';
const DEFAULT_HELICONE_OPENAI_HOST = 'oai.hconeai.com';
export function fixupHost(host: string, apiPath: string): string {
if (!host.startsWith('http'))
@@ -361,6 +368,20 @@ export function openAIAccess(access: OpenAIAccessSchema, modelRefId: string | nu
};
case 'mistral':
// https://docs.mistral.ai/platform/client
const mistralKey = access.oaiKey || env.MISTRAL_API_KEY || '';
const mistralHost = fixupHost(access.oaiHost || DEFAULT_MISTRAL_HOST, apiPath);
return {
headers: {
'Content-Type': 'application/json',
'Accept': 'application/json',
'Authorization': `Bearer ${mistralKey}`,
},
url: mistralHost + apiPath,
};
case 'openrouter':
const orKey = access.oaiKey || env.OPENROUTER_API_KEY || '';
const orHost = fixupHost(access.oaiHost || DEFAULT_OPENROUTER_HOST, apiPath);
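The new `mistral` branch of `openAIAccess` resolves the key (explicit setup, else the `MISTRAL_API_KEY` env var) and host, then builds Bearer-auth headers for the OpenAI-compatible endpoint. A self-contained sketch of that resolution (the function name is illustrative, and the `fixupHost` normalization is omitted here):

```typescript
// Illustrative stand-in for the 'mistral' case of openAIAccess above.
function resolveMistralAccess(oaiKey: string, oaiHost: string, apiPath: string, envKey = '') {
  const key = oaiKey || envKey; // an explicit key wins over the env fallback
  const host = oaiHost || 'https://api.mistral.ai'; // default host, per the diff
  return {
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      'Authorization': `Bearer ${key}`,
    },
    url: host + apiPath,
  };
}
```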
@@ -67,6 +67,7 @@ export async function openaiStreamingRelayHandler(req: NextRequest): Promise<Res
case 'azure':
case 'localai':
case 'mistral':
case 'oobabooga':
case 'openai':
case 'openrouter':
+2 -2
@@ -1,7 +1,7 @@
import { apiAsync } from '~/common/util/trpc.client';
import type { DLLM, DLLMId } from '../store-llms';
import { findVendorForLlmOrThrow } from '../vendors/vendor.registry';
import { findVendorForLlmOrThrow } from '../vendors/vendors.registry';
import type { ChatStreamFirstPacketSchema, ChatStreamInputSchema } from './server/openai/openai.streaming';
import type { OpenAIWire } from './server/openai/openai.wiretypes';
@@ -27,7 +27,7 @@ export async function streamChat(
onUpdate: (update: Partial<{ text: string, typing: boolean, originLLM: string }>, done: boolean) => void,
): Promise<void> {
const { llm, vendor } = findVendorForLlmOrThrow(llmId);
const access = vendor.getAccess(llm._source.setup) as ChatStreamInputSchema['access'];
const access = vendor.getTransportAccess(llm._source.setup) as ChatStreamInputSchema['access'];
return await vendorStreamChat(access, llm, messages, abortSignal, onUpdate);
}
+11 -6
@@ -1,18 +1,20 @@
import type React from 'react';
import type { DLLM, DModelSourceId } from '../store-llms';
import { VChatFunctionIn, VChatMessageIn, VChatMessageOrFunctionCallOut, VChatMessageOut } from '../transports/chatGenerate';
import type { VChatFunctionIn, VChatMessageIn, VChatMessageOrFunctionCallOut, VChatMessageOut } from '../transports/chatGenerate';
export type ModelVendorId = 'anthropic' | 'azure' | 'localai' | 'ollama' | 'oobabooga' | 'openai' | 'openrouter';
export type ModelVendorId = 'anthropic' | 'azure' | 'localai' | 'mistral' | 'ollama' | 'oobabooga' | 'openai' | 'openrouter';
export type ModelVendorRegistryType = Record<ModelVendorId, IModelVendor>;
export interface IModelVendor<TSourceSetup = unknown, TLLMOptions = unknown, TAccess = unknown, TDLLM = DLLM<TSourceSetup, TLLMOptions>> {
export interface IModelVendor<TSourceSetup = unknown, TAccess = unknown, TLLMOptions = unknown, TDLLM = DLLM<TSourceSetup, TLLMOptions>> {
readonly id: ModelVendorId;
readonly name: string;
readonly rank: number;
readonly location: 'local' | 'cloud';
readonly instanceLimit: number;
readonly hasFreeModels?: boolean;
readonly hasBackendCap?: () => boolean;
// components
@@ -20,10 +22,13 @@ export interface IModelVendor<TSourceSetup = unknown, TLLMOptions = unknown, TAc
readonly SourceSetupComponent: React.ComponentType<{ sourceId: DModelSourceId }>;
readonly LLMOptionsComponent: React.ComponentType<{ llm: TDLLM }>;
// functions
readonly initializeSetup?: () => TSourceSetup;
/// abstraction interface ///
getAccess(setup?: Partial<TSourceSetup>): TAccess;
initializeSetup?(): TSourceSetup;
validateSetup?(setup: TSourceSetup): boolean;
getTransportAccess(setup?: Partial<TSourceSetup>): TAccess;
callChatGenerate(llm: TDLLM, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut>;
@@ -23,7 +23,7 @@ export function AnthropicSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorAnthropic.getAccess);
useSourceSetup(props.sourceId, ModelVendorAnthropic);
// derived state
const { anthropicKey, anthropicHost, heliconeKey } = access;
+3 -3
@@ -22,7 +22,7 @@ export interface SourceSetupAnthropic {
heliconeKey: string;
}
export const ModelVendorAnthropic: IModelVendor<SourceSetupAnthropic, LLMOptionsOpenAI, AnthropicAccessSchema> = {
export const ModelVendorAnthropic: IModelVendor<SourceSetupAnthropic, AnthropicAccessSchema, LLMOptionsOpenAI> = {
id: 'anthropic',
name: 'Anthropic',
rank: 13,
@@ -36,14 +36,14 @@ export const ModelVendorAnthropic: IModelVendor<SourceSetupAnthropic, LLMOptions
LLMOptionsComponent: OpenAILLMOptions,
// functions
getAccess: (partialSetup): AnthropicAccessSchema => ({
getTransportAccess: (partialSetup): AnthropicAccessSchema => ({
dialect: 'anthropic',
anthropicKey: partialSetup?.anthropicKey || '',
anthropicHost: partialSetup?.anthropicHost || null,
heliconeKey: partialSetup?.heliconeKey || null,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return anthropicCallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, /*null, null,*/ maxTokens);
return anthropicCallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, /*null, null,*/ maxTokens);
},
callChatGenerateWF(): Promise<VChatMessageOrFunctionCallOut> {
throw new Error('Anthropic does not support "Functions" yet');
+1 -1
@@ -18,7 +18,7 @@ export function AzureSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorAzure.getAccess);
useSourceSetup(props.sourceId, ModelVendorAzure);
// derived state
const { oaiKey: azureKey, oaiHost: azureEndpoint } = access;
+4 -4
@@ -36,7 +36,7 @@ export interface SourceSetupAzure {
*
* Work in progress...
*/
export const ModelVendorAzure: IModelVendor<SourceSetupAzure, LLMOptionsOpenAI, OpenAIAccessSchema> = {
export const ModelVendorAzure: IModelVendor<SourceSetupAzure, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'azure',
name: 'Azure',
rank: 14,
@@ -50,7 +50,7 @@ export const ModelVendorAzure: IModelVendor<SourceSetupAzure, LLMOptionsOpenAI,
LLMOptionsComponent: OpenAILLMOptions,
// functions
getAccess: (partialSetup): OpenAIAccessSchema => ({
getTransportAccess: (partialSetup): OpenAIAccessSchema => ({
dialect: 'azure',
oaiKey: partialSetup?.azureKey || '',
oaiOrg: '',
@@ -59,9 +59,9 @@ export const ModelVendorAzure: IModelVendor<SourceSetupAzure, LLMOptionsOpenAI,
moderationCheck: false,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF(llm, messages: VChatMessageIn[], functions: VChatFunctionIn[] | null, forceFunctionName: string | null, maxTokens?: number): Promise<VChatMessageOrFunctionCallOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
},
};
+1 -1
@@ -19,7 +19,7 @@ export function LocalAISourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorLocalAI.getAccess);
useSourceSetup(props.sourceId, ModelVendorLocalAI);
// derived state
const { oaiHost } = access;
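Across these setup components, `useSourceSetup` now receives the whole vendor object rather than a bare `vendor.getAccess` function. A hedged sketch of the derivation the hook can then perform internally (names here are illustrative, not the actual hook implementation):

```typescript
// Sketch: with the vendor object in hand, the hook derives transport access
// itself, so call sites cannot pass a mis-bound or stale accessor function.
interface VendorWithAccess<TSetup, TAccess> {
  getTransportAccess: (partialSetup?: Partial<TSetup>) => TAccess;
}

function deriveAccess<TSetup, TAccess>(
  vendor: VendorWithAccess<TSetup, TAccess>,
  setup?: Partial<TSetup>,
): TAccess {
  return vendor.getTransportAccess(setup);
}

// Usage with a tiny hypothetical vendor:
const localVendor: VendorWithAccess<{ oaiHost: string }, { oaiHost: string }> = {
  getTransportAccess: (s) => ({ oaiHost: s?.oaiHost || 'http://localhost:8080' }),
};
```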
+4 -4
@@ -14,7 +14,7 @@ export interface SourceSetupLocalAI {
oaiHost: string; // use OpenAI-compatible non-default hosts (full origin path)
}
export const ModelVendorLocalAI: IModelVendor<SourceSetupLocalAI, LLMOptionsOpenAI, OpenAIAccessSchema> = {
export const ModelVendorLocalAI: IModelVendor<SourceSetupLocalAI, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'localai',
name: 'LocalAI',
rank: 20,
@@ -30,7 +30,7 @@ export const ModelVendorLocalAI: IModelVendor<SourceSetupLocalAI, LLMOptionsOpen
initializeSetup: () => ({
oaiHost: 'http://localhost:8080',
}),
getAccess: (partialSetup) => ({
getTransportAccess: (partialSetup) => ({
dialect: 'localai',
oaiKey: '',
oaiOrg: '',
@@ -39,9 +39,9 @@ export const ModelVendorLocalAI: IModelVendor<SourceSetupLocalAI, LLMOptionsOpen
moderationCheck: false,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF(llm, messages: VChatMessageIn[], functions: VChatFunctionIn[] | null, forceFunctionName: string | null, maxTokens?: number): Promise<VChatMessageOrFunctionCallOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
},
};
+61
@@ -0,0 +1,61 @@
import * as React from 'react';
import { FormInputKey } from '~/common/components/forms/FormInputKey';
import { InlineError } from '~/common/components/InlineError';
import { Link } from '~/common/components/Link';
import { SetupFormRefetchButton } from '~/common/components/forms/SetupFormRefetchButton';
import { apiQuery } from '~/common/util/trpc.client';
import { DModelSourceId, useModelsStore, useSourceSetup } from '../../store-llms';
import { modelDescriptionToDLLM } from '../openai/OpenAISourceSetup';
import { ModelVendorMistral } from './mistral.vendor';
const MISTRAL_REG_LINK = 'https://console.mistral.ai/';
export function MistralSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceSetupValid, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorMistral);
// derived state
const { oaiKey: mistralKey } = access;
const needsUserKey = !ModelVendorMistral.hasBackendCap?.();
const shallFetchSucceed = !needsUserKey || (!!mistralKey && sourceSetupValid);
const showKeyError = !!mistralKey && !sourceSetupValid;
// fetch models
const { isFetching, refetch, isError, error } = apiQuery.llmOpenAI.listModels.useQuery({ access }, {
enabled: false,
onSuccess: models => source && useModelsStore.getState().setLLMs(
models.models.map(model => modelDescriptionToDLLM(model, source)),
props.sourceId,
),
staleTime: Infinity,
});
return <>
<FormInputKey
id='mistral-key' label='Mistral Key'
rightLabel={<>{needsUserKey
? !mistralKey && <Link level='body-sm' href={MISTRAL_REG_LINK} target='_blank'>request Key</Link>
: '✔️ already set in server'}
</>}
value={mistralKey} onChange={value => updateSetup({ oaiKey: value })}
required={needsUserKey} isError={showKeyError}
placeholder='...'
/>
<SetupFormRefetchButton
refetch={refetch} disabled={/*!shallFetchSucceed ||*/ isFetching} error={isError}
/>
{isError && <InlineError error={error} />}
</>;
}
+57
@@ -0,0 +1,57 @@
import { backendCaps } from '~/modules/backend/state-backend';
import { MistralIcon } from '~/common/components/icons/MistralIcon';
import type { IModelVendor } from '../IModelVendor';
import type { OpenAIAccessSchema } from '../../transports/server/openai/openai.router';
import type { VChatMessageIn, VChatMessageOut } from '../../transports/chatGenerate';
import { LLMOptionsOpenAI, openAICallChatGenerate, SourceSetupOpenAI } from '../openai/openai.vendor';
import { OpenAILLMOptions } from '../openai/OpenAILLMOptions';
import { MistralSourceSetup } from './MistralSourceSetup';
// special symbols
export type SourceSetupMistral = Pick<SourceSetupOpenAI, 'oaiKey' | 'oaiHost'>;
/** Implementation Notes for the Mistral vendor
*/
export const ModelVendorMistral: IModelVendor<SourceSetupMistral, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'mistral',
name: 'Mistral',
rank: 15,
location: 'cloud',
instanceLimit: 1,
hasBackendCap: () => backendCaps().hasLlmMistral,
// components
Icon: MistralIcon,
SourceSetupComponent: MistralSourceSetup,
LLMOptionsComponent: OpenAILLMOptions,
// functions
initializeSetup: () => ({
oaiHost: 'https://api.mistral.ai/',
oaiKey: '',
}),
validateSetup: (setup) => {
return setup.oaiKey?.length >= 32;
},
getTransportAccess: (partialSetup): OpenAIAccessSchema => ({
dialect: 'mistral',
oaiKey: partialSetup?.oaiKey || '',
oaiOrg: '',
oaiHost: partialSetup?.oaiHost || '',
heliKey: '',
moderationCheck: false,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF() {
throw new Error('Mistral does not support "Functions" yet');
},
};
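The `validateSetup` check above treats a Mistral key as plausible only when it is at least 32 characters long. As a standalone sketch (the 32-character threshold is copied from the diff; the predicate name is hypothetical):

```typescript
// Key considered plausible when at least 32 characters long,
// matching the validateSetup check in the vendor definition above.
const isValidMistralKey = (oaiKey?: string): boolean =>
  (oaiKey?.length ?? 0) >= 32;
```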
+1 -1
@@ -22,7 +22,7 @@ export function OllamaSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorOllama.getAccess);
useSourceSetup(props.sourceId, ModelVendorOllama);
// derived state
const { ollamaHost } = access;
+3 -3
@@ -18,7 +18,7 @@ export interface SourceSetupOllama {
}
export const ModelVendorOllama: IModelVendor<SourceSetupOllama, LLMOptionsOpenAI, OllamaAccessSchema> = {
export const ModelVendorOllama: IModelVendor<SourceSetupOllama, OllamaAccessSchema, LLMOptionsOpenAI> = {
id: 'ollama',
name: 'Ollama',
rank: 22,
@@ -32,12 +32,12 @@ export const ModelVendorOllama: IModelVendor<SourceSetupOllama, LLMOptionsOpenAI
LLMOptionsComponent: OpenAILLMOptions,
// functions
getAccess: (partialSetup): OllamaAccessSchema => ({
getTransportAccess: (partialSetup): OllamaAccessSchema => ({
dialect: 'ollama',
ollamaHost: partialSetup?.ollamaHost || '',
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return ollamaCallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, maxTokens);
return ollamaCallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, maxTokens);
},
callChatGenerateWF(): Promise<VChatMessageOrFunctionCallOut> {
throw new Error('Ollama does not support "Functions" yet');
@@ -18,7 +18,7 @@ export function OobaboogaSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorOoobabooga.getAccess);
useSourceSetup(props.sourceId, ModelVendorOoobabooga);
// derived state
const { oaiHost } = access;
+4 -4
@@ -14,7 +14,7 @@ export interface SourceSetupOobabooga {
oaiHost: string; // use OpenAI-compatible non-default hosts (full origin path)
}
export const ModelVendorOoobabooga: IModelVendor<SourceSetupOobabooga, LLMOptionsOpenAI, OpenAIAccessSchema> = {
export const ModelVendorOoobabooga: IModelVendor<SourceSetupOobabooga, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'oobabooga',
name: 'Oobabooga',
rank: 25,
@@ -30,7 +30,7 @@ export const ModelVendorOoobabooga: IModelVendor<SourceSetupOobabooga, LLMOption
initializeSetup: (): SourceSetupOobabooga => ({
oaiHost: 'http://127.0.0.1:5000',
}),
getAccess: (partialSetup): OpenAIAccessSchema => ({
getTransportAccess: (partialSetup): OpenAIAccessSchema => ({
dialect: 'oobabooga',
oaiKey: '',
oaiOrg: '',
@@ -39,9 +39,9 @@ export const ModelVendorOoobabooga: IModelVendor<SourceSetupOobabooga, LLMOption
moderationCheck: false,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF(llm, messages: VChatMessageIn[], functions: VChatFunctionIn[] | null, forceFunctionName: string | null, maxTokens?: number): Promise<VChatMessageOrFunctionCallOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
},
};
+1 -1
@@ -29,7 +29,7 @@ export function OpenAISourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorOpenAI.getAccess);
useSourceSetup(props.sourceId, ModelVendorOpenAI);
// derived state
const { oaiKey, oaiOrg, oaiHost, heliKey, moderationCheck } = access;
+4 -4
@@ -28,7 +28,7 @@ export interface LLMOptionsOpenAI {
llmResponseTokens: number;
}
export const ModelVendorOpenAI: IModelVendor<SourceSetupOpenAI, LLMOptionsOpenAI, OpenAIAccessSchema> = {
export const ModelVendorOpenAI: IModelVendor<SourceSetupOpenAI, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'openai',
name: 'OpenAI',
rank: 10,
@@ -42,7 +42,7 @@ export const ModelVendorOpenAI: IModelVendor<SourceSetupOpenAI, LLMOptionsOpenAI
LLMOptionsComponent: OpenAILLMOptions,
// functions
getAccess: (partialSetup): OpenAIAccessSchema => ({
getTransportAccess: (partialSetup): OpenAIAccessSchema => ({
dialect: 'openai',
oaiKey: '',
oaiOrg: '',
@@ -52,11 +52,11 @@ export const ModelVendorOpenAI: IModelVendor<SourceSetupOpenAI, LLMOptionsOpenAI
...partialSetup,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
const access = this.getAccess(llm._source.setup);
const access = this.getTransportAccess(llm._source.setup);
return openAICallChatGenerate(access, llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF(llm, messages: VChatMessageIn[], functions: VChatFunctionIn[] | null, forceFunctionName: string | null, maxTokens?: number): Promise<VChatMessageOrFunctionCallOut> {
const access = this.getAccess(llm._source.setup);
const access = this.getTransportAccess(llm._source.setup);
return openAICallChatGenerate(access, llm.options, messages, functions, forceFunctionName, maxTokens);
},
};
@@ -19,7 +19,7 @@ export function OpenRouterSourceSetup(props: { sourceId: DModelSourceId }) {
// external state
const { source, sourceHasLLMs, access, updateSetup } =
useSourceSetup(props.sourceId, ModelVendorOpenRouter.getAccess);
useSourceSetup(props.sourceId, ModelVendorOpenRouter);
// derived state
const { oaiKey } = access;
@@ -51,15 +51,13 @@ export function OpenRouterSourceSetup(props: { sourceId: DModelSourceId }) {
return <>
{/*<Box sx={{ display: 'flex', gap: 1, alignItems: 'center' }}>*/}
{/*<OpenRouterIcon />*/}
<Typography level='body-sm'>
<Link href='https://openrouter.ai/keys' target='_blank'>OpenRouter</Link> is an independent, premium service
<Link href='https://openrouter.ai/keys' target='_blank'>OpenRouter</Link> is an independent service
granting access to <Link href='https://openrouter.ai/docs#models' target='_blank'>exclusive models</Link> such
as GPT-4 32k, Claude, and more, typically unavailable to the public. <Link
href='https://github.com/enricoros/big-agi/blob/main/docs/config-openrouter.md'>Configuration &amp; documentation</Link>.
as GPT-4 32k, Claude, and more. <Link
href='https://github.com/enricoros/big-agi/blob/main/docs/config-openrouter.md' target='_blank'>
Configuration &amp; documentation</Link>.
</Typography>
{/*</Box>*/}
<FormInputKey
id='openrouter-key' label='OpenRouter API Key'
@@ -73,10 +71,19 @@ export function OpenRouterSourceSetup(props: { sourceId: DModelSourceId }) {
placeholder='sk-or-...'
/>
<Typography level='body-sm'>
🎁 A selection of <Link href='https://openrouter.ai/docs#models' target='_blank'>OpenRouter models</Link> are
made available without charge. You can get an API key by using the Login button below.
</Typography>
<SetupFormRefetchButton
refetch={refetch} disabled={!shallFetchSucceed || isFetching} error={isError}
leftButton={
<Button color='neutral' variant={(needsUserKey && !keyValid) ? 'solid' : 'outlined'} onClick={handleOpenRouterLogin}>
<Button
color='neutral' variant={(needsUserKey && !keyValid) ? 'solid' : 'outlined'}
onClick={handleOpenRouterLogin}
endDecorator={(needsUserKey && !keyValid) ? '🎁' : undefined}
>
OpenRouter Login
</Button>
}
+5 -4
@@ -32,12 +32,13 @@ export interface SourceSetupOpenRouter {
* [x] decide whether to do UI work to improve the appearance - prioritized models
* [x] works!
*/
export const ModelVendorOpenRouter: IModelVendor<SourceSetupOpenRouter, LLMOptionsOpenAI, OpenAIAccessSchema> = {
export const ModelVendorOpenRouter: IModelVendor<SourceSetupOpenRouter, OpenAIAccessSchema, LLMOptionsOpenAI> = {
id: 'openrouter',
name: 'OpenRouter',
rank: 12,
location: 'cloud',
instanceLimit: 1,
hasFreeModels: true,
hasBackendCap: () => backendCaps().hasLlmOpenRouter,
// components
@@ -50,7 +51,7 @@ export const ModelVendorOpenRouter: IModelVendor<SourceSetupOpenRouter, LLMOptio
oaiHost: 'https://openrouter.ai/api',
oaiKey: '',
}),
getAccess: (partialSetup): OpenAIAccessSchema => ({
getTransportAccess: (partialSetup): OpenAIAccessSchema => ({
dialect: 'openrouter',
oaiKey: partialSetup?.oaiKey || '',
oaiOrg: '',
@@ -59,9 +60,9 @@ export const ModelVendorOpenRouter: IModelVendor<SourceSetupOpenRouter, LLMOptio
moderationCheck: false,
}),
callChatGenerate(llm, messages: VChatMessageIn[], maxTokens?: number): Promise<VChatMessageOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, null, null, maxTokens);
},
callChatGenerateWF(llm, messages: VChatMessageIn[], functions: VChatFunctionIn[] | null, forceFunctionName: string | null, maxTokens?: number): Promise<VChatMessageOrFunctionCallOut> {
return openAICallChatGenerate(this.getAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
return openAICallChatGenerate(this.getTransportAccess(llm._source.setup), llm.options, messages, functions, forceFunctionName, maxTokens);
},
};
@@ -1,19 +1,21 @@
import { ModelVendorAnthropic } from './anthropic/anthropic.vendor';
import { ModelVendorAzure } from './azure/azure.vendor';
import { ModelVendorLocalAI } from './localai/localai.vendor';
import { ModelVendorMistral } from './mistral/mistral.vendor';
import { ModelVendorOllama } from './ollama/ollama.vendor';
import { ModelVendorOoobabooga } from './oobabooga/oobabooga.vendor';
import { ModelVendorOpenAI } from './openai/openai.vendor';
import { ModelVendorOpenRouter } from './openrouter/openrouter.vendor';
import type { IModelVendor, ModelVendorId, ModelVendorRegistryType } from './IModelVendor';
import { DLLMId, DModelSource, DModelSourceId, findLLMOrThrow } from '../store-llms';
import { IModelVendor, ModelVendorId } from './IModelVendor';
/** Vendor Instances Registry **/
const MODEL_VENDOR_REGISTRY: Record<ModelVendorId, IModelVendor> = {
/** Global: Vendor Instances Registry **/
const MODEL_VENDOR_REGISTRY: ModelVendorRegistryType = {
anthropic: ModelVendorAnthropic,
azure: ModelVendorAzure,
localai: ModelVendorLocalAI,
mistral: ModelVendorMistral,
ollama: ModelVendorOllama,
oobabooga: ModelVendorOoobabooga,
openai: ModelVendorOpenAI,
+3
@@ -21,6 +21,9 @@ export const env = createEnv({
ANTHROPIC_API_KEY: z.string().optional(),
ANTHROPIC_API_HOST: z.string().url().optional(),
// LLM: Mistral
MISTRAL_API_KEY: z.string().optional(),
// LLM: Ollama
OLLAMA_API_HOST: z.string().url().optional(),
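With `MISTRAL_API_KEY` added to the server env schema, the backend can advertise a `hasLlmMistral` capability (as `hasBackendCap` in the vendor suggests) whenever the key is configured. A minimal sketch of that derivation, under the assumption that the flag is simply key-presence (the actual wiring lives in the backend state module, not shown in this diff):

```typescript
// Sketch: a backend capability flag derived from the environment —
// true only when MISTRAL_API_KEY is set to a non-empty value.
const hasLlmMistral = (env: { MISTRAL_API_KEY?: string }): boolean =>
  !!env.MISTRAL_API_KEY?.trim();
```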