Local LLM integration with LocalAI
Integrate local Large Language Models (LLMs) with LocalAI.
Last updated Nov 7, 2023
Instructions
LocalAI installation and configuration
Follow the guide at: https://localai.io/basics/getting_started/
For example, to use luna-ai-llama2 with Docker Compose:
- clone LocalAI
- get the model
- copy the prompt template
- start docker
- -> the server will be listening on localhost:8080; verify it works by opening http://localhost:8080/v1/models in your browser and checking that the model you downloaded is listed
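The steps above, following the LocalAI getting-started guide, might look like the following command sketch. The model name is taken from the luna-ai-llama2 example; the exact download URL and template file depend on your chosen model, so check the LocalAI guide for the specifics:

```shell
# Clone LocalAI and enter the repo
git clone https://github.com/go-skynet/LocalAI
cd LocalAI

# Download the model file into models/ under the name LocalAI should expose
# (see the LocalAI getting-started guide for the exact model URL)
# wget <model-url> -O models/luna-ai-llama2

# Copy the matching prompt template next to the model file
# cp -rf prompt-templates/<template>.tmpl models/luna-ai-llama2.tmpl

# Start the server with Docker Compose; it listens on localhost:8080
docker compose up -d --pull always

# Verify: the downloaded model should appear in the list
curl http://localhost:8080/v1/models
```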
Integrating LocalAI with big-AGI
- Go to Models > Add a model source of type: LocalAI
- Enter the address:
http://localhost:8080 (default). If running remotely, replace localhost with the IP of the machine, and make sure to use the IP:Port format
- Load the models
- Select model & Chat
NOTE: LocalAI does not list details about the models. Every model is assumed to be capable of chatting, with a context window of 4096 tokens. Please update the src/modules/llms/transports/server/openai/models.data.ts file with the mapping between LocalAI model IDs and names/descriptions/token counts, etc.
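As an illustration of the kind of mapping that file holds, here is a minimal sketch. The interface and field names below are hypothetical, not the actual big-AGI schema; the 4096-token fallback mirrors the default described above:

```typescript
// Hypothetical shape for a LocalAI model-ID → metadata mapping.
// Field names are illustrative; consult models.data.ts for the real schema.
interface LocalAIModelInfo {
  label: string;         // human-readable name shown in the UI
  description: string;   // short description of the model
  contextWindow: number; // context window in tokens
}

// Example entry for the luna-ai-llama2 model used in this guide.
const LOCALAI_MODELS: Record<string, LocalAIModelInfo> = {
  'luna-ai-llama2': {
    label: 'Luna AI Llama2',
    description: 'Llama 2 fine-tune served via LocalAI',
    contextWindow: 4096,
  },
};

// Unknown model IDs fall back to the assumed 4096-token default.
function modelInfo(id: string): LocalAIModelInfo {
  return LOCALAI_MODELS[id] ?? { label: id, description: '', contextWindow: 4096 };
}

console.log(modelInfo('luna-ai-llama2').label);          // "Luna AI Llama2"
console.log(modelInfo('some-other-model').contextWindow); // 4096
```

The lookup falls back to a default entry so that models LocalAI reports but the mapping does not know about remain usable.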