feat(agents): support local LLMs

ItzCrazyKns · 2024-04-20 11:18:52 +05:30
parent 28a7175afc
commit d37a1a8020
15 changed files with 135 additions and 100 deletions

@@ -9,16 +9,14 @@ Perplexica's design consists of two main domains:

 - **Frontend (`ui` directory)**: This is a Next.js application holding all user interface components. It's a self-contained environment that manages everything the user interacts with.
 - **Backend (root and `src` directory)**: The backend logic is situated in the `src` folder, but the root directory holds the main `package.json` for backend dependency management.

-Both the root directory (for backend configurations outside `src`) and the `ui` folder come with an `.env.example` file. These are templates for environment variables that you need to set up manually for the application to run correctly.

 ## Setting Up Your Environment

 Before diving into coding, setting up your local environment is key. Here's what you need to do:

 ### Backend

-1. In the root directory, locate the `.env.example` file.
-2. Rename it to `.env` and fill in the necessary environment variables specific to the backend.
+1. In the root directory, locate the `sample.config.toml` file.
+2. Rename it to `config.toml` and fill in the necessary configuration fields specific to the backend.
 3. Run `npm install` to install dependencies.
 4. Use `npm run dev` to start the backend in development mode.
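For reference, here is a minimal sketch of a completed `config.toml`, assembled from the `sample.config.toml` template in this commit (the key and URLs are placeholders):

```toml
[GENERAL]
PORT = 3001                        # Port to run the server on
SIMILARITY_MEASURE = "cosine"      # "cosine" or "dot"
CHAT_MODEL_PROVIDER = "ollama"     # "openai" or "ollama"
CHAT_MODEL = "llama2"              # Name of the model to use

[API_KEYS]
OPENAI = ""                        # Only required when CHAT_MODEL_PROVIDER = "openai"

[API_ENDPOINTS]
SEARXNG = "http://localhost:32768" # SearxNG API URL
OLLAMA = "http://localhost:11434"  # Only required when CHAT_MODEL_PROVIDER = "ollama"
```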

@@ -25,6 +25,8 @@ Perplexica is an open-source AI-powered searching tool or an AI-powered search e

 ## Features

+- **Local LLMs**: You can make use of local LLMs such as Llama2 and Mixtral using Ollama.
 - **Two Main Modes:**
   - **Copilot Mode:** (In development) Boosts search by generating different queries to find more relevant internet sources. Unlike normal search, instead of relying only on the context provided by SearxNG, it visits the top matches and tries to find sources relevant to the user's query directly from the page.
   - **Normal Mode:** Processes your query and performs a web search.
@@ -58,7 +60,14 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.

 4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:
-   - `OPENAI`: Your OpenAI API key.
+   - `CHAT_MODEL`: The name of the LLM to use. Example: `llama2` for Ollama users & `gpt-3.5-turbo` for OpenAI users.
+   - `CHAT_MODEL_PROVIDER`: The chat model provider, either `openai` or `ollama`. Depending on which provider you use, you will have to fill in the following fields:
+     - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models.**
+     - `OLLAMA`: Your Ollama API URL. **You need to fill this if you wish to use Ollama's models instead of OpenAI's.**
+
+   **Note**: (In development) You will also be able to change these and use different models from the settings page after running Perplexica.
   - `SIMILARITY_MEASURE`: The similarity measure to use (This is filled by default; you can leave it as is if you are unsure about it.)
 5. Ensure you are in the directory containing the `docker-compose.yaml` file and execute:
@@ -84,7 +93,7 @@ For setups without Docker:

 ## Upcoming Features

 - [ ] Finalizing Copilot Mode
-- [ ] Adding support for multiple local LLMs and LLM providers such as Anthropic, Google, etc.
+- [x] Adding support for local LLMs
 - [ ] Adding Discover and History Saving features
 - [x] Introducing various Focus Modes

@@ -1,9 +1,12 @@
 [GENERAL]
 PORT = 3001 # Port to run the server on
 SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"
+CHAT_MODEL_PROVIDER = "openai" # "openai" or "ollama"
+CHAT_MODEL = "gpt-3.5-turbo" # Name of the model to use

 [API_KEYS]
 OPENAI = "sk-1234567890abcdef1234567890abcdef" # OpenAI API key

 [API_ENDPOINTS]
 SEARXNG = "http://localhost:32768" # SearxNG API URL
+OLLAMA = "http://localhost:11434" # Ollama API URL

@@ -11,7 +11,7 @@ import {
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { StreamEvent } from '@langchain/core/tracers/log_stream';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import type { Embeddings } from '@langchain/core/embeddings';

@@ -7,7 +7,7 @@ import { PromptTemplate } from '@langchain/core/prompts';
 import formatChatHistoryAsString from '../utils/formatHistory';
 import { BaseMessage } from '@langchain/core/messages';
 import { StringOutputParser } from '@langchain/core/output_parsers';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';

 const imageSearchChainPrompt = `

@@ -11,7 +11,7 @@ import {
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { StreamEvent } from '@langchain/core/tracers/log_stream';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import type { Embeddings } from '@langchain/core/embeddings';

@@ -11,7 +11,7 @@ import {
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { StreamEvent } from '@langchain/core/tracers/log_stream';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import type { Embeddings } from '@langchain/core/embeddings';

@@ -11,7 +11,7 @@ import {
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { StreamEvent } from '@langchain/core/tracers/log_stream';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import type { Embeddings } from '@langchain/core/embeddings';

@@ -11,7 +11,7 @@ import {
 } from '@langchain/core/runnables';
 import { StringOutputParser } from '@langchain/core/output_parsers';
 import { Document } from '@langchain/core/documents';
-import { searchSearxng } from '../core/searxng';
+import { searchSearxng } from '../lib/searxng';
 import type { StreamEvent } from '@langchain/core/tracers/log_stream';
 import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
 import type { Embeddings } from '@langchain/core/embeddings';

@@ -1,6 +1,6 @@
 import fs from 'fs';
 import path from 'path';
-import toml, { JsonMap } from '@iarna/toml';
+import toml from '@iarna/toml';

 const configFileName = 'config.toml';

@@ -8,18 +8,21 @@ interface Config {
   GENERAL: {
     PORT: number;
     SIMILARITY_MEASURE: string;
+    CHAT_MODEL_PROVIDER: string;
+    CHAT_MODEL: string;
   };
   API_KEYS: {
     OPENAI: string;
   };
   API_ENDPOINTS: {
     SEARXNG: string;
+    OLLAMA: string;
   };
 }

 const loadConfig = () =>
   toml.parse(
-    fs.readFileSync(path.join(process.cwd(), `${configFileName}`), 'utf-8'),
+    fs.readFileSync(path.join(__dirname, `../${configFileName}`), 'utf-8'),
   ) as any as Config;

 export const getPort = () => loadConfig().GENERAL.PORT;

@@ -27,6 +30,13 @@ export const getPort = () => loadConfig().GENERAL.PORT;
 export const getSimilarityMeasure = () =>
   loadConfig().GENERAL.SIMILARITY_MEASURE;

+export const getChatModelProvider = () =>
+  loadConfig().GENERAL.CHAT_MODEL_PROVIDER;
+
+export const getChatModel = () => loadConfig().GENERAL.CHAT_MODEL;
+
 export const getOpenaiApiKey = () => loadConfig().API_KEYS.OPENAI;

 export const getSearxngApiEndpoint = () => loadConfig().API_ENDPOINTS.SEARXNG;
+
+export const getOllamaApiEndpoint = () => loadConfig().API_ENDPOINTS.OLLAMA;
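Taken together, the new getters are meant to resolve the active model at startup; a rough sketch of the intended flow, mirroring the WebSocket handler later in this commit:

```ts
import { getChatModel, getChatModelProvider } from './config';
import { getAvailableProviders } from './lib/providers';

// Resolve the configured provider/model pair from whatever is available.
const models = await getAvailableProviders();
const llm = models[getChatModelProvider()]?.[getChatModel()]; // undefined if misconfigured
```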

@@ -1,69 +0,0 @@
-import { z } from 'zod';
-import { OpenAI } from '@langchain/openai';
-import { RunnableSequence } from '@langchain/core/runnables';
-import { StructuredOutputParser } from 'langchain/output_parsers';
-import { PromptTemplate } from '@langchain/core/prompts';
-
-const availableAgents = [
-  {
-    name: 'webSearch',
-    description:
-      'It is expert is searching the web for information and answer user queries',
-  },
-  /* {
-    name: 'academicSearch',
-    description:
-      'It is expert is searching the academic databases for information and answer user queries. It is particularly good at finding research papers and articles on topics like science, engineering, and technology. Use this instead of wolframAlphaSearch if the user query is not mathematical or scientific in nature',
-  },
-  {
-    name: 'youtubeSearch',
-    description:
-      'This model is expert at finding videos on youtube based on user queries',
-  },
-  {
-    name: 'wolframAlphaSearch',
-    description:
-      'This model is expert at finding answers to mathematical and scientific questions based on user queries.',
-  },
-  {
-    name: 'redditSearch',
-    description:
-      'This model is expert at finding posts and discussions on reddit based on user queries',
-  },
-  {
-    name: 'writingAssistant',
-    description:
-      'If there is no need for searching, this model is expert at generating text based on user queries',
-  }, */
-];
-
-const parser = StructuredOutputParser.fromZodSchema(
-  z.object({
-    agent: z.string().describe('The name of the selected agent'),
-  }),
-);
-
-const prompt = `
-You are an AI model who is expert at finding suitable agents for user queries. The available agents are:
-
-${availableAgents.map((agent) => `- ${agent.name}: ${agent.description}`).join('\n')}
-
-Your task is to find the most suitable agent for the following query: {query}
-
-{format_instructions}
-`;
-
-const chain = RunnableSequence.from([
-  PromptTemplate.fromTemplate(prompt),
-  new OpenAI({ temperature: 0 }),
-  parser,
-]);
-
-const pickSuitableAgent = async (query: string) => {
-  const res = await chain.invoke({
-    query,
-    format_instructions: parser.getFormatInstructions(),
-  });
-
-  return res.agent;
-};
-
-export default pickSuitableAgent;

src/lib/providers.ts (new file, 58 additions)
@@ -0,0 +1,58 @@
+import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
+import { ChatOllama } from '@langchain/community/chat_models/ollama';
+import { OllamaEmbeddings } from '@langchain/community/embeddings/ollama';
+import { getOllamaApiEndpoint, getOpenaiApiKey } from '../config';
+
+export const getAvailableProviders = async () => {
+  const openAIApiKey = getOpenaiApiKey();
+  const ollamaEndpoint = getOllamaApiEndpoint();
+
+  const models = {};
+
+  if (openAIApiKey) {
+    models['openai'] = {
+      'gpt-3.5-turbo': new ChatOpenAI({
+        openAIApiKey,
+        modelName: 'gpt-3.5-turbo',
+        temperature: 0.7,
+      }),
+      'gpt-4': new ChatOpenAI({
+        openAIApiKey,
+        modelName: 'gpt-4',
+        temperature: 0.7,
+      }),
+      embeddings: new OpenAIEmbeddings({
+        openAIApiKey,
+        modelName: 'text-embedding-3-large',
+      }),
+    };
+  }
+
+  if (ollamaEndpoint) {
+    try {
+      const response = await fetch(`${ollamaEndpoint}/api/tags`);
+      const { models: ollamaModels } = (await response.json()) as any;
+
+      models['ollama'] = ollamaModels.reduce((acc, model) => {
+        acc[model.model] = new ChatOllama({
+          baseUrl: ollamaEndpoint,
+          model: model.model,
+          temperature: 0.7,
+        });
+        return acc;
+      }, {});
+
+      if (Object.keys(models['ollama']).length > 0) {
+        models['ollama']['embeddings'] = new OllamaEmbeddings({
+          baseUrl: ollamaEndpoint,
+          model: models['ollama'][Object.keys(models['ollama'])[0]].model,
+        });
+      }
+    } catch (err) {
+      console.log(err);
+    }
+  }
+
+  return models;
+};

@@ -1,18 +1,32 @@
 import { WebSocket } from 'ws';
 import { handleMessage } from './messageHandler';
-import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
-import { getOpenaiApiKey } from '../config';
+import { getChatModel, getChatModelProvider } from '../config';
+import { getAvailableProviders } from '../lib/providers';
+import { BaseChatModel } from '@langchain/core/language_models/chat_models';
+import type { Embeddings } from '@langchain/core/embeddings';

-export const handleConnection = (ws: WebSocket) => {
-  const llm = new ChatOpenAI({
-    temperature: 0.7,
-    openAIApiKey: getOpenaiApiKey(),
-  });
-
-  const embeddings = new OpenAIEmbeddings({
-    openAIApiKey: getOpenaiApiKey(),
-    modelName: 'text-embedding-3-large',
-  });
+export const handleConnection = async (ws: WebSocket) => {
+  const models = await getAvailableProviders();
+  const provider = getChatModelProvider();
+  const chatModel = getChatModel();
+
+  let llm: BaseChatModel | undefined;
+  let embeddings: Embeddings | undefined;
+
+  if (models[provider] && models[provider][chatModel]) {
+    llm = models[provider][chatModel] as BaseChatModel | undefined;
+    embeddings = models[provider].embeddings as Embeddings | undefined;
+  }
+
+  if (!llm || !embeddings) {
+    ws.send(
+      JSON.stringify({
+        type: 'error',
+        data: 'Invalid LLM or embeddings model selected',
+      }),
+    );
+    ws.close();
+  }

   ws.on(
     'message',
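The error frame above establishes a small wire contract with the frontend; a hypothetical client-side handler (endpoint and names assumed, not part of this commit) might look like:

```ts
// Hypothetical client handling of the error frame sent by handleConnection.
const ws = new WebSocket('ws://localhost:3001'); // assumed backend WS endpoint

ws.addEventListener('message', (event) => {
  const message = JSON.parse(event.data.toString());
  if (message.type === 'error') {
    console.error(message.data); // e.g. "Invalid LLM or embeddings model selected"
  }
});
```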

@@ -174,13 +174,25 @@ const ChatWindow = () => {
         )}
       </div>
     ) : (
-      <div className='flex flex-row items-center justify-center min-h-screen'>
-        <svg aria-hidden="true" className="w-8 h-8 text-[#202020] animate-spin fill-[#ffffff3b]" viewBox="0 0 100 101" fill="none" xmlns="http://www.w3.org/2000/svg">
-          <path d="M100 50.5908C100 78.2051 77.6142 100.591 50 100.591C22.3858 100.591 0 78.2051 0 50.5908C0 22.9766 22.3858 0.59082 50 0.59082C77.6142 0.59082 100 22.9766 100 50.5908ZM9.08144 50.5908C9.08144 73.1895 27.4013 91.5094 50 91.5094C72.5987 91.5094 90.9186 73.1895 90.9186 50.5908C90.9186 27.9921 72.5987 9.67226 50 9.67226C27.4013 9.67226 9.08144 27.9921 9.08144 50.5908Z" fill="currentColor" />
-          <path d="M93.9676 39.0409C96.393 38.4038 97.8624 35.9116 97.0079 33.5539C95.2932 28.8227 92.871 24.3692 89.8167 20.348C85.8452 15.1192 80.8826 10.7238 75.2124 7.41289C69.5422 4.10194 63.2754 1.94025 56.7698 1.05124C51.7666 0.367541 46.6976 0.446843 41.7345 1.27873C39.2613 1.69328 37.813 4.19778 38.4501 6.62326C39.0873 9.04874 41.5694 10.4717 44.0505 10.1071C47.8511 9.54855 51.7191 9.52689 55.5402 10.0491C60.8642 10.7766 65.9928 12.5457 70.6331 15.2552C75.2735 17.9648 79.3347 21.5619 82.5849 25.841C84.9175 28.9121 86.7997 32.2913 88.1811 35.8758C89.083 38.2158 91.5421 39.6781 93.9676 39.0409Z" fill="currentFill" />
+      <div className="flex flex-row items-center justify-center min-h-screen">
+        <svg
+          aria-hidden="true"
+          className="w-8 h-8 text-[#202020] animate-spin fill-[#ffffff3b]"
+          viewBox="0 0 100 101"
+          fill="none"
+          xmlns="http://www.w3.org/2000/svg"
+        >
+          <path
+            d="M100 50.5908C100 78.2051 77.6142 100.591 50 100.591C22.3858 100.591 0 78.2051 0 50.5908C0 22.9766 22.3858 0.59082 50 0.59082C77.6142 0.59082 100 22.9766 100 50.5908ZM9.08144 50.5908C9.08144 73.1895 27.4013 91.5094 50 91.5094C72.5987 91.5094 90.9186 73.1895 90.9186 50.5908C90.9186 27.9921 72.5987 9.67226 50 9.67226C27.4013 9.67226 9.08144 27.9921 9.08144 50.5908Z"
+            fill="currentColor"
+          />
+          <path
+            d="M93.9676 39.0409C96.393 38.4038 97.8624 35.9116 97.0079 33.5539C95.2932 28.8227 92.871 24.3692 89.8167 20.348C85.8452 15.1192 80.8826 10.7238 75.2124 7.41289C69.5422 4.10194 63.2754 1.94025 56.7698 1.05124C51.7666 0.367541 46.6976 0.446843 41.7345 1.27873C39.2613 1.69328 37.813 4.19778 38.4501 6.62326C39.0873 9.04874 41.5694 10.4717 44.0505 10.1071C47.8511 9.54855 51.7191 9.52689 55.5402 10.0491C60.8642 10.7766 65.9928 12.5457 70.6331 15.2552C75.2735 17.9648 79.3347 21.5619 82.5849 25.841C84.9175 28.9121 86.7997 32.2913 88.1811 35.8758C89.083 38.2158 91.5421 39.6781 93.9676 39.0409Z"
+            fill="currentFill"
+          />
         </svg>
       </div>
-    )
+    );
 };

 export default ChatWindow;