diff --git a/README.md b/README.md
index 11335d1..7bcb2b1 100644
--- a/README.md
+++ b/README.md
@@ -57,7 +57,7 @@ There are mainly 2 ways of installing Perplexica - With Docker, Without Docker.
 4. Rename the `sample.config.toml` file to `config.toml`. For Docker setups, you need only fill in the following fields:
 
-   - `CHAT_MODEL`: The name of the LLM to use. Example: `llama2` for Ollama users & `gpt-3.5-turbo` for OpenAI users.
+   - `CHAT_MODEL`: The name of the LLM to use. Like `llama2` (using Ollama), `gpt-3.5-turbo` (using OpenAI), etc.
   - `CHAT_MODEL_PROVIDER`: The chat model provider, either `openai` or `ollama`. Depending upon which provider you use you would have to fill in the following fields:
     - `OPENAI`: Your OpenAI API key. **You only need to fill this if you wish to use OpenAI's models.**
diff --git a/ui/components/MessageInputActions.tsx b/ui/components/MessageInputActions.tsx
index 8d7deea..9c00c4d 100644
--- a/ui/components/MessageInputActions.tsx
+++ b/ui/components/MessageInputActions.tsx
@@ -109,7 +109,7 @@ export const Focus = ({
         leaveTo="opacity-0 translate-y-1"
       >
-
+
           {focusModes.map((mode, i) => (
              onClick={() => setFocusMode(mode.key)}
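For reference, the README hunk above lists the fields a Docker user fills in after renaming `sample.config.toml` to `config.toml`. The sketch below only illustrates those field names with example values; the actual layout of `sample.config.toml` (including any section grouping) may differ, so copy the real sample file rather than this snippet.

```toml
# Illustrative config.toml sketch -- field names taken from the README hunk above.
# The real sample.config.toml may group these keys differently; defer to it.

CHAT_MODEL = "llama2"            # e.g. "llama2" with Ollama, "gpt-3.5-turbo" with OpenAI
CHAT_MODEL_PROVIDER = "ollama"   # either "openai" or "ollama"
OPENAI = ""                      # OpenAI API key; only needed when using OpenAI's models
```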