Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 08:07:56 +01:00
extensions/openai: Fix error when preparing cache for embedding models (#3995)
This commit is contained in: parent 7a3ca2c68f, commit 9de2dfa887
@@ -45,7 +45,7 @@ OPENAI_API_BASE=http://0.0.0.0:5001/v1
If needed, replace 0.0.0.0 with the IP/port of your server.
-## Settings
+### Settings
To adjust your default settings, you can add the following to your `settings.yaml` file.
@@ -56,6 +56,13 @@ openai-sd_webui_url: http://127.0.0.1:7861
openai-debug: 1
```
If you've configured the environment variables, please note that settings from `settings.yaml` won't take effect. For instance, if you set `openai-port: 5002` in `settings.yaml` but `OPENEDAI_PORT=5001` in the environment variables, the extension will use `5001` as the port number.
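This precedence applies to the other options as well. A minimal sketch of the lookup order, using the port option from the example above (the `settings` dict here is a hypothetical stand-in for the values parsed from `settings.yaml`, not the extension's actual loader):

```python
import os

# Hypothetical stand-in for the values parsed from settings.yaml
settings = {"openai-port": 5002}

# Without the environment variable set, the settings.yaml value is used.
os.environ.pop("OPENEDAI_PORT", None)  # make sure it is unset for the demo
port = int(os.environ.get("OPENEDAI_PORT", settings.get("openai-port", 5001)))
print(port)  # 5002

# Once the environment variable is set, it takes precedence.
os.environ["OPENEDAI_PORT"] = "5001"
port = int(os.environ.get("OPENEDAI_PORT", settings.get("openai-port", 5001)))
print(port)  # 5001
```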
When using `cache_embedding_model.py` to preload the embedding model during Docker image building, consider the following:
- If you wish to use the default settings, leave the environment variables unset.
- If you intend to change the default embedding model, ensure that you configure the environment variable `OPENEDAI_EMBEDDING_MODEL` to the desired model. Avoid setting `openai-embedding_model` in `settings.yaml` because those settings only take effect after the server starts.
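The reason is visible in the script's lookup logic, which can be sketched as follows (the override value is a made-up example, not a recommendation):

```python
import os

# cache_embedding_model.py runs at image-build time, before settings.yaml is
# ever read, so only the environment variable can influence the cached model.
os.environ.pop("OPENEDAI_EMBEDDING_MODEL", None)  # unset: fall back to default
st_model = os.environ.get("OPENEDAI_EMBEDDING_MODEL", "all-mpnet-base-v2")
print(st_model)  # all-mpnet-base-v2 (the documented default)

# With the variable set, that model is what gets downloaded and cached.
os.environ["OPENEDAI_EMBEDDING_MODEL"] = "all-MiniLM-L6-v2"  # hypothetical choice
st_model = os.environ.get("OPENEDAI_EMBEDDING_MODEL", "all-mpnet-base-v2")
print(st_model)  # all-MiniLM-L6-v2
```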
### Models
This has been successfully tested with Alpaca, Koala, Vicuna, WizardLM, and their variants (e.g. gpt4-x-alpaca, GPT4all-snoozy, stable-vicuna, wizard-vicuna, etc.), and many others. Models that have been trained for **Instruction Following** work best. If you test with other models, please let me know how it goes. Less satisfying results (so far) from: RWKV-4-Raven, llama, mpt-7b-instruct/chat.
@@ -6,7 +6,6 @@
import os
import sentence_transformers
-from extensions.openai.script import params
-st_model = os.environ.get("OPENEDAI_EMBEDDING_MODEL", params.get('embedding_model', 'all-mpnet-base-v2'))
+st_model = os.environ.get("OPENEDAI_EMBEDDING_MODEL", "all-mpnet-base-v2")
model = sentence_transformers.SentenceTransformer(st_model)
|