Mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2024-11-26 09:40:20 +01:00)
Update README.md
commit ed1d2c0d38
parent 82c50a09b2
README.md (12 lines changed)
@@ -58,14 +58,14 @@ After these steps, you should be able to start the web UI, but first you need to
 
 ## Downloading models
 
-Models should be placed under `models/model-name`. For instance, `models/gpt-j-6B` for [gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main).
+Models should be placed under `models/model-name`. For instance, `models/gpt-j-6B` for [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main).
 
 #### Hugging Face
 
 [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads) is the main place to download models. These are some noteworthy examples:
 
-* [gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)
-* [gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b/tree/main)
+* [GPT-J 6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)
+* [GPT-Neo](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads&search=eleutherai+%2F+gpt-neo)
 * [OPT](https://huggingface.co/models?search=facebook/opt)
 * [GALACTICA](https://huggingface.co/models?search=facebook/galactica)
 * [\*-Erebus](https://huggingface.co/models?search=erebus)
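As a usage sketch based on the commands shown later in this diff: any of the models listed above can be fetched with the repository's download-model.py script, which places the files in their own subfolder of `models/`. The model name below (facebook/opt-1.3b) is only an illustrative example; substitute whichever Hugging Face model you want:

```
# Run from the repository root, where download-model.py lives.
# facebook/opt-1.3b is an illustrative example; any of the models
# listed above works the same way.
python download-model.py facebook/opt-1.3b

# The files land in a subfolder of models/, which is where the web UI looks for models.
ls models/
```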
@@ -93,14 +93,14 @@ The 32-bit version is only relevant if you intend to run the model in CPU mode.
 After downloading the model, follow these steps:
 
 1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
-2. Place GPT-J-6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
-3. Download GPT-J-6B under `models/gpt-j-6B`:
+2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
+3. Download GPT-J 6B under `models/gpt-j-6B`:
 
 ```
 python download-model.py EleutherAI/gpt-j-6B
 ```
 
-You don't really need all of GPT-J's files, just the tokenizer files, but you might as well download the whole thing. Those files will be automatically detected when you attempt to load GPT-4chan.
+You don't really need all of GPT-J 6B's files, just the tokenizer files, but you might as well download the whole thing. Those files will be automatically detected when you attempt to load GPT-4chan.
 
 #### Converting to pytorch (optional)
 
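As a sketch of the layout that results from the three numbered steps above (file names other than config.json are illustrative; the actual file set depends on which GPT-4chan release you downloaded):

```
models/
├── gpt4chan_model_float16/    # GPT-4chan weights (step 1)
│   ├── config.json            # copied from GPT-J 6B (step 2)
│   └── ...                    # the GPT-4chan files you downloaded
└── gpt-j-6B/                  # full GPT-J 6B download (step 3)
    └── ...                    # only the tokenizer files are strictly needed
```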