mirror of https://github.com/oobabooga/text-generation-webui.git (synced 2024-11-22 08:07:56 +01:00)
Update README.md
This commit is contained in: commit e0583f0ec2 (parent 261684f92e)
@@ -39,7 +39,7 @@ One way to make this process about 10x faster is to convert the models to pytorch
The output model will be saved to `torch-dumps/model-name.pt`. This is the default way to load all models except for `gpt-neox-20b`, `opt-13b`, `OPT-13B-Erebus`, `gpt-j-6B`, and `flan-t5`. I don't remember why these models are exceptions.
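The caching rule described above can be sketched in Python. This is a minimal sketch: the helper names (`dump_path`, `uses_pt_cache`, `convert_to_pt`) and the use of `AutoModelForCausalLM` are illustrative assumptions, not the repository's actual code; only the `torch-dumps/model-name.pt` path and the list of exception models come from the README text.

```python
# Sketch, not the repo's actual loader: shows where a converted model would
# live and which models skip the .pt cache, per the README.
from pathlib import Path

# Models the README says are loaded directly rather than from torch-dumps/.
EXCEPTIONS = {"gpt-neox-20b", "opt-13b", "OPT-13B-Erebus", "gpt-j-6B", "flan-t5"}

def dump_path(model_name: str) -> Path:
    """Expected cache location for a converted model."""
    return Path("torch-dumps") / f"{model_name}.pt"

def uses_pt_cache(model_name: str) -> bool:
    """True when the webui would load the model from torch-dumps/."""
    return model_name not in EXCEPTIONS

def convert_to_pt(model_name: str) -> None:
    """Illustrative conversion step: load with transformers, save with torch.save.

    Assumes a causal LM and a models/ directory; adjust for other model types.
    """
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(Path("models") / model_name)
    torch.save(model, dump_path(model_name))
```

For example, `dump_path("opt-1.3b")` points at `torch-dumps/opt-1.3b.pt`, while `uses_pt_cache("gpt-j-6B")` is false because that model is on the exception list.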
-If I get enough ⭐s on this repository, I will make the process of loading models more transparent and straightforward.
+If I get enough ⭐s on this repository, I will make the process of loading models saner and more customizable.
## Starting the webui
@@ -47,3 +47,7 @@ If I get enough ⭐s on this repository, I will make the process of loading mode
python server.py
Then browse to `http://localhost:7860/?__theme=dark`
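The two steps above (run `python server.py`, then open the UI in a browser) can be combined in a small launcher. This is a sketch: `ui_url` and `launch` are hypothetical helper names, not part of the webui; only the port 7860 and the `?__theme=dark` query parameter come from the text.

```python
# Sketch of a launcher for the webui; helper names are illustrative.
import subprocess
import webbrowser

def ui_url(port: int = 7860, theme: str = "dark") -> str:
    """Build the local UI URL, appending the theme query parameter if set."""
    url = f"http://localhost:{port}/"
    if theme:
        url += f"?__theme={theme}"
    return url

def launch() -> subprocess.Popen:
    """Start server.py in the background, then open the UI in a browser."""
    proc = subprocess.Popen(["python", "server.py"])
    webbrowser.open(ui_url())
    return proc
```

With the defaults, `ui_url()` produces `http://localhost:7860/?__theme=dark`, matching the address given above.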
+## Contributing
+Pull requests are welcome.