Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-26 09:40:20 +01:00
Update README.md

parent e6959a5d9a
commit 0ff38c994e

README.md · 13 changed lines
@@ -15,24 +15,23 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 * Chat mode for conversation and role-playing
 * Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, MPT, and INCITE
 * [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
-* Nice HTML output for GPT-4chan
 * Markdown output for [GALACTICA](https://github.com/paperswithcode/galai), including LaTeX rendering
+* Nice HTML output for GPT-4chan
 * [Custom chat characters](docs/Chat-mode.md)
 * Advanced chat features (send images, get audio responses with TTS)
 * Very efficient text streaming
 * Parameter presets
+* [LLaMA model](docs/LLaMA-model.md)
+* [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md)
+* [LoRA (loading and training)](docs/Using-LoRAs.md)
+* [llama.cpp](docs/llama.cpp-models.md)
+* [RWKV model](docs/RWKV-model.md)
 * 8-bit mode
 * Layers splitting across GPU(s), CPU, and disk
 * CPU mode
 * [FlexGen](docs/FlexGen.md)
 * [DeepSpeed ZeRO-3](docs/DeepSpeed.md)
 * API [with](https://github.com/oobabooga/text-generation-webui/blob/main/api-example-stream.py) streaming and [without](https://github.com/oobabooga/text-generation-webui/blob/main/api-example.py) streaming
-* [LLaMA model](docs/LLaMA-model.md)
-* [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md)
-* [llama.cpp](docs/llama.cpp-models.md)
-* [RWKV model](docs/RWKV-model.md)
-* [LoRA (loading and training)](docs/Using-LoRAs.md)
-* Softprompts
 * [Extensions](docs/Extensions.md) - see the [user extensions list](https://github.com/oobabooga/text-generation-webui-extensions)
 
 ## Installation
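The API feature listed above links the example scripts shipped with the webui (api-example.py and api-example-stream.py). As a hedged illustration only: the endpoint path `/api/v1/generate`, the default port 5000, the request fields, and the `results[0]["text"]` response shape below are assumptions based on those scripts at the time, not guaranteed by this commit, and the helper names `build_payload`/`parse_response` are illustrative.

```python
# Hedged sketch of a blocking (non-streaming) call to the webui API.
# Endpoint path, port, and response shape are ASSUMPTIONS, not verified.
import json
import urllib.request

API_URL = "http://localhost:5000/api/v1/generate"  # assumed endpoint

def build_payload(prompt: str, max_new_tokens: int = 250) -> dict:
    # Minimal request body; the real script passes many more sampling knobs.
    return {"prompt": prompt, "max_new_tokens": max_new_tokens}

def parse_response(body: dict) -> str:
    # Assumed response shape: {"results": [{"text": "..."}]}
    return body["results"][0]["text"]

def generate(prompt: str, url: str = API_URL) -> str:
    # POST the JSON payload and return the generated continuation.
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_response(json.load(resp))
```

For token-by-token streaming, the linked api-example-stream.py used a separate streaming mechanism rather than this blocking endpoint.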