diff --git a/README.md b/README.md
index 1b71b33e..40fb62a4 100644
--- a/README.md
+++ b/README.md
@@ -15,24 +15,23 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
 * Chat mode for conversation and role-playing
 * Instruct mode compatible with various formats, including Alpaca, Vicuna, Open Assistant, Dolly, Koala, ChatGLM, MOSS, RWKV-Raven, Galactica, StableLM, WizardLM, Baize, MPT, and INCITE
 * [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
-* Nice HTML output for GPT-4chan
 * Markdown output for [GALACTICA](https://github.com/paperswithcode/galai), including LaTeX rendering
+* Nice HTML output for GPT-4chan
 * [Custom chat characters](docs/Chat-mode.md)
 * Advanced chat features (send images, get audio responses with TTS)
 * Very efficient text streaming
 * Parameter presets
+* [LLaMA model](docs/LLaMA-model.md)
+* [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md)
+* [LoRA (loading and training)](docs/Using-LoRAs.md)
+* [llama.cpp](docs/llama.cpp-models.md)
+* [RWKV model](docs/RWKV-model.md)
 * 8-bit mode
 * Layers splitting across GPU(s), CPU, and disk
 * CPU mode
 * [FlexGen](docs/FlexGen.md)
 * [DeepSpeed ZeRO-3](docs/DeepSpeed.md)
 * API [with](https://github.com/oobabooga/text-generation-webui/blob/main/api-example-stream.py) streaming and [without](https://github.com/oobabooga/text-generation-webui/blob/main/api-example.py) streaming
-* [LLaMA model](docs/LLaMA-model.md)
-* [4-bit GPTQ mode](docs/GPTQ-models-(4-bit-mode).md)
-* [llama.cpp](docs/llama.cpp-models.md)
-* [RWKV model](docs/RWKV-model.md)
-* [LoRA (loading and training)](docs/Using-LoRAs.md)
-* Softprompts
 * [Extensions](docs/Extensions.md) - see the [user extensions list](https://github.com/oobabooga/text-generation-webui-extensions)
 
 ## Installation