
text-generation-webui

A Gradio web UI for running large language models locally. Supports GPT-J-6B, GPT-NeoX-20B, OPT, GALACTICA, and many others.

Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.

webui screenshot

Features

  • Switch between different models using a dropdown menu.
  • Generate nice HTML output for gpt4chan.
  • Generate Markdown output for GALACTICA, including LaTeX support.
  • Notebook mode that resembles OpenAI's playground.
  • Chat mode for conversation and role playing.
  • Load 13b/20b models in 8-bit mode.
  • Load parameter presets from text files.
  • Option to use the CPU instead of the GPU for generation.

Installation

Create a conda environment:

conda create -n textgen
conda activate textgen

Install the appropriate version of PyTorch for your GPU. For NVIDIA GPUs, this should work:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
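Once installed, a quick sanity check (not part of the original instructions) confirms that PyTorch imports and reports whether a CUDA device is visible:

```python
import torch

# Print the installed version and whether a CUDA-capable GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())  # True on a working NVIDIA setup, False on CPU-only
```

If this prints False on a machine with an NVIDIA GPU, the CUDA build of PyTorch was likely not installed.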

Install the requirements:

pip install -r requirements.txt

Downloading models

Models should be placed under models/model-name. For instance, models/gpt-j-6B for gpt-j-6B.

Hugging Face

Hugging Face is the main place to download models. These are some of my favorites:

The files that you need to download are the json, txt, and pytorch*.bin files. The remaining files are not necessary.
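The filtering above can be expressed with simple filename patterns. A sketch (the pattern list mirrors the file types named in this README; the repo file names below are just examples of what a Hugging Face model repo typically contains):

```python
from fnmatch import fnmatch

# File types the webui needs, per the list above.
NEEDED_PATTERNS = ["*.json", "*.txt", "pytorch*.bin"]

def is_needed(filename):
    """Return True if the file matches one of the needed patterns."""
    return any(fnmatch(filename, pattern) for pattern in NEEDED_PATTERNS)

# Example repo contents: only the json/txt/pytorch*.bin files are kept.
repo_files = [
    "config.json", "tokenizer.json", "merges.txt",
    "pytorch_model.bin", "flax_model.msgpack", "tf_model.h5",
]
needed = [f for f in repo_files if is_needed(f)]
```

Here `needed` keeps the config, tokenizer, and PyTorch weight files while skipping the Flax and TensorFlow checkpoints.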

For your convenience, you can automatically download a model from HF using the script download-model.py. Its usage is very simple:

python download-model.py organization/model

For instance:

python download-model.py facebook/opt-1.3b

gpt4chan

gpt4chan has been removed from Hugging Face, so you need to download it elsewhere. You have two options:

You also need to put GPT-J-6B's config.json file in the same folder: config.json

Converting to pytorch

The script convert-to-torch.py allows you to convert models to .pt format, which is about 10x faster to load:

python convert-to-torch.py models/model-name

The output model will be saved to torch-dumps/model-name.pt. When you load a new model, the webui first looks for this .pt file; if it is not found, it loads the model as usual from models/model-name.
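The speedup comes from serializing the already-initialized model object with torch.save and reloading it in a single step. A minimal sketch of the idea, using a tiny stand-in module instead of a real language model (loading the real model via transformers is an assumption about what convert-to-torch.py does internally):

```python
import os
import torch
import torch.nn as nn

# Stand-in for a language model that would be loaded from models/model-name.
model = nn.Sequential(nn.Embedding(100, 16), nn.Linear(16, 100))

os.makedirs("torch-dumps", exist_ok=True)
torch.save(model, "torch-dumps/demo.pt")  # serialize the whole module

# One fast deserialization step; weights_only=False is needed on newer
# PyTorch versions to load a full module rather than a state dict.
reloaded = torch.load("torch-dumps/demo.pt", weights_only=False)
```

This skips re-initializing the model architecture and re-reading the original checkpoint format, which is where the load-time savings come from.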

Starting the webui

conda activate textgen
python server.py

Then browse to

http://localhost:7860/?__theme=dark

Optionally, you can use the following command-line flags:

--model model-name: Load this model by default.

--notebook: Launch the webui in notebook mode, where the output is written to the same text box as the input.

--chat: Launch the webui in chat mode.

--cpu: Use the CPU to generate text instead of the GPU.
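The flags above map naturally onto argparse. A sketch mirroring this README's flag list (server.py's actual parser may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, help="Load this model by default.")
parser.add_argument("--notebook", action="store_true", help="Launch in notebook mode.")
parser.add_argument("--chat", action="store_true", help="Launch in chat mode.")
parser.add_argument("--cpu", action="store_true", help="Generate text on the CPU.")

# Equivalent of: python server.py --chat --model opt-1.3b
args = parser.parse_args(["--chat", "--model", "opt-1.3b"])
```

Boolean flags like --chat and --cpu default to False and are switched on simply by being present on the command line.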

Presets

Inference settings presets can be created under presets/ as text files. These files are detected automatically at startup.
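The automatic detection described above can be sketched as a scan of the presets/ folder at startup (folder name and text-file convention are from this README; the example file contents are hypothetical generation parameters, not the project's actual preset format):

```python
from pathlib import Path

presets_dir = Path("presets")
presets_dir.mkdir(exist_ok=True)

# Hypothetical preset contents: common transformers generate() parameters.
(presets_dir / "Default.txt").write_text("do_sample=True,\ntop_p=0.9,\ntemperature=1.0,\n")

# Detect preset files automatically, keyed by filename without the extension.
presets = {p.stem: p.read_text() for p in presets_dir.glob("*.txt")}
```

Each file in presets/ then becomes one selectable preset in the UI, named after the file.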

Contributing

Pull requests are welcome.