
text-generation-webui

A Gradio web UI for running large language models locally. Supports gpt-j-6B, gpt-neox-20b, opt, galactica, and many others.

webui screenshot

Installation

Create a conda environment:

conda create -n textgen
conda activate textgen

Install the appropriate PyTorch build for your GPU. For NVIDIA GPUs, this should work:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

Install the requirements:

pip install -r requirements.txt

Downloading models

Models should be placed under models/model-name.
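With that layout, each model is just a subdirectory of models/. A minimal sketch of how a script could enumerate the installed models (the function name is illustrative, not the project's API):

```python
from pathlib import Path

def list_models(models_dir="models"):
    """Return the names of the model subdirectories under models/."""
    base = Path(models_dir)
    if not base.is_dir():
        return []
    # Each model lives in its own folder, e.g. models/gpt-j-6B.
    return sorted(p.name for p in base.iterdir() if p.is_dir())
```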

Hugging Face

Hugging Face is the main place to download models. For instance, the files for the model gpt-j-6B can be found on its Hugging Face model page.

The files that you need to download and put under models/gpt-j-6B are the .json, .txt, and pytorch*.bin files. The remaining files are not necessary.
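As a sketch, the filtering rule above can be expressed as a small helper (the function name and pattern list are illustrative, not part of the project):

```python
import fnmatch

# Patterns for the files the web UI needs, per the instructions above:
# the .json, .txt, and pytorch*.bin files.
NEEDED_PATTERNS = ["*.json", "*.txt", "pytorch*.bin"]

def needed_files(filenames):
    """Keep only the files worth downloading for a model."""
    return [
        name for name in filenames
        if any(fnmatch.fnmatch(name, pat) for pat in NEEDED_PATTERNS)
    ]
```

This skips the framework-specific weights you don't need (e.g. flax_model.msgpack or tf_model.h5), which are often the largest files in a repository.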

GPT-4chan

GPT-4chan has been removed from Hugging Face, so you need to download it elsewhere.

Starting the webui

conda activate textgen
python server.py

Then browse to http://localhost:7860/?__theme=dark
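The ?__theme=dark query string turns on Gradio's built-in dark theme. If you want to build such URLs programmatically, a small stdlib-only sketch (host and port here simply mirror the defaults above):

```python
from urllib.parse import urlencode, urlunsplit

def webui_url(host="localhost", port=7860, dark=True):
    """Build the local web UI URL, optionally with Gradio's dark theme."""
    query = urlencode({"__theme": "dark"}) if dark else ""
    return urlunsplit(("http", f"{host}:{port}", "/", query, ""))
```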