
# text-generation-webui

A Gradio web UI for running large language models locally. Supports gpt-j-6B, gpt-neox-20b, opt, galactica, and many others.

![webui screenshot](webui.png)

## Installation

```
conda env create -f environment.yml
```

## Downloading models

Models should be placed under `models/model-name`.
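For scripting around this layout, a minimal helper like the following (a sketch; `find_models` is hypothetical and not part of the repo) lists the model folders placed under `models/`:

```python
from pathlib import Path

def find_models(models_dir="models"):
    """Return the names of the model folders placed under models/."""
    root = Path(models_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir() if p.is_dir())
```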

### Hugging Face

Hugging Face is the main place to download models. For instance, the files for the model gpt-j-6B can be found on its model page there.

The files that you need to download and put under `models/gpt-j-6B` are the json, txt, and `pytorch*.bin` files. The remaining files are not necessary.
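As a sketch of that filter, the pattern list below selects only the needed files from a repository's file listing (the helper itself is hypothetical, not part of the repo; the patterns come from the paragraph above):

```python
import fnmatch

# Only these files are needed to run the model locally.
KEEP_PATTERNS = ["*.json", "*.txt", "pytorch*.bin"]

def files_to_download(all_files):
    """Filter a repo file listing down to the files worth downloading."""
    return [f for f in all_files
            if any(fnmatch.fnmatch(f, p) for p in KEEP_PATTERNS)]
```

For gpt-j-6B, this keeps files like `config.json`, `vocab.txt`, and `pytorch_model.bin` while skipping framework-specific extras such as `flax_model.msgpack`.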

### GPT-4chan

GPT-4chan has been removed from Hugging Face, so you need to download it elsewhere. You have two options:

## Starting the webui

```
conda activate textgen
python server.py
```

Then browse to `http://localhost:7860/?__theme=dark`.
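If you want to build that URL programmatically, e.g. to open it from a script once the server is up, a small stdlib-only sketch (the `webui_url` helper is an assumption, not part of the repo; the defaults match the address above):

```python
from urllib.parse import urlencode, urlunsplit

def webui_url(host="localhost", port=7860, theme="dark"):
    """Build the web UI URL, including Gradio's __theme query parameter."""
    query = urlencode({"__theme": theme})
    return urlunsplit(("http", f"{host}:{port}", "/", query, ""))
```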