Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-23 00:18:20 +01:00
Add more detailed installation instructions

parent aeff0d4cc1 · commit 886c12dd77 · README.md
@@ -21,19 +21,29 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.

## Installation

1. You need to have the conda environment manager installed on your system. If you don't have it already, get it here: [miniconda download](https://docs.conda.io/en/latest/miniconda.html).

2. Then open a terminal window and create a conda environment:

    conda create -n textgen
    conda activate textgen

3. Install the appropriate pytorch. For NVIDIA GPUs, this should work:

    conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

For AMD GPUs, you need the ROCm version of pytorch. For running exclusively on the CPU, you just need the stock pytorch, and this should probably work:

    conda install pytorch torchvision torchaudio -c pytorch

4. Clone or download this repository, and then `cd` into its directory from your terminal window.

5. Install the required Python libraries:

    pip install -r requirements.txt

After these steps, you should be able to start the webui, but first you need to download a model to load.

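As a quick sanity check, the following snippet (hypothetical, not part of the repo) can be run inside the activated `textgen` environment to confirm that pytorch is installed and whether it can see a GPU:

```python
# Hypothetical sanity check -- run inside the activated "textgen" environment.
def torch_status() -> str:
    try:
        import torch  # present only if step 3 succeeded
        return f"torch {torch.__version__} | CUDA available: {torch.cuda.is_available()}"
    except ImportError:
        return "pytorch not installed"

print(torch_status())
```

If CUDA shows as unavailable on an NVIDIA machine, the `pytorch-cuda` install in step 3 most likely did not take effect.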

## Downloading models

Models should be placed under `models/model-name`. For instance, `models/gpt-j-6B` for [gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main).

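The webui identifies a model by its folder name under `models/`. As an illustration only (this helper is hypothetical, not part of the repo), the mapping from a Hugging Face repo id to the expected local folder is just the last path component:

```python
from pathlib import Path

# Hypothetical illustration: where the webui expects the files for a given
# Hugging Face repo id ("EleutherAI/gpt-j-6B" -> models/gpt-j-6B).
def local_model_dir(repo_id: str, root: str = "models") -> Path:
    return Path(root) / repo_id.split("/")[-1]

print(local_model_dir("EleutherAI/gpt-j-6B"))  # models/gpt-j-6B on Linux/macOS
```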

@@ -75,6 +85,8 @@ Then follow these steps to install:

```
python download-model.py EleutherAI/gpt-j-6B
```

You don't really need all of GPT-J's files, just the tokenizer files, but you might as well download the whole thing. Those files will be automatically detected when you attempt to load gpt4chan.

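To make the "tokenizer files only" remark concrete, here is a hypothetical filter (the file names are the usual tokenizer artifacts for GPT-2-style tokenizers; the helper itself is not part of the repo):

```python
# Hypothetical helper: pick out just the tokenizer files from a model
# repo's file listing. The file names below are the typical tokenizer
# artifacts, assumed for illustration.
TOKENIZER_FILES = {
    "tokenizer.json",
    "tokenizer_config.json",
    "vocab.json",
    "merges.txt",
    "special_tokens_map.json",
}

def tokenizer_only(filenames):
    return [f for f in filenames if f in TOKENIZER_FILES]

files = ["config.json", "pytorch_model.bin", "tokenizer.json", "merges.txt"]
print(tokenizer_only(files))  # ['tokenizer.json', 'merges.txt']
```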

#### Converting to pytorch (optional)

The script `convert-to-torch.py` allows you to convert models to `.pt` format, which is about 10x faster to load: