llama.cpp is the best backend in two important scenarios:
1) You don't have a GPU.
2) You want to run a model that doesn't fit entirely into your GPU's VRAM.
## Setting up the models
#### Pre-converted
Download the ggml model directly into your `text-generation-webui/models` folder. It's a single file; make sure its name contains `ggml` somewhere and ends in `.bin` so that the web UI recognizes it as a llama.cpp model.
The `q4_K_M` quantization is recommended as a good balance between file size and quality.
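As an example, the command below downloads a model file with `wget` (the Hugging Face URL and filename are placeholders; substitute the download link for the model you actually want):

```
cd text-generation-webui
# Download a single pre-converted ggml file into the models folder.
wget -P models https://huggingface.co/TheBloke/Llama-2-7B-GGML/resolve/main/llama-2-7b.ggmlv3.q4_K_M.bin
```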
#### Convert Llama yourself
Follow the instructions in the llama.cpp README to generate a ggml file yourself: https://github.com/ggerganov/llama.cpp#prepare-data--run
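At the time of writing, the conversion steps in that README look roughly like the following (script names and paths may change between llama.cpp versions, so treat this as a sketch and defer to the README):

```
# From inside the llama.cpp repository, with the original
# Llama weights placed under models/7B/:

# Convert the weights to ggml FP16 format.
python3 convert.py models/7B/

# Quantize the FP16 file to 4 bits with the q4_K_M method.
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_K_M.bin q4_K_M
```

Once the quantized `.bin` file is ready, move it into `text-generation-webui/models` as described above.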
## GPU acceleration
GPU acceleration is enabled with the `--n-gpu-layers` parameter, as shown in the example after this list.
* If you have enough VRAM, use a high number like `--n-gpu-layers 1000` to offload all layers to the GPU.
* Otherwise, start with a low number like `--n-gpu-layers 10` and gradually increase it until you run out of memory; then settle on the highest value that still worked.
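Putting it together, a launch command might look like this (the model filename is a placeholder for whatever ggml file sits in your `models` folder):

```
# Offload as many layers as possible; llama.cpp stops at the
# model's actual layer count, so 1000 simply means "all of them".
python server.py --model llama-7b.ggmlv3.q4_K_M.bin --n-gpu-layers 1000
```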