## Using llama.cpp in the web UI
#### Pre-converted models
Place the model in the `models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`.
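
As a sketch (the file name below is hypothetical; any name containing `ggml` and ending in `.bin` will be picked up):

```
# Copy a pre-converted ggml model into the web UI's models folder.
# "llama-7b-ggml-q4_0.bin" is an example name, not a required one.
cp ~/Downloads/ggml-model-q4_0.bin models/llama-7b-ggml-q4_0.bin
```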
#### Convert LLaMA yourself
Follow the instructions in the llama.cpp README to generate the `ggml-model-q4_0.bin` file: https://github.com/ggerganov/llama.cpp#usage
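
Paraphrasing that README as of mid-2023 (script names and arguments have changed over time, so treat this as a sketch and defer to the link above):

```
# From inside the llama.cpp repository, with the original LLaMA
# weights placed in ./models/7B/:

# convert the 7B weights to ggml FP16 format
python3 convert.py models/7B/

# quantize the result to 4 bits (q4_0)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
```

The resulting `ggml-model-q4_0.bin` file can then be placed in the `models` folder as described above.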
## Performance
This was the performance of llama-7b int4 on my i5-12400F:
> Output generated in 33.07 seconds (6.05 tokens/s, 200 tokens, context 17)
You can change the number of threads with `--threads N`.
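
For example, assuming the standard `server.py` entry point and the example model name from above:

```
# Launch the web UI with a ggml model, pinning llama.cpp to 8 threads.
python server.py --model llama-7b-ggml-q4_0.bin --threads 8
```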