mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-23 21:17:54 +01:00)
readme : server compile flag (#1874)
Explicitly include the server make instructions for C++ noobs like me ;)
This commit is contained in:
parent 37e257c48e
commit 9dda13e5e1
@@ -16,6 +16,10 @@ This example allows you to have a llama.cpp HTTP server to interact with from a web page.

To get started right away, run the following commands, making sure to use the correct path for the model you have:

#### Unix-based systems (Linux, macOS, etc.):

Make sure to build with the server option on:

```bash
LLAMA_BUILD_SERVER=1 make
```

Then run the server, pointing it at your model:

```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
```
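Once the server is up, you can talk to it from the command line as well as from a web page. A minimal sketch, assuming the server is listening on its default address (`http://localhost:8080`) and exposes the `/completion` endpoint described in this example's README; verify the endpoint and JSON fields against the README for your build:

```shell
# Hypothetical quick check against a locally running ./server instance.
# Assumes the default port (8080) and the /completion endpoint; adjust
# both if your build or invocation differs.
curl --request POST \
  --url http://localhost:8080/completion \
  --header "Content-Type: application/json" \
  --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 64}'
```

The response is a JSON object containing the generated text, so the same request works from browser `fetch()` code on the bundled web page.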