1c641e6aac
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server
gitignore llama-server
* server: simplify nix package
* main: update refs -> llama
fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
* Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
## Overview

The `rpc-server` allows running a `ggml` backend on a remote host.
The RPC backend communicates with one or several instances of `rpc-server` and offloads computations to them.
This can be used for distributed LLM inference with `llama.cpp` in the following way:

```mermaid
flowchart TD
    rpcb---|TCP|srva
    rpcb---|TCP|srvb
    rpcb-.-|TCP|srvn
    subgraph hostn[Host N]
    srvn[rpc-server]-.-backend3["Backend (CUDA,Metal,etc.)"]
    end
    subgraph hostb[Host B]
    srvb[rpc-server]---backend2["Backend (CUDA,Metal,etc.)"]
    end
    subgraph hosta[Host A]
    srva[rpc-server]---backend["Backend (CUDA,Metal,etc.)"]
    end
    subgraph host[Main Host]
    ggml[llama.cpp]---rpcb[RPC backend]
    end
    style hostn stroke:#66,stroke-width:2px,stroke-dasharray: 5 5
```

Each host can run a different backend, e.g. one with CUDA and another with Metal.
You can also run multiple `rpc-server` instances on the same host, each with a different backend.

## Usage

On each host, build the corresponding backend with `cmake` and add `-DLLAMA_RPC=ON` to the build options.
For example, to build the CUDA backend with RPC support:

```bash
mkdir build-rpc-cuda
cd build-rpc-cuda
cmake .. -DLLAMA_CUDA=ON -DLLAMA_RPC=ON
cmake --build . --config Release
```

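The same pattern should apply to other backends. As a rough sketch, a Metal host might be built like this; the `-DLLAMA_METAL=ON` flag is assumed here by analogy with the CUDA flag above, so verify it against your `llama.cpp` version:

```bash
# Sketch only: building the Metal backend with RPC support on a macOS host.
# -DLLAMA_METAL=ON is assumed by analogy with -DLLAMA_CUDA=ON above.
mkdir build-rpc-metal
cd build-rpc-metal
cmake .. -DLLAMA_METAL=ON -DLLAMA_RPC=ON
cmake --build . --config Release
```
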
Then, start the `rpc-server` with the backend:

```bash
$ bin/rpc-server -p 50052
create_backend: using CUDA backend
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:   no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA T1200 Laptop GPU, compute capability 7.5, VMM: yes
Starting RPC server on 0.0.0.0:50052
```

When using the CUDA backend, you can specify the device with the `CUDA_VISIBLE_DEVICES` environment variable, e.g.:

```bash
$ CUDA_VISIBLE_DEVICES=0 bin/rpc-server -p 50052
```

This way you can run multiple `rpc-server` instances on the same host, each with a different CUDA device.

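As an illustrative sketch (the ports and device indices below are arbitrary), two instances pinned to different GPUs could be started like this:

```bash
# Sketch: one rpc-server per GPU, each listening on its own port.
CUDA_VISIBLE_DEVICES=0 bin/rpc-server -p 50052 &
CUDA_VISIBLE_DEVICES=1 bin/rpc-server -p 50053 &
```
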
On the main host, build `llama.cpp` with only `-DLLAMA_RPC=ON`:

```bash
mkdir build-rpc
cd build-rpc
cmake .. -DLLAMA_RPC=ON
cmake --build . --config Release
```

Finally, use the `--rpc` option to specify the host and port of each `rpc-server`:

```bash
$ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99
```

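Multiple servers are listed comma-separated. For instance, pointing at the two hypothetical local instances sketched above would look roughly like this:

```bash
# Sketch: reusing the two local rpc-server instances from the earlier example.
$ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" \
    -n 64 --rpc 127.0.0.1:50052,127.0.0.1:50053 -ngl 99
```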