Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-25 05:48:47 +01:00
server : update readme to document the new /health endpoint (#4866)
* added /health endpoint to the server
* added comments on the additional /health endpoint
* Better handling of server state

  While the model is being loaded, the server state is `LOADING_MODEL`. If model loading fails, the server state becomes `ERROR`; otherwise it becomes `READY`. The `/health` endpoint now reports more granular messages according to the `server_state` value.

* initialized server_state
* fixed a typo
* starting http server before initializing the model
* Update server.cpp
* Update server.cpp
* fixes
* fixes
* fixes
* made ServerState atomic and turned two-line spaces into one-line
* updated `server` readme to document the `/health` endpoint too
This commit is contained in: parent `5c1980d8d4`, commit `7a9f75c38b`
@@ -110,6 +110,10 @@ node index.js

## API Endpoints

- **GET** `/health`: Returns the current state of the server:
  - `{"status": "loading model"}` if the model is still being loaded.
  - `{"status": "error"}` if the model failed to load.
  - `{"status": "ok"}` if the model is successfully loaded and the server is ready for further requests mentioned below.
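A client can gate its requests on these payloads. Below is a minimal client-side sketch; the helper name `server_state` is hypothetical (not part of llama.cpp) and it only parses the JSON bodies documented above, without making an HTTP call:

```python
import json

# Hypothetical helper: map a /health response body to one of the
# three documented server states.
def server_state(body: str) -> str:
    status = json.loads(body).get("status")
    if status == "loading model":
        return "loading"
    if status == "error":
        return "error"
    if status == "ok":
        return "ready"
    raise ValueError(f"unexpected /health status: {status!r}")

print(server_state('{"status": "loading model"}'))  # -> loading
print(server_state('{"status": "ok"}'))             # -> ready
```

A wrapper like this is useful for polling `/health` at startup and refusing to send `/completion` requests until the state is `ready`.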
- **POST** `/completion`: Given a `prompt`, it returns the predicted completion.
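A sketch of how a client might build such a request using only the standard library. The host, port, and the `n_predict` token-limit parameter are assumptions here; adjust them to your server's flags:

```python
import json
import urllib.request

# Hypothetical client-side helper: build a POST /completion request.
# "n_predict" caps the number of generated tokens; localhost:8080 is an
# assumed address, adjust to your --host/--port settings.
def completion_request(prompt: str, n_predict: int = 128) -> urllib.request.Request:
    body = json.dumps({"prompt": prompt, "n_predict": n_predict}).encode()
    return urllib.request.Request(
        "http://localhost:8080/completion",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = completion_request("Building a website can be done in 10 simple steps:")
# urllib.request.urlopen(req) would then return the predicted completion.
```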