#include "utils.hpp"

#include "arg.h"
#include "common.h"
#include "json-schema-to-grammar.h"
#include "llama.h"
#include "log.h"
#include "sampling.h"
#include "speculative.h"

// Change JSON_ASSERT from assert() to GGML_ASSERT:
#define JSON_ASSERT GGML_ASSERT
# include "json.hpp"
2024-08-16 17:19:05 +02:00
// mime type for sending response
# define MIMETYPE_JSON "application / json; charset=utf-8"

// auto generated files (update with ./deps.sh)
#include "index.html.hpp"
#include "loading.html.hpp"

#include <atomic>
#include <condition_variable>
#include <cstddef>
#include <cinttypes>
#include <deque>
#include <memory>
#include <mutex>
#include <signal.h>
#include <thread>
#include <unordered_map>
#include <unordered_set>

using json = nlohmann::ordered_json;
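
// NOTE: nlohmann::ordered_json keeps object keys in insertion order, so the fields of the JSON
// objects built below appear in the responses in the order in which they are listed in the code.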

enum stop_type {
    STOP_TYPE_NONE,
    STOP_TYPE_EOS,
    STOP_TYPE_WORD,
    STOP_TYPE_LIMIT,
};

// state diagram: https://github.com/ggerganov/llama.cpp/pull/9283
enum slot_state {
    SLOT_STATE_IDLE,
    SLOT_STATE_STARTED, // TODO: this state is only used for setting up the initial prompt processing; maybe merge it with launch_slot_with_task in the future
    SLOT_STATE_PROCESSING_PROMPT,
    SLOT_STATE_DONE_PROMPT,
    SLOT_STATE_GENERATING,
};
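
// Roughly, a slot cycles IDLE -> STARTED (task launched) -> PROCESSING_PROMPT -> DONE_PROMPT ->
// GENERATING -> back to IDLE; see the state diagram linked above for the exact transitions.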

enum server_state {
    SERVER_STATE_LOADING_MODEL, // Server is starting up, model not fully loaded yet
    SERVER_STATE_READY,         // Server is ready and model is loaded
};

enum server_task_type {
    SERVER_TASK_TYPE_COMPLETION,
    SERVER_TASK_TYPE_EMBEDDING,
    SERVER_TASK_TYPE_RERANK,
    SERVER_TASK_TYPE_INFILL,
    SERVER_TASK_TYPE_CANCEL,
    SERVER_TASK_TYPE_NEXT_RESPONSE,
    SERVER_TASK_TYPE_METRICS,
    SERVER_TASK_TYPE_SLOT_SAVE,
    SERVER_TASK_TYPE_SLOT_RESTORE,
    SERVER_TASK_TYPE_SLOT_ERASE,
    SERVER_TASK_TYPE_SET_LORA,
};

// https://community.openai.com/t/openai-chat-list-of-error-codes-and-types/357791/11
enum error_type {
    ERROR_TYPE_INVALID_REQUEST,
    ERROR_TYPE_AUTHENTICATION,
    ERROR_TYPE_SERVER,
    ERROR_TYPE_NOT_FOUND,
    ERROR_TYPE_PERMISSION,
    ERROR_TYPE_UNAVAILABLE,   // custom error
    ERROR_TYPE_NOT_SUPPORTED, // custom error
};

struct slot_params {
    bool stream       = true;
    bool cache_prompt = true; // remember the prompt to avoid reprocessing the entire prompt

    int32_t n_keep    =  0; // number of tokens to keep from initial prompt
    int32_t n_discard =  0; // number of tokens after n_keep that may be discarded when shifting context, 0 defaults to half
    int32_t n_predict = -1; // new tokens to predict
    int32_t n_indent  =  0; // minimum line indentation for the generated text in number of whitespace characters

    int64_t t_max_prompt_ms  = -1; // TODO: implement
    int64_t t_max_predict_ms = -1; // if positive, limit the generation phase to this time limit

    std::vector<std::string> antiprompt;

    bool timings_per_token = false;
    bool ignore_eos = false;

    struct common_params_sampling sampling;
    struct common_params_speculative speculative;

    // OAI-compat fields
    bool verbose = false;
    bool oaicompat = false;
    bool oaicompat_chat = true;
    std::string oaicompat_model;
    std::string oaicompat_cmpl_id;

    json to_json() const {
        std::vector<std::string> samplers;
        samplers.reserve(sampling.samplers.size());
        for (const auto & sampler : sampling.samplers) {
            samplers.emplace_back(common_sampler_type_to_str(sampler));
        }

        return json {
            {"n_predict",             n_predict}, // Server configured n_predict
            {"seed",                  sampling.seed},
            {"temperature",           sampling.temp},
            {"dynatemp_range",        sampling.dynatemp_range},
            {"dynatemp_exponent",     sampling.dynatemp_exponent},
            {"top_k",                 sampling.top_k},
            {"top_p",                 sampling.top_p},
            {"min_p",                 sampling.min_p},
            {"xtc_probability",       sampling.xtc_probability},
            {"xtc_threshold",         sampling.xtc_threshold},
            {"typical_p",             sampling.typ_p},
            {"repeat_last_n",         sampling.penalty_last_n},
            {"repeat_penalty",        sampling.penalty_repeat},
            {"presence_penalty",      sampling.penalty_present},
            {"frequency_penalty",     sampling.penalty_freq},
            {"dry_multiplier",        sampling.dry_multiplier},
            {"dry_base",              sampling.dry_base},
            {"dry_allowed_length",    sampling.dry_allowed_length},
            {"dry_penalty_last_n",    sampling.dry_penalty_last_n},
            {"dry_sequence_breakers", sampling.dry_sequence_breakers},
            {"mirostat",              sampling.mirostat},
            {"mirostat_tau",          sampling.mirostat_tau},
            {"mirostat_eta",          sampling.mirostat_eta},
            {"penalize_nl",           sampling.penalize_nl},
            {"stop",                  antiprompt},
            {"max_tokens",            n_predict}, // User configured n_predict
            {"n_keep",                n_keep},
            {"n_discard",             n_discard},
            {"ignore_eos",            sampling.ignore_eos},
            {"stream",                stream},
            {"logit_bias",            format_logit_bias(sampling.logit_bias)},
            {"n_probs",               sampling.n_probs},
            {"min_keep",              sampling.min_keep},
            {"grammar",               sampling.grammar},
            {"samplers",              samplers},
            {"speculative.n_max",     speculative.n_max},
            {"speculative.n_min",     speculative.n_min},
            {"speculative.p_min",     speculative.p_min},
            {"timings_per_token",     timings_per_token},
        };
    }
};
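
// Illustrative (not exhaustive) shape of slot_params::to_json(); actual values depend on the
// request and the server defaults:
//   {"n_predict": -1, "seed": ..., "temperature": ..., "top_k": ..., "top_p": ..., "stop": [],
//    "max_tokens": -1, "ignore_eos": false, "stream": true, "grammar": "", "samplers": [...],
//    "speculative.n_max": ..., "timings_per_token": false, ...}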

struct server_task {
    int id    = -1; // to be filled by server_queue
    int index = -1; // used when there are multiple prompts (batch request)

    server_task_type type;

    // used by SERVER_TASK_TYPE_CANCEL
    int id_target = -1;

    // used by SERVER_TASK_TYPE_INFERENCE
    slot_params params;
    llama_tokens prompt_tokens;
    int id_selected_slot = -1;

    // used by SERVER_TASK_TYPE_SLOT_SAVE, SERVER_TASK_TYPE_SLOT_RESTORE, SERVER_TASK_TYPE_SLOT_ERASE
    struct slot_action {
        int slot_id;
        std::string filename;
        std::string filepath;
    };
    slot_action slot_action;

    // used by SERVER_TASK_TYPE_METRICS
    bool metrics_reset_bucket = false;

    server_task(server_task_type type) : type(type) {}

    static slot_params params_from_json_cmpl(
            const llama_model * model,
            const common_params & params_base,
            const json & data) {
        slot_params params;

        // Sampling parameter defaults are loaded from the global server context (but individual requests can still override them)
        slot_params defaults;
        defaults.sampling    = params_base.sampling;
        defaults.speculative = params_base.speculative;

        // enabling this will output extra debug information in the HTTP responses from the server
        params.verbose = params_base.verbosity > 9;
        params.timings_per_token = json_value(data, "timings_per_token", false);

        params.stream           = json_value(data, "stream",       false);
        params.cache_prompt     = json_value(data, "cache_prompt", true);
        params.n_predict        = json_value(data, "n_predict",    json_value(data, "max_tokens", defaults.n_predict));
        params.n_indent         = json_value(data, "n_indent",     defaults.n_indent);
        params.n_keep           = json_value(data, "n_keep",       defaults.n_keep);
        params.n_discard        = json_value(data, "n_discard",    defaults.n_discard);
      //params.t_max_prompt_ms  = json_value(data, "t_max_prompt_ms",  defaults.t_max_prompt_ms); // TODO: implement
        params.t_max_predict_ms = json_value(data, "t_max_predict_ms", defaults.t_max_predict_ms);

        params.sampling.top_k              = json_value(data, "top_k",              defaults.sampling.top_k);
        params.sampling.top_p              = json_value(data, "top_p",              defaults.sampling.top_p);
        params.sampling.min_p              = json_value(data, "min_p",              defaults.sampling.min_p);
        params.sampling.xtc_probability    = json_value(data, "xtc_probability",    defaults.sampling.xtc_probability);
        params.sampling.xtc_threshold      = json_value(data, "xtc_threshold",      defaults.sampling.xtc_threshold);
        params.sampling.typ_p              = json_value(data, "typical_p",          defaults.sampling.typ_p);
        params.sampling.temp               = json_value(data, "temperature",        defaults.sampling.temp);
        params.sampling.dynatemp_range     = json_value(data, "dynatemp_range",     defaults.sampling.dynatemp_range);
        params.sampling.dynatemp_exponent  = json_value(data, "dynatemp_exponent",  defaults.sampling.dynatemp_exponent);
        params.sampling.penalty_last_n     = json_value(data, "repeat_last_n",      defaults.sampling.penalty_last_n);
        params.sampling.penalty_repeat     = json_value(data, "repeat_penalty",     defaults.sampling.penalty_repeat);
        params.sampling.penalty_freq       = json_value(data, "frequency_penalty",  defaults.sampling.penalty_freq);
        params.sampling.penalty_present    = json_value(data, "presence_penalty",   defaults.sampling.penalty_present);
        params.sampling.dry_multiplier     = json_value(data, "dry_multiplier",     defaults.sampling.dry_multiplier);
        params.sampling.dry_base           = json_value(data, "dry_base",           defaults.sampling.dry_base);
        params.sampling.dry_allowed_length = json_value(data, "dry_allowed_length", defaults.sampling.dry_allowed_length);
        params.sampling.dry_penalty_last_n = json_value(data, "dry_penalty_last_n", defaults.sampling.dry_penalty_last_n);
        params.sampling.mirostat           = json_value(data, "mirostat",           defaults.sampling.mirostat);
        params.sampling.mirostat_tau       = json_value(data, "mirostat_tau",       defaults.sampling.mirostat_tau);
        params.sampling.mirostat_eta       = json_value(data, "mirostat_eta",       defaults.sampling.mirostat_eta);
        params.sampling.penalize_nl        = json_value(data, "penalize_nl",        defaults.sampling.penalize_nl);
        params.sampling.seed               = json_value(data, "seed",               defaults.sampling.seed);
        params.sampling.n_probs            = json_value(data, "n_probs",            defaults.sampling.n_probs);
        params.sampling.min_keep           = json_value(data, "min_keep",           defaults.sampling.min_keep);

        params.speculative.n_min = json_value(data, "speculative.n_min", defaults.speculative.n_min);
        params.speculative.n_max = json_value(data, "speculative.n_max", defaults.speculative.n_max);
        params.speculative.p_min = json_value(data, "speculative.p_min", defaults.speculative.p_min);

        params.speculative.n_min = std::min(params.speculative.n_max, params.speculative.n_min);
        params.speculative.n_min = std::max(params.speculative.n_min, 2);
        params.speculative.n_max = std::max(params.speculative.n_max, 0);

        if (params.sampling.dry_base < 1.0f) {
            params.sampling.dry_base = defaults.sampling.dry_base;
        }

        // sequence breakers for DRY
        {
            // Currently, this is not compatible with TextGen WebUI, Koboldcpp and SillyTavern format
            // Ref: https://github.com/oobabooga/text-generation-webui/blob/d1af7a41ade7bd3c3a463bfa640725edb818ebaf/extensions/openai/typing.py#L39
            if (data.contains("dry_sequence_breakers")) {
                params.sampling.dry_sequence_breakers = json_value(data, "dry_sequence_breakers", std::vector<std::string>());
                if (params.sampling.dry_sequence_breakers.empty()) {
                    throw std::runtime_error("Error: dry_sequence_breakers must be a non-empty array of strings");
                }
            }
        }

        // process "json_schema" and "grammar"
        if (data.contains("json_schema") && !data.at("json_schema").is_null() && data.contains("grammar") && !data.at("grammar").is_null()) {
            throw std::runtime_error("Either \"json_schema\" or \"grammar\" can be specified, but not both");
        }
        if (data.contains("json_schema") && !data.contains("grammar")) {
            try {
                auto schema = json_value(data, "json_schema", json::object());
                params.sampling.grammar = json_schema_to_grammar(schema);
            } catch (const std::exception & e) {
                throw std::runtime_error(std::string("\"json_schema\": ") + e.what());
            }
        } else {
            params.sampling.grammar = json_value(data, "grammar", defaults.sampling.grammar);
        }

        {
            params.sampling.logit_bias.clear();
            params.ignore_eos = json_value(data, "ignore_eos", false);

            const auto & logit_bias = data.find("logit_bias");
            if (logit_bias != data.end() && logit_bias->is_array()) {
                const int n_vocab = llama_n_vocab(model);
                for (const auto & el : *logit_bias) {
                    // TODO: we may want to throw errors here, in case "el" is incorrect
                    if (el.is_array() && el.size() == 2) {
                        float bias;
                        if (el[1].is_number()) {
                            bias = el[1].get<float>();
                        } else if (el[1].is_boolean() && !el[1].get<bool>()) {
                            bias = -INFINITY;
                        } else {
                            continue;
                        }

                        if (el[0].is_number_integer()) {
                            llama_token tok = el[0].get<llama_token>();
                            if (tok >= 0 && tok < n_vocab) {
                                params.sampling.logit_bias.push_back({tok, bias});
                            }
                        } else if (el[0].is_string()) {
                            auto toks = common_tokenize(model, el[0].get<std::string>(), false);
                            for (auto tok : toks) {
                                params.sampling.logit_bias.push_back({tok, bias});
                            }
                        }
                    }
                }
            }
        }

        {
            params.antiprompt.clear();

            const auto & stop = data.find("stop");
            if (stop != data.end() && stop->is_array()) {
                for (const auto & word : *stop) {
                    if (!word.empty()) {
                        params.antiprompt.push_back(word);
                    }
                }
            }
        }

        {
            const auto & samplers = data.find("samplers");
            if (samplers != data.end()) {
                if (samplers->is_array()) {
                    std::vector<std::string> sampler_names;
                    for (const auto & name : *samplers) {
                        if (name.is_string()) {
                            sampler_names.emplace_back(name);
                        }
                    }
                    params.sampling.samplers = common_sampler_types_from_names(sampler_names, false);
                } else if (samplers->is_string()) {
                    std::string sampler_string;
                    for (const auto & name : *samplers) {
                        sampler_string += name;
                    }
                    params.sampling.samplers = common_sampler_types_from_chars(sampler_string);
                }
            } else {
                params.sampling.samplers = defaults.sampling.samplers;
            }
        }

        std::string model_name = params_base.model_alias.empty() ? DEFAULT_OAICOMPAT_MODEL : params_base.model_alias;
        params.oaicompat_model = json_value(data, "model", model_name);

        return params;
    }

    // utility function
    static std::unordered_set<int> get_list_id(const std::vector<server_task> & tasks) {
        std::unordered_set<int> ids(tasks.size());
        for (size_t i = 0; i < tasks.size(); i++) {
            ids.insert(tasks[i].id);
        }
        return ids;
    }
};
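
// Illustrative request body accepted by params_from_json_cmpl() (field names are taken from the
// parsing code above; the values are examples only):
//   {
//     "stream": true, "cache_prompt": true, "n_predict": 128,   // "max_tokens" is an accepted alias for "n_predict"
//     "temperature": 0.7, "top_k": 40, "top_p": 0.95, "seed": 42,
//     "stop": ["\n\n"],                                          // array of stopping strings
//     "logit_bias": [[15043, 1.5], ["Hello", -0.5], [2, false]], // [token or string, bias or false]
//     "samplers": ["top_k", "top_p", "temperature"],             // also accepted as a single string of sampler chars
//     "json_schema": { "type": "object" }                        // mutually exclusive with "grammar"
//   }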

struct result_timings {
    int32_t prompt_n = -1;
    double prompt_ms;
    double prompt_per_token_ms;
    double prompt_per_second;

    int32_t predicted_n = -1;
    double predicted_ms;
    double predicted_per_token_ms;
    double predicted_per_second;

    json to_json() const {
        return {
            {"prompt_n",               prompt_n},
            {"prompt_ms",              prompt_ms},
            {"prompt_per_token_ms",    prompt_per_token_ms},
            {"prompt_per_second",      prompt_per_second},

            {"predicted_n",            predicted_n},
            {"predicted_ms",           predicted_ms},
            {"predicted_per_token_ms", predicted_per_token_ms},
            {"predicted_per_second",   predicted_per_second},
        };
    }
};

struct server_task_result {
    int id      = -1;
    int id_slot = -1;

    virtual bool is_error() {
        // only used by server_task_result_error
        return false;
    }
    virtual bool is_stop() {
        // only used by server_task_result_cmpl_*
        return false;
    }
    virtual int get_index() {
        return -1;
    }
    virtual json to_json() = 0;
    virtual ~server_task_result() = default;
};

// using unique_ptr for polymorphism of server_task_result
using server_task_result_ptr = std::unique_ptr<server_task_result>;
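
// Task results are passed around as server_task_result_ptr and serialized through the virtual
// to_json() method, so callers do not need to know the concrete result type.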

inline std::string stop_type_to_str(stop_type type) {
    switch (type) {
        case STOP_TYPE_EOS:   return "eos";
        case STOP_TYPE_WORD:  return "word";
        case STOP_TYPE_LIMIT: return "limit";
        default:              return "none";
    }
}

struct completion_token_output {
    llama_token tok;
    std::string text_to_send;

    struct token_prob {
        llama_token tok;
        std::string tok_str;
        float prob;
    };

    std::vector<token_prob> probs;

    json to_json() const {
        json probs_for_token = json::array();
        for (const auto & p : probs) {
            probs_for_token.push_back(json {
                {"tok_str", p.tok_str},
                {"prob",    p.prob},
            });
        }
        return probs_for_token;
    }

    static json probs_vector_to_json(const std::vector<completion_token_output> & probs) {
        json out = json::array();
        for (const auto & prob : probs) {
            const std::string tok_str = prob.text_to_send;
            out.push_back(json {
                {"content", tok_str},
                {"probs",   prob.to_json()},
            });
        }
        return out;
    }
};
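
// Illustrative shape of probs_vector_to_json() output (values are examples only):
//   [{"content": "Hello", "probs": [{"tok_str": "Hello", "prob": 0.91}, {"tok_str": "Hi", "prob": 0.04}]}, ...]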

struct server_task_result_cmpl_final : server_task_result {
    int index = 0;
    std::string content;
    bool stream;
    result_timings timings;
    std::string prompt;

    bool truncated;
    int32_t n_decoded;
    int32_t n_prompt_tokens;
    int32_t n_tokens_cached;
    int32_t has_new_line;
    std::string stopping_word;
    stop_type stop = STOP_TYPE_NONE;

    std::vector<completion_token_output> probs_output;

    slot_params generation_params;

    // OAI-compat fields
    bool verbose = false;
    bool oaicompat = false;
    bool oaicompat_chat = true; // TODO: support oaicompat for non-chat
    std::string oaicompat_model;
    std::string oaicompat_cmpl_id;

    virtual int get_index() override {
        return index;
    }

    virtual bool is_stop() override {
        return true; // in stream mode, final responses are considered stop
    }

    virtual json to_json() override {
        return oaicompat
            ? (stream ? to_json_oaicompat_chat_stream() : to_json_oaicompat_chat())
            : to_json_non_oaicompat();
    }

    json to_json_non_oaicompat() {
        json res = json {
            {"index",               index},
            {"content",             stream ? "" : content}, // in stream mode, content is already in last partial chunk
            {"id_slot",             id_slot},
            {"stop",                true},
            {"model",               oaicompat_model},
            {"tokens_predicted",    n_decoded},
            {"tokens_evaluated",    n_prompt_tokens},
            {"generation_settings", generation_params.to_json()},
            {"prompt",              prompt},
            {"has_new_line",        has_new_line},
            {"truncated",           truncated},
            {"stop_type",           stop_type_to_str(stop)},
            {"stopping_word",       stopping_word},
            {"tokens_cached",       n_tokens_cached},
            {"timings",             timings.to_json()},
        };
        if (!probs_output.empty()) {
            res["completion_probabilities"] = completion_token_output::probs_vector_to_json(probs_output);
        }
        return res;
    }

    json to_json_oaicompat_chat() {
        std::string finish_reason = "length";
        if (stop == STOP_TYPE_WORD || stop == STOP_TYPE_EOS) {
            finish_reason = "stop";
        }

        json choices = json::array({json {
            {"finish_reason", finish_reason},
            {"index", 0},
            {"message", json {
                {"content", content},
                {"role",    "assistant"}
            }}
        }});

        std::time_t t = std::time(0);

        json res = json {
            {"choices", choices},
            {"created", t},
            {"model",   oaicompat_model},
            {"object",  "chat.completion"},
            {"usage", json {
                {"completion_tokens", n_decoded},
                {"prompt_tokens",     n_prompt_tokens},
                {"total_tokens",      n_decoded + n_prompt_tokens}
            }},
            {"id", oaicompat_cmpl_id}
        };

        // extra fields for debugging purposes
        if (verbose) {
            res["__verbose"] = to_json_non_oaicompat();
        }

        if (timings.prompt_n >= 0) {
            res.push_back({"timings", timings.to_json()});
        }

        return res;
    }

    json to_json_oaicompat_chat_stream() {
        std::time_t t = std::time(0);
        std::string finish_reason = "length";
        if (stop == STOP_TYPE_WORD || stop == STOP_TYPE_EOS) {
            finish_reason = "stop";
        }

        json choices = json::array({json {
            {"finish_reason", finish_reason},
            {"index", 0},
            {"delta", json::object()}
        }});

        json ret = json {
            {"choices", choices},
            {"created", t},
            {"id",      oaicompat_cmpl_id},
            {"model",   oaicompat_model},
            {"object",  "chat.completion.chunk"},
            {"usage", json {
                {"completion_tokens", n_decoded},
                {"prompt_tokens",     n_prompt_tokens},
                {"total_tokens",      n_decoded + n_prompt_tokens},
            }},
        };

        if (timings.prompt_n >= 0) {
            ret.push_back({"timings", timings.to_json()});
        }

        return ret;
    }
};
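
// Illustrative shape of the OAI-compatible final response (to_json_oaicompat_chat(); values are examples only):
//   {"choices": [{"finish_reason": "stop", "index": 0, "message": {"content": "...", "role": "assistant"}}],
//    "created": 1700000000, "model": "...", "object": "chat.completion",
//    "usage": {"completion_tokens": 10, "prompt_tokens": 5, "total_tokens": 15}, "id": "..."}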

struct server_task_result_cmpl_partial : server_task_result {
    int index = 0;
    std::string content;

    int32_t n_decoded;
    int32_t n_prompt_tokens;

    std::vector<completion_token_output> probs_output;
    result_timings timings;

    // OAI-compat fields
    bool verbose = false;
    bool oaicompat = false;
    bool oaicompat_chat = true; // TODO: support oaicompat for non-chat
    std::string oaicompat_model;
    std::string oaicompat_cmpl_id;

    virtual int get_index() override {
        return index;
    }

    virtual bool is_stop() override {
        return false; // in stream mode, partial responses are not considered stop
    }

    virtual json to_json() override {
        return oaicompat ? to_json_oaicompat() : to_json_non_oaicompat();
    }

    json to_json_non_oaicompat() {
        // non-OAI-compat JSON
        json res = json {
            {"index",            index},
            {"content",          content},
            {"stop",             false},
            {"id_slot",          id_slot},
            {"tokens_predicted", n_decoded},
            {"tokens_evaluated", n_prompt_tokens},
        };
        // populate the timings object when needed (usually for the last response or with timings_per_token enabled)
        if (timings.prompt_n > 0) {
            res.push_back({"timings", timings.to_json()});
        }
        if (!probs_output.empty()) {
            res["completion_probabilities"] = completion_token_output::probs_vector_to_json(probs_output);
        }
        return res;
    }

    json to_json_oaicompat() {
        bool first = n_decoded == 0;
        std::time_t t = std::time(0);
        json choices;

        if (first) {
            if (content.empty()) {
                choices = json::array({json {
                    {"finish_reason", nullptr},
                    {"index", 0},
                    {"delta", json {{"role", "assistant"}}}
                }});
            } else {
                // We have to send this as two updates to conform to openai behavior
                json initial_ret = json {
                    {"choices", json::array({json {
                        {"finish_reason", nullptr},
                        {"index", 0},
                        {"delta", json {
                            {"role", "assistant"}
                        }}
                    }})},
                    {"created", t},
                    {"id",      oaicompat_cmpl_id},
                    {"model",   oaicompat_model},
                    {"object",  "chat.completion.chunk"}
                };

                json second_ret = json {
                    {"choices", json::array({json {
                        {"finish_reason", nullptr},
                        {"index", 0},
                        {"delta", json {
                            {"content", content}
                        }}
                    }})},
                    {"created", t},
                    {"id",      oaicompat_cmpl_id},
                    {"model",   oaicompat_model},
                    {"object",  "chat.completion.chunk"}
                };

                return std::vector<json>({initial_ret, second_ret});
            }
        } else {
            choices = json::array({json {
                {"finish_reason", nullptr},
                {"index", 0},
                {"delta", json {
                    {"content", content},
                }},
            }});
        }

        json ret = json {
            {"choices", choices},
            {"created", t},
            {"id",      oaicompat_cmpl_id},
            {"model",   oaicompat_model},
            {"object",  "chat.completion.chunk"}
        };

        if (timings.prompt_n >= 0) {
            ret.push_back({"timings", timings.to_json()});
        }

        return std::vector<json>({ret});
    }
};
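
// Illustrative shape of a streamed OAI-compatible chunk (to_json_oaicompat(); values are examples only):
//   {"choices": [{"finish_reason": null, "index": 0, "delta": {"content": "Hel"}}],
//    "created": 1700000000, "id": "...", "model": "...", "object": "chat.completion.chunk"}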

struct server_task_result_embd : server_task_result {
    int index = 0;
    std::vector<float> embedding;

    virtual int get_index() override {
        return index;
    }

    virtual json to_json() override {
        return json {
            {"index",     index},
            {"embedding", embedding},
        };
    }
};
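
// Serialized as e.g. {"index": 0, "embedding": [0.01, -0.02, ...]} (values are examples only)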

struct server_task_result_rerank : server_task_result {
    int index = 0;
    float score = -1e6;

    virtual int get_index() override {
        return index;
    }

    virtual json to_json() override {
        return json {
            {"index", index},
            {"score", score},
        };
    }
};

// this function may be used outside of server_task_result_error
static json format_error_response(const std::string & message, const enum error_type type) {
    std::string type_str;
    int code = 500;
    switch (type) {
        case ERROR_TYPE_INVALID_REQUEST:
            type_str = "invalid_request_error";
            code = 400;
            break;
        case ERROR_TYPE_AUTHENTICATION:
            type_str = "authentication_error";
            code = 401;
            break;
        case ERROR_TYPE_NOT_FOUND:
            type_str = "not_found_error";
            code = 404;
            break;
        case ERROR_TYPE_SERVER:
            type_str = "server_error";
            code = 500;
            break;
        case ERROR_TYPE_PERMISSION:
            type_str = "permission_error";
            code = 403;
            break;
        case ERROR_TYPE_NOT_SUPPORTED:
            type_str = "not_supported_error";
            code = 501;
            break;
        case ERROR_TYPE_UNAVAILABLE:
            type_str = "unavailable_error";
            code = 503;
            break;
    }
    return json {
        {"code",    code},
        {"message", message},
        {"type",    type_str},
    };
}
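
// Example (illustrative): format_error_response("model not found", ERROR_TYPE_NOT_FOUND) yields
//   {"code": 404, "message": "model not found", "type": "not_found_error"}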

struct server_task_result_error : server_task_result {
    int index = 0;
    error_type err_type = ERROR_TYPE_SERVER;
    std::string err_msg;

    virtual bool is_error() override {
        return true;
    }

    virtual json to_json() override {
        return format_error_response(err_msg, err_type);
    }
};

struct server_task_result_metrics : server_task_result {
    int n_idle_slots;
    int n_processing_slots;
    int n_tasks_deferred;
    int64_t t_start;

    int32_t kv_cache_tokens_count;
    int32_t kv_cache_used_cells;

    // TODO: somehow reuse server_metrics in the future, instead of duplicating the fields
    uint64_t n_prompt_tokens_processed_total = 0;
    uint64_t t_prompt_processing_total       = 0;
    uint64_t n_tokens_predicted_total        = 0;
    uint64_t t_tokens_generation_total       = 0;

    uint64_t n_prompt_tokens_processed = 0;
    uint64_t t_prompt_processing       = 0;

    uint64_t n_tokens_predicted  = 0;
    uint64_t t_tokens_generation = 0;

    uint64_t n_decode_total     = 0;
    uint64_t n_busy_slots_total = 0;

    // while we can also use std::vector<server_slot> this requires copying the slot object which can be quite messy
    // therefore, we use json to temporarily store the slot.to_json() result
    json slots_data = json::array();

    virtual json to_json() override {
        return json {
            {"idle",                            n_idle_slots},
            {"processing",                      n_processing_slots},
            {"deferred",                        n_tasks_deferred},
            {"t_start",                         t_start},

            {"n_prompt_tokens_processed_total", n_prompt_tokens_processed_total},
            {"t_tokens_generation_total",       t_tokens_generation_total},
            {"n_tokens_predicted_total",        n_tokens_predicted_total},
            {"t_prompt_processing_total",       t_prompt_processing_total},

            {"n_prompt_tokens_processed",       n_prompt_tokens_processed},
            {"t_prompt_processing",             t_prompt_processing},
            {"n_tokens_predicted",              n_tokens_predicted},
            {"t_tokens_generation",             t_tokens_generation},

            {"n_decode_total",                  n_decode_total},
            {"n_busy_slots_total",              n_busy_slots_total},

            {"kv_cache_tokens_count",           kv_cache_tokens_count},
            {"kv_cache_used_cells",             kv_cache_used_cells},

            {"slots",                           slots_data},
        };
    }
};

struct server_task_result_slot_save_load : server_task_result {
    std::string filename;
    bool is_save; // true = save, false = load

    size_t n_tokens;
    size_t n_bytes;
    double t_ms;

    virtual json to_json() override {
        if (is_save) {
            return json {
                {"id_slot",   id_slot},
                {"filename",  filename},
                {"n_saved",   n_tokens},
                {"n_written", n_bytes},
                {"timings", {
                    {"save_ms", t_ms}
                }},
            };
        } else {
            return json {
                {"id_slot",    id_slot},
                {"filename",   filename},
                {"n_restored", n_tokens},
                {"n_read",     n_bytes},
                {"timings", {
                    {"restore_ms", t_ms}
                }},
            };
        }
    }
};
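
// Illustrative save result: {"id_slot": 0, "filename": "slot.bin", "n_saved": 128, "n_written": 4096, "timings": {"save_ms": 12.3}}
// Illustrative load result: {"id_slot": 0, "filename": "slot.bin", "n_restored": 128, "n_read": 4096, "timings": {"restore_ms": 9.8}}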

struct server_task_result_slot_erase : server_task_result {
    size_t n_erased;

    virtual json to_json() override {
        return json {
            {"id_slot",  id_slot},
            {"n_erased", n_erased},
        };
    }
};

struct server_task_result_apply_lora : server_task_result {
    virtual json to_json() override {
        return json {{ "success", true }};
    }
};

struct server_slot {
    int id;
    int id_task = -1;

    // only used for completion/embedding/infill/rerank
    server_task_type task_type = SERVER_TASK_TYPE_COMPLETION;

    llama_batch batch_spec = {};

    llama_context * ctx = nullptr;
    llama_context * ctx_dft = nullptr;

    common_speculative * spec = nullptr;

    // the index relative to completion multi-task request
    size_t index = 0;

    struct slot_params params;

    slot_state state = SLOT_STATE_IDLE;

    // used to determine the slot that has been used the longest
    int64_t t_last_used = -1;

    // generation props
    int32_t n_ctx       = 0;  // context size per slot
    int32_t n_past      = 0;
    int32_t n_decoded   = 0;
    int32_t n_remaining = -1;
    int32_t i_batch     = -1;
    int32_t n_predict   = -1; // TODO: disambiguate from params.n_predict

    // n_prompt_tokens may not be equal to prompt_tokens.size(), because the prompt may be truncated
    int32_t n_prompt_tokens           = 0;
    int32_t n_prompt_tokens_processed = 0;

    // input prompt tokens
    llama_tokens prompt_tokens;

    size_t last_nl_pos = 0;

    std::string generated_text;
    llama_tokens cache_tokens;

    std::vector<completion_token_output> generated_token_probs;

    bool has_next_token = true;
    bool has_new_line   = false;
    bool truncated      = false;
    stop_type stop;

    std::string stopping_word;

    // sampling
    json json_schema;

    struct common_sampler * smpl = nullptr;

    llama_token sampled;

    // stats
    size_t n_sent_text        = 0; // number of sent text characters
    size_t n_sent_token_probs = 0;

    int64_t t_start_process_prompt;
    int64_t t_start_generation;

    double t_prompt_processing; // ms
    double t_token_generation;  // ms

    std::function<void(int)> callback_on_release;

    void reset() {
        SLT_DBG(*this, "%s", "\n");

        n_prompt_tokens    = 0;
        last_nl_pos        = 0;
        generated_text     = "";
        has_new_line       = false;
        truncated          = false;
        stop               = STOP_TYPE_NONE;
        stopping_word      = "";
        n_past             = 0;
        n_sent_text        = 0;
        n_sent_token_probs = 0;
        task_type          = SERVER_TASK_TYPE_COMPLETION;

        generated_token_probs.clear();
    }

    bool is_non_causal() const {
        return task_type == SERVER_TASK_TYPE_EMBEDDING || task_type == SERVER_TASK_TYPE_RERANK;
    }

    bool has_budget(const common_params & global_params) {
        if (params.n_predict == -1 && global_params.n_predict == -1) {
            return true; // limitless
        }

        n_remaining = -1;

        if (params.n_predict != -1) {
            n_remaining = params.n_predict - n_decoded;
        } else if (global_params.n_predict != -1) {
            n_remaining = global_params.n_predict - n_decoded;
        }

        return n_remaining > 0; // true while there is still budget left
    }
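
    // note: the per-request n_predict takes precedence over the server-wide default, and a value
    // of -1 on both sides means generation is unbounded (has_budget() then always returns true).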

    bool is_processing() const {
        return state != SLOT_STATE_IDLE;
    }

    bool can_speculate() const {
        return ctx_dft && params.speculative.n_max > 0 && params.cache_prompt;
    }

    void add_token(const completion_token_output & token) {
        if (!is_processing()) {
            SLT_WRN(*this, "%s", "slot is not processing\n");
            return;
        }
        generated_token_probs.push_back(token);
    }

    void release() {
        if (is_processing()) {
            SLT_INF(*this, "stop processing: n_past = %d, truncated = %d\n", n_past, truncated);

            t_last_used = ggml_time_us();
            t_token_generation = (ggml_time_us() - t_start_generation) / 1e3;
            state = SLOT_STATE_IDLE;
            callback_on_release(id);
        }
    }

    result_timings get_timings() const {
        result_timings timings;
        timings.prompt_n            = n_prompt_tokens_processed;
        timings.prompt_ms           = t_prompt_processing;
        timings.prompt_per_token_ms = t_prompt_processing / n_prompt_tokens_processed;
        timings.prompt_per_second   = 1e3 / t_prompt_processing * n_prompt_tokens_processed;

        timings.predicted_n             = n_decoded;
        timings.predicted_ms            = t_token_generation;
        timings.predicted_per_token_ms  = t_token_generation / n_decoded;
        timings.predicted_per_second    = 1e3 / t_token_generation * n_decoded;

        return timings;
    }
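
    // (get_timings() divides by n_prompt_tokens_processed and n_decoded directly, so the per-token
    // and per-second figures are only meaningful once at least one token has been processed)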

    size_t find_stopping_strings(const std::string & text, const size_t last_token_size, bool is_full_stop) {
        size_t stop_pos = std::string::npos;

        for (const std::string & word : params.antiprompt) {
            size_t pos;

            if (is_full_stop) {
                const size_t tmp      = word.size() + last_token_size;
                const size_t from_pos = text.size() > tmp ? text.size() - tmp : 0;

                pos = text.find(word, from_pos);
            } else {
                // otherwise, partial stop
                pos = find_partial_stop_string(word, text);
            }

            if (pos != std::string::npos && (stop_pos == std::string::npos || pos < stop_pos)) {
                if (is_full_stop) {
                    stop           = STOP_TYPE_WORD;
                    stopping_word  = word;
                    has_next_token = false;
                }
                stop_pos = pos;
            }
        }

        return stop_pos;
    }
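
    // a "full" stop means a stop word occurs completely in the generated text and generation is
    // halted at that position; a "partial" stop means the text currently ends with a prefix of a
    // stop word, so the caller can hold that tail back while streaming until the match resolves.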

    void print_timings() const {
        const double t_prompt        =       t_prompt_processing / n_prompt_tokens_processed;
        const double n_prompt_second = 1e3 / t_prompt_processing * n_prompt_tokens_processed;

        const double t_gen        =       t_token_generation / n_decoded;
        const double n_gen_second = 1e3 / t_token_generation * n_decoded;

        SLT_INF(*this,
                "\n"
                "prompt eval time = %10.2f ms / %5d tokens (%8.2f ms per token, %8.2f tokens per second)\n"
                "       eval time = %10.2f ms / %5d tokens (%8.2f ms per token, %8.2f tokens per second)\n"
                "      total time = %10.2f ms / %5d tokens\n",
                t_prompt_processing, n_prompt_tokens_processed, t_prompt, n_prompt_second,
                t_token_generation, n_decoded, t_gen, n_gen_second,
                t_prompt_processing + t_token_generation, n_prompt_tokens_processed + n_decoded);
    }

    json to_json() const {
        return json {
            {"id",            id},
            {"id_task",       id_task},
            {"n_ctx",         n_ctx},
            {"speculative",   can_speculate()},
            {"is_processing", is_processing()},
            {"non_causal",    is_non_causal()},
            {"params",        params.to_json()},
            {"prompt",        common_detokenize(ctx, prompt_tokens)},
            {"next_token",
                {
                    {"has_next_token", has_next_token},
                    {"has_new_line",   has_new_line},
                    {"n_remain",       n_remaining},
                    {"n_decoded",      n_decoded},
                    {"stopping_word",  stopping_word},
                }
            },
        };
    }
};

struct server_metrics {
    int64_t t_start = 0;

    uint64_t n_prompt_tokens_processed_total = 0;
    uint64_t t_prompt_processing_total       = 0;
    uint64_t n_tokens_predicted_total        = 0;
    uint64_t t_tokens_generation_total       = 0;

    uint64_t n_prompt_tokens_processed = 0;
    uint64_t t_prompt_processing       = 0;

    uint64_t n_tokens_predicted  = 0;
    uint64_t t_tokens_generation = 0;

    uint64_t n_decode_total     = 0;
    uint64_t n_busy_slots_total = 0;

    void init() {
        t_start = ggml_time_us();
    }

    void on_prompt_eval(const server_slot & slot) {
        n_prompt_tokens_processed_total += slot.n_prompt_tokens_processed;
        n_prompt_tokens_processed       += slot.n_prompt_tokens_processed;
        t_prompt_processing             += slot.t_prompt_processing;
        t_prompt_processing_total       += slot.t_prompt_processing;
    }

    void on_prediction(const server_slot & slot) {
        n_tokens_predicted_total  += slot.n_decoded;
        n_tokens_predicted        += slot.n_decoded;
        t_tokens_generation       += slot.t_token_generation;
        t_tokens_generation_total += slot.t_token_generation;
    }

    void on_decoded(const std::vector<server_slot> & slots) {
        n_decode_total++;
        for (const auto & slot : slots) {
            if (slot.is_processing()) {
                n_busy_slots_total++;
            }
        }
    }

    void reset_bucket() {
        n_prompt_tokens_processed = 0;
        t_prompt_processing       = 0;
        n_tokens_predicted        = 0;
        t_tokens_generation       = 0;
    }
};
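
// note: the *_total counters above are monotonic for the lifetime of the server, while the
// per-bucket counters are cleared by reset_bucket() (e.g. after a metrics snapshot is taken).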

struct server_queue {
    int id = 0;
    bool running;

    // queues
    std::deque<server_task> queue_tasks;
    std::deque<server_task> queue_tasks_deferred;

    std::mutex mutex_tasks;
    std::condition_variable condition_tasks;

    // callback functions
    std::function<void(server_task)> callback_new_task;
    std::function<void(void)>        callback_update_slots;

    // Add a new task to the end of the queue
    int post(server_task task, bool front = false) {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        GGML_ASSERT(task.id != -1);

        QUE_DBG("new task, id = %d, front = %d\n", task.id, front);

        const int id_task = task.id; // keep a copy, the task is moved into the queue below
        if (front) {
            queue_tasks.push_front(std::move(task));
        } else {
            queue_tasks.push_back(std::move(task));
        }
        condition_tasks.notify_one();
        return id_task;
    }

    // multi-task version of post()
    int post(std::vector<server_task> & tasks, bool front = false) {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        for (auto & task : tasks) {
            if (task.id == -1) {
                task.id = id++;
            }
            QUE_DBG("new task, id = %d/%d, front = %d\n", task.id, (int) tasks.size(), front);
            if (front) {
                queue_tasks.push_front(std::move(task));
            } else {
                queue_tasks.push_back(std::move(task));
            }
        }
        condition_tasks.notify_one();
        return 0;
    }

    // Add a new task, but defer until one slot is available
    void defer(server_task task) {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        QUE_DBG("defer task, id = %d\n", task.id);

        queue_tasks_deferred.push_back(std::move(task));
        condition_tasks.notify_one();
    }

    // Get the next id for creating a new task
    int get_new_id() {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        int new_id = id++;
        return new_id;
    }

    // Register the function to process a new task
    void on_new_task(std::function<void(server_task)> callback) {
        callback_new_task = std::move(callback);
    }

    // Register the function to be called when all slots data is ready to be processed
    void on_update_slots(std::function<void(void)> callback) {
        callback_update_slots = std::move(callback);
    }

    // Called when the state of one slot changes; moves one task from the deferred queue back to the main queue
    void pop_deferred_task() {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        if (!queue_tasks_deferred.empty()) {
            queue_tasks.emplace_back(std::move(queue_tasks_deferred.front()));
            queue_tasks_deferred.pop_front();
        }
        condition_tasks.notify_one();
    }

    // end the start_loop routine
    void terminate() {
        std::unique_lock<std::mutex> lock(mutex_tasks);
        running = false;
        condition_tasks.notify_all();
    }

    /**
     * Main loop consists of these steps:
     * - Wait until a new task arrives
     * - Process the task (i.e. maybe copy data into slot)
     * - Check if multitask is finished
     * - Update all slots
     */
    void start_loop() {
        running = true;

        while (true) {
            QUE_DBG("%s", "processing new tasks\n");

            while (true) {
                std::unique_lock<std::mutex> lock(mutex_tasks);
                if (queue_tasks.empty()) {
                    lock.unlock();
                    break;
                }
                server_task task = queue_tasks.front();
                queue_tasks.pop_front();
                lock.unlock();

                QUE_DBG("processing task, id = %d\n", task.id);
                callback_new_task(std::move(task));
            }

            // all tasks in the current loop are processed, slots data is now ready
            QUE_DBG("%s", "update slots\n");

            callback_update_slots();

            QUE_DBG("%s", "waiting for new tasks\n");
            {
                std::unique_lock<std::mutex> lock(mutex_tasks);
                if (queue_tasks.empty()) {
                    if (!running) {
                        QUE_DBG("%s", "terminate\n");
                        return;
                    }
                    condition_tasks.wait(lock, [&]{
                        return (!queue_tasks.empty() || !running);
                    });
                }
            }
        }
    }
};
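
// typical wiring of server_queue (a minimal sketch; names like ctx_server are illustrative and
// the real call sites appear further down in this file):
//
//   ctx_server.queue_tasks.on_new_task([&](server_task task) { /* route the task to a slot */ });
//   ctx_server.queue_tasks.on_update_slots([&]() { /* run one batching/decoding step */ });
//   // HTTP handlers call post()/defer() from their own threads, while the main thread blocks in
//   // start_loop() until terminate() is called.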

struct server_response {
    // for keeping track of all tasks waiting for the result
    std::unordered_set<int> waiting_task_ids;

    // the main result queue (using ptr for polymorphism)
    std::vector<server_task_result_ptr> queue_results;

    std::mutex mutex_results;
    std::condition_variable condition_results;

    // add the id_task to the list of tasks waiting for response
    void add_waiting_task_id(int id_task) {
        SRV_DBG("add task %d to waiting list. current waiting = %d (before add)\n", id_task, (int) waiting_task_ids.size());

        std::unique_lock<std::mutex> lock(mutex_results);
        waiting_task_ids.insert(id_task);
    }

    void add_waiting_tasks(const std::vector<server_task> & tasks) {
        std::unique_lock<std::mutex> lock(mutex_results);

        for (const auto & task : tasks) {
            SRV_DBG("add task %d to waiting list. current waiting = %d (before add)\n", task.id, (int) waiting_task_ids.size());
            waiting_task_ids.insert(task.id);
        }
    }

    // when the request is finished, we can remove the task associated with it
    void remove_waiting_task_id(int id_task) {
        SRV_DBG("remove task %d from waiting list. current waiting = %d (before remove)\n", id_task, (int) waiting_task_ids.size());

        std::unique_lock<std::mutex> lock(mutex_results);
        waiting_task_ids.erase(id_task);
    }

    void remove_waiting_task_ids(const std::unordered_set<int> & id_tasks) {
        std::unique_lock<std::mutex> lock(mutex_results);

        for (const auto & id_task : id_tasks) {
            SRV_DBG("remove task %d from waiting list. current waiting = %d (before remove)\n", id_task, (int) waiting_task_ids.size());
            waiting_task_ids.erase(id_task);
        }
    }

    // This function blocks the thread until there is a response for one of the id_tasks
    server_task_result_ptr recv(const std::unordered_set<int> & id_tasks) {
        while (true) {
            std::unique_lock<std::mutex> lock(mutex_results);
            condition_results.wait(lock, [&]{
                return !queue_results.empty();
            });

            for (int i = 0; i < (int) queue_results.size(); i++) {
                if (id_tasks.find(queue_results[i]->id) != id_tasks.end()) {
                    server_task_result_ptr res = std::move(queue_results[i]);
                    queue_results.erase(queue_results.begin() + i);
                    return res;
                }
            }
        }

        // should never reach here
    }

    // single-task version of recv()
    server_task_result_ptr recv(int id_task) {
        std::unordered_set<int> id_tasks = {id_task};
        return recv(id_tasks);
    }

    // Send a new result to a waiting id_task
    void send(server_task_result_ptr && result) {
        SRV_DBG("sending result for task id = %d\n", result->id);

        std::unique_lock<std::mutex> lock(mutex_results);
        for (const auto & id_task : waiting_task_ids) {
            if (result->id == id_task) {
                SRV_DBG("task id = %d pushed to result queue\n", result->id);

                queue_results.emplace_back(std::move(result));
                condition_results.notify_all();
                return;
            }
        }
    }
};
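
// typical request flow across the two queues (a minimal sketch; names like ctx_server are
// illustrative):
//
//   const int id_task = ctx_server.queue_tasks.get_new_id();
//   ctx_server.queue_results.add_waiting_task_id(id_task); // register interest first
//   ctx_server.queue_tasks.post(std::move(task));          // then hand the task to the loop
//   server_task_result_ptr result = ctx_server.queue_results.recv(id_task); // blocks
//   ctx_server.queue_results.remove_waiting_task_id(id_task);
//
// registering before posting matters: send() silently drops a result whose id is not in the
// waiting set.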

struct server_context {
    common_params params_base;

    llama_model * model = nullptr;
    llama_context * ctx = nullptr;

    std::vector<common_lora_adapter_container> loras;

    llama_model * model_dft = nullptr;

    llama_context_params cparams_dft;

    llama_batch batch = {};

    bool clean_kv_cache = true;
    bool add_bos_token  = true;
    bool has_eos_token  = false;

    int32_t n_ctx; // total context for all clients / slots

    // slots / clients
    std::vector<server_slot> slots;
    json default_generation_settings_for_props;

    server_queue    queue_tasks;
    server_response queue_results;

    server_metrics metrics;

    // Necessary similarity of prompt for slot selection
    float slot_prompt_similarity = 0.0f;

    ~server_context() {
        if (ctx) {
            llama_free(ctx);
            ctx = nullptr;
        }

        if (model) {
            llama_free_model(model);
            model = nullptr;
        }

        if (model_dft) {
            llama_free_model(model_dft);
            model_dft = nullptr;
        }

        // Clear any sampling context
        for (server_slot & slot : slots) {
            common_sampler_free(slot.smpl);
            slot.smpl = nullptr;

            llama_free(slot.ctx_dft);
            slot.ctx_dft = nullptr;

            common_speculative_free(slot.spec);
            slot.spec = nullptr;

            llama_batch_free(slot.batch_spec);
        }

        llama_batch_free(batch);
    }

    bool load_model(const common_params & params) {
        SRV_INF("loading model '%s'\n", params.model.c_str());

        params_base = params;

        common_init_result llama_init = common_init_from_params(params_base);

        model = llama_init.model;
        ctx   = llama_init.context;
        loras = llama_init.lora_adapters;

        if (model == nullptr) {
            SRV_ERR("failed to load model, '%s'\n", params_base.model.c_str());
            return false;
        }
2023-09-28 21:42:38 +02:00
n_ctx = llama_n_ctx ( ctx ) ;
2023-10-22 21:53:08 +02:00
2024-08-15 09:23:23 +02:00
add_bos_token = llama_add_bos_token ( model ) ;
has_eos_token = ! llama_add_eos_token ( model ) ;
2024-08-16 17:19:05 +02:00
2024-11-25 15:31:38 +01:00

        if (!params_base.speculative.model.empty()) {
            SRV_INF("loading draft model '%s'\n", params_base.speculative.model.c_str());

            auto params_dft = params_base;

            params_dft.devices      = params_base.speculative.devices;
            params_dft.model        = params_base.speculative.model;
            params_dft.n_ctx        = params_base.speculative.n_ctx == 0 ? params_base.n_ctx / params_base.n_parallel : params_base.speculative.n_ctx;
            params_dft.n_gpu_layers = params_base.speculative.n_gpu_layers;
            params_dft.n_parallel   = 1;

            common_init_result llama_init_dft = common_init_from_params(params_dft);

            model_dft = llama_init_dft.model;

            if (model_dft == nullptr) {
                SRV_ERR("failed to load draft model, '%s'\n", params_base.speculative.model.c_str());
                return false;
            }

            if (!common_speculative_are_compatible(ctx, llama_init_dft.context)) {
                SRV_ERR("the draft model '%s' is not compatible with the target model '%s'\n", params_base.speculative.model.c_str(), params_base.model.c_str());

                llama_free      (llama_init_dft.context);
                llama_free_model(llama_init_dft.model);

                return false;
            }

            const int n_ctx_dft = llama_n_ctx(llama_init_dft.context);

            cparams_dft = common_context_params_to_llama(params_dft);
            cparams_dft.n_batch = n_ctx_dft;

            // force F16 KV cache for the draft model for extra performance
            cparams_dft.type_k = GGML_TYPE_F16;
            cparams_dft.type_v = GGML_TYPE_F16;

            // the context is not needed - we will create one for each slot
            llama_free(llama_init_dft.context);
        }
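
        // note: each slot later creates its own draft context from cparams_dft in init(), so the
        // speculative-decoding state stays independent across parallel requests.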

        return true;
    }

    bool validate_model_chat_template() const {
        std::vector<char> model_template(2048, 0); // longest known template is about 1200 bytes

        std::string template_key = "tokenizer.chat_template";
        int32_t res = llama_model_meta_val_str(model, template_key.c_str(), model_template.data(), model_template.size());
        if (res >= 0) {
            llama_chat_message chat[] = {{"user", "test"}};
            std::string tmpl = std::string(model_template.data(), model_template.size());
            int32_t chat_res = llama_chat_apply_template(model, tmpl.c_str(), chat, 1, true, nullptr, 0);
            return chat_res > 0;
        }
        return false;
    }

    void init() {
        const int32_t n_ctx_slot = n_ctx / params_base.n_parallel;

        SRV_INF("initializing slots, n_slots = %d\n", params_base.n_parallel);

        for (int i = 0; i < params_base.n_parallel; i++) {
            server_slot slot;

            slot.id        = i;
            slot.ctx       = ctx;
            slot.n_ctx     = n_ctx_slot;
            slot.n_predict = params_base.n_predict;

            if (model_dft) {
                slot.batch_spec = llama_batch_init(params_base.speculative.n_max + 1, 0, 1);

                slot.ctx_dft = llama_new_context_with_model(model_dft, cparams_dft);
                if (slot.ctx_dft == nullptr) {
                    SRV_ERR("%s", "failed to create draft context\n");
                    return;
                }

                slot.spec = common_speculative_init(slot.ctx_dft);
                if (slot.spec == nullptr) {
                    SRV_ERR("%s", "failed to create speculator\n");
                    return;
                }
            }

            SLT_INF(slot, "new slot n_ctx_slot = %d\n", slot.n_ctx);

            slot.params.sampling = params_base.sampling;

            slot.callback_on_release = [this](int) {
                queue_tasks.pop_deferred_task();
            };

            slot.reset();

            slots.push_back(slot);
        }

        default_generation_settings_for_props = slots[0].to_json();

        // the update_slots() logic will always submit a maximum of n_batch or n_parallel tokens
        // note that n_batch can be > n_ctx (e.g. for non-causal attention models such as BERT where the KV cache is not used)
        {
            const int32_t n_batch = llama_n_batch(ctx);
            // only a single seq_id per token is needed
            batch = llama_batch_init(std::max(n_batch, params_base.n_parallel), 0, 1);
        }

        metrics.init();
    }
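
    // note: the total context is split evenly across slots (n_ctx / n_parallel), so enabling more
    // parallel slots shrinks the context window available to each individual request.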

    server_slot * get_slot_by_id(int id) {
        for (server_slot & slot : slots) {
            if (slot.id == id) {
                return &slot;
            }
        }

        return nullptr;
    }

    server_slot * get_available_slot(const server_task & task) {
        server_slot * ret = nullptr;

        // find the slot that has at least n% prompt similarity
        if (ret == nullptr && slot_prompt_similarity != 0.0f) {
            int lcs_len = 0;
            float similarity = 0;

            for (server_slot & slot : slots) {
                // skip the slot if it is not available
                if (slot.is_processing()) {
                    continue;
                }

                // skip the slot if it does not contain cached tokens
                if (slot.cache_tokens.empty()) {
                    continue;
                }

                // length of the Longest Common Subsequence between the current slot's prompt and the input prompt
                int cur_lcs_len = common_lcs(slot.cache_tokens, task.prompt_tokens);

                // fraction of the common subsequence length compared to the current slot's prompt length
                float cur_similarity = static_cast<float>(cur_lcs_len) / static_cast<int>(slot.cache_tokens.size());

                // select the current slot if the criteria match
                if (cur_lcs_len > lcs_len && cur_similarity > slot_prompt_similarity) {
                    lcs_len    = cur_lcs_len;
                    similarity = cur_similarity;
                    ret = &slot;
                }
            }

            if (ret != nullptr) {
                SLT_DBG(*ret, "selected slot by lcs similarity, lcs_len = %d, similarity = %f\n", lcs_len, similarity);
            }
        }

        // find the slot that has been least recently used
        if (ret == nullptr) {
            int64_t t_last = ggml_time_us();
            for (server_slot & slot : slots) {
                // skip the slot if it is not available
                if (slot.is_processing()) {
                    continue;
                }

                // select the current slot if the criteria match
                if (slot.t_last_used < t_last) {
                    t_last = slot.t_last_used;
                    ret = &slot;
                }
            }

            if (ret != nullptr) {
                SLT_DBG(*ret, "selected slot by lru, t_last = %" PRId64 "\n", t_last);
            }
        }

        return ret;
    }
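
    // slot selection in short: prefer the idle slot whose cached prompt shares the longest common
    // subsequence with the incoming prompt (when above slot_prompt_similarity), otherwise fall
    // back to the least recently used idle slot.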

    bool launch_slot_with_task(server_slot & slot, const server_task & task) {
        slot.reset();
        slot.id_task       = task.id;
        slot.index         = task.index;
        slot.task_type     = task.type;
        slot.params        = std::move(task.params);
        slot.prompt_tokens = std::move(task.prompt_tokens);

        SLT_DBG(slot, "launching slot : %s\n", safe_json_to_str(slot.to_json()).c_str());

        if (slot.n_predict > 0 && slot.params.n_predict > slot.n_predict) {
            // Might be better to reject the request with a 400 ?
            SLT_WRN(slot, "n_predict = %d exceeds server configuration, setting to %d\n", slot.params.n_predict, slot.n_predict);
            slot.params.n_predict = slot.n_predict;
        }

        if (slot.params.ignore_eos && has_eos_token) {
            slot.params.sampling.logit_bias.push_back({llama_token_eos(model), -INFINITY});
        }
2023-10-02 09:42:02 +02:00
{
2024-09-07 14:16:19 +02:00
if ( slot . smpl ! = nullptr ) {
2024-10-10 22:57:42 +02:00
common_sampler_free ( slot . smpl ) ;
2024-03-07 10:41:53 +01:00
}
2024-09-07 14:16:19 +02:00
2024-11-25 15:31:38 +01:00
slot . smpl = common_sampler_init ( model , slot . params . sampling ) ;
2024-09-07 14:16:19 +02:00
if ( slot . smpl = = nullptr ) {
2024-03-11 10:56:41 +01:00
// for now, the only error that may happen here is invalid grammar
send_error ( task , " Failed to parse grammar " , ERROR_TYPE_INVALID_REQUEST ) ;
return false ;
}
2023-10-02 09:42:02 +02:00
}

        if (slot.ctx_dft) {
            llama_batch_free(slot.batch_spec);

            slot.batch_spec = llama_batch_init(slot.params.speculative.n_max + 1, 0, 1);
        }

        slot.state = SLOT_STATE_STARTED;

        SLT_INF(slot, "%s", "processing task\n");

        return true;
    }

    void kv_cache_clear() {
        SRV_DBG("%s", "clearing KV cache\n");

        // clear the entire KV cache
        llama_kv_cache_clear(ctx);
        clean_kv_cache = false;
    }

    bool process_token(completion_token_output & result, server_slot & slot) {
        // remember which tokens were sampled - used for repetition penalties during sampling
        const std::string token_str = common_token_to_piece(ctx, result.tok, params_base.special);
        slot.sampled = result.tok;
2023-06-17 13:53:04 +02:00
2023-10-22 21:53:08 +02:00
    // search for a stop word and delete it
    slot.generated_text += token_str;
    slot.has_next_token = true;
2023-12-13 20:57:15 +01:00
    // check if there is an incomplete UTF-8 character at the end
    bool incomplete = false;
2024-03-07 10:41:53 +01:00
    for (unsigned i = 1; i < 5 && i <= slot.generated_text.size(); ++i) {
2023-12-13 20:57:15 +01:00
        unsigned char c = slot.generated_text[slot.generated_text.size() - i];
2024-03-07 10:41:53 +01:00
        if ((c & 0xC0) == 0x80) {
2023-12-13 20:57:15 +01:00
            // continuation byte: 10xxxxxx
            continue;
        }
2024-03-07 10:41:53 +01:00
        if ((c & 0xE0) == 0xC0) {
2023-12-13 20:57:15 +01:00
            // 2-byte character: 110xxxxx ...
            incomplete = i < 2;
2024-03-07 10:41:53 +01:00
        } else if ((c & 0xF0) == 0xE0) {
2023-12-13 20:57:15 +01:00
            // 3-byte character: 1110xxxx ...
            incomplete = i < 3;
2024-03-07 10:41:53 +01:00
        } else if ((c & 0xF8) == 0xF0) {
2023-12-13 20:57:15 +01:00
            // 4-byte character: 11110xxx ...
            incomplete = i < 4;
2023-10-22 21:53:08 +02:00
        }
2023-12-13 20:57:15 +01:00
        // else 1-byte character or invalid byte
        break;
2023-05-21 19:51:18 +02:00
    }
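    // Illustrative walk-through (added commentary, not part of the original logic), assuming
    // generated_text currently ends with the 3-byte sequence E2 82 AC ("€"):
    //   i = 1 -> 0xAC: (c & 0xC0) == 0x80, continuation byte, keep scanning
    //   i = 2 -> 0x82: continuation byte, keep scanning
    //   i = 3 -> 0xE2: (c & 0xF0) == 0xE0, 3-byte lead byte; i < 3 is false -> character complete
    // If only E2 82 had been generated so far, the scan would stop at i = 2 on a 3-byte lead
    // byte with i < 3, so incomplete = true and the partial character is held back until the
    // next token completes it.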
2023-06-17 13:53:04 +02:00
2024-03-07 10:41:53 +01:00
    if (!incomplete) {
2024-02-29 21:42:11 +01:00
        size_t pos = std::min(slot.n_sent_text, slot.generated_text.size());
2024-03-07 10:41:53 +01:00
2023-10-22 21:53:08 +02:00
        const std::string str_test = slot.generated_text.substr(pos);
2024-10-16 10:35:53 +02:00
        bool send_text = true;
2024-03-07 10:41:53 +01:00
2024-12-06 11:14:32 +01:00
        size_t stop_pos = slot.find_stopping_strings(str_test, token_str.size(), true);
2024-03-07 10:41:53 +01:00
        if (stop_pos != std::string::npos) {
2023-10-22 21:53:08 +02:00
            slot.generated_text.erase(
                slot.generated_text.begin() + pos + stop_pos,
                slot.generated_text.end());
2024-02-29 21:42:11 +01:00
            pos = std::min(slot.n_sent_text, slot.generated_text.size());
2024-10-16 10:35:53 +02:00
        } else if (slot.has_next_token) {
2024-12-06 11:14:32 +01:00
            stop_pos = slot.find_stopping_strings(str_test, token_str.size(), false);
2024-10-16 10:35:53 +02:00
            send_text = stop_pos == std::string::npos;
2023-06-17 13:53:04 +02:00
}
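        // Illustrative note (added commentary): str_test is only the not-yet-sent tail of
        // generated_text. With a stop string "</s>" and a tail "Hello</s", the full-match pass
        // above (last argument true) finds nothing, while the partial pass (last argument false)
        // does, so send_text becomes false and the ambiguous tail is held back rather than
        // streamed out; a later token either completes the stop string (and the matched text is
        // erased) or breaks the match (and the held text is sent on a following iteration).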
2023-09-28 18:04:36 +02:00
2023-10-22 21:53:08 +02:00
        // check if there is any token to predict
2024-10-16 10:35:53 +02:00
        if (send_text) {
2023-10-22 21:53:08 +02:00
            // do not send the stop word in the response
            result.text_to_send = slot.generated_text.substr(pos, std::string::npos);
2024-02-29 21:42:11 +01:00
            slot.n_sent_text += result.text_to_send.size();
2023-10-22 21:53:08 +02:00
            // add the token to slot queue and cache
        }
2024-03-07 10:41:53 +01:00
2024-09-15 19:46:12 +02:00
        slot.add_token(result);
2024-03-07 10:41:53 +01:00
        if (slot.params.stream) {
2023-10-22 21:53:08 +02:00
            send_partial_response(slot, result);
2023-06-17 13:53:04 +02:00
}
2023-05-21 19:51:18 +02:00
}
2023-06-17 13:53:04 +02:00
2024-03-07 10:41:53 +01:00
    if (incomplete) {
2023-10-22 21:53:08 +02:00
        slot.has_next_token = true;
2023-06-20 00:12:39 +02:00
    }
2023-10-22 21:53:08 +02:00
    // check the limits
2024-11-25 15:31:38 +01:00
    if (slot.n_decoded > 0 && slot.has_next_token && !slot.has_budget(params_base)) {
2024-12-06 11:14:32 +01:00
        slot.stop           = STOP_TYPE_LIMIT;
2023-10-22 21:53:08 +02:00
        slot.has_next_token = false;
2024-03-07 10:41:53 +01:00
2024-09-15 19:46:12 +02:00
        SLT_DBG(slot, "stopped by limit, n_decoded = %d, n_predict = %d\n", slot.n_decoded, slot.params.n_predict);
2023-10-22 21:53:08 +02:00
    }
2023-06-17 13:53:04 +02:00
2024-10-18 06:32:19 +02:00
    if (slot.has_new_line) {
        // if we have already seen a new line, we stop after a certain time limit
        if (slot.params.t_max_predict_ms > 0 && (ggml_time_us() - slot.t_start_generation > 1000.0f * slot.params.t_max_predict_ms)) {
2024-12-06 11:14:32 +01:00
            slot.stop           = STOP_TYPE_LIMIT;
2024-10-18 06:32:19 +02:00
            slot.has_next_token = false;

            SLT_DBG(slot, "stopped by time limit, n_decoded = %d, t_max_predict_ms = %d ms\n", slot.n_decoded, (int) slot.params.t_max_predict_ms);
        }

        // require that each new line has a whitespace prefix (i.e. indentation) of at least slot.params.n_indent
        if (slot.params.n_indent > 0) {
            // check the current indentation
            // TODO: improve by not doing it more than once for each new line
            if (slot.last_nl_pos > 0) {
                size_t pos = slot.last_nl_pos;

                int n_indent = 0;
                while (pos < slot.generated_text.size() && (slot.generated_text[pos] == ' ' || slot.generated_text[pos] == '\t')) {
                    n_indent++;
                    pos++;
                }

                if (pos < slot.generated_text.size() && n_indent < slot.params.n_indent) {
2024-12-06 11:14:32 +01:00
                    slot.stop           = STOP_TYPE_LIMIT;
2024-10-18 06:32:19 +02:00
                    slot.has_next_token = false;
2024-10-12 15:14:27 +02:00
2024-10-18 06:32:19 +02:00
                    // cut the last line
                    slot.generated_text.erase(pos, std::string::npos);

                    SLT_DBG(slot, "stopped by indentation limit, n_decoded = %d, n_indent = %d\n", slot.n_decoded, n_indent);
                }
            }

            // find the next new line
            {
                const size_t pos = slot.generated_text.find('\n', slot.last_nl_pos);
                if (pos != std::string::npos) {
                    slot.last_nl_pos = pos + 1;
                }
            }
        }
2024-10-12 15:14:27 +02:00
    }

    // check if there is a new line in the generated text
    if (result.text_to_send.find('\n') != std::string::npos) {
        slot.has_new_line = true;
    }
2024-09-23 22:23:54 +02:00
    // if context shift is disabled, we stop when it reaches the context limit
2024-10-12 15:06:31 +02:00
    if (slot.n_past >= slot.n_ctx) {
2024-09-23 22:23:54 +02:00
        slot.truncated      = true;
2024-12-06 11:14:32 +01:00
        slot.stop           = STOP_TYPE_LIMIT;
2024-09-23 22:23:54 +02:00
        slot.has_next_token = false;
2024-10-12 15:06:31 +02:00
        SLT_DBG(slot, "stopped due to running out of context capacity, n_past = %d, n_prompt_tokens = %d, n_decoded = %d, n_ctx = %d\n",
                slot.n_decoded, slot.n_prompt_tokens, slot.n_past, slot.n_ctx);
2024-09-23 22:23:54 +02:00
    }
2024-04-21 13:50:41 +02:00
    if (llama_token_is_eog(model, result.tok)) {
2024-12-06 11:14:32 +01:00
        slot.stop           = STOP_TYPE_EOS;
2023-10-22 21:53:08 +02:00
        slot.has_next_token = false;
2024-03-07 10:41:53 +01:00
2024-09-15 19:46:12 +02:00
        SLT_DBG(slot, "%s", "stopped by EOS\n");
    }

    const auto n_ctx_train = llama_n_ctx_train(model);
2024-10-12 15:06:31 +02:00
    if (slot.params.n_predict < 1 && slot.n_predict < 1 && slot.n_prompt_tokens + slot.n_decoded >= n_ctx_train) {
2024-04-26 12:15:30 +02:00
        slot.truncated      = true;
2024-12-06 11:14:32 +01:00
        slot.stop           = STOP_TYPE_LIMIT;
2024-04-26 12:15:30 +02:00
        slot.has_next_token = false; // stop prediction
2024-09-15 19:46:12 +02:00
        SLT_WRN(slot,
2024-10-12 15:06:31 +02:00
                "n_predict (%d) is set for infinite generation. "
2024-09-15 19:46:12 +02:00
                "Limiting generated tokens to n_ctx_train (%d) to avoid EOS-less generation infinite loop\n",
                slot.params.n_predict, n_ctx_train);
2024-04-26 12:15:30 +02:00
    }
2024-10-12 07:21:51 +02:00
    SLT_DBG(slot, "n_decoded = %d, n_remaining = %d, next token: %5d '%s'\n", slot.n_decoded, slot.n_remaining, result.tok, token_str.c_str());
2023-10-22 21:53:08 +02:00
    return slot.has_next_token; // continue
}
2023-08-08 15:29:19 +02:00
2024-03-11 10:56:41 +01:00
void send_error(const server_task & task, const std::string & error, const enum error_type type = ERROR_TYPE_SERVER) {
2024-09-02 17:11:51 +02:00
    send_error(task.id, error, type);
2024-03-11 10:56:41 +01:00
}

void send_error(const server_slot & slot, const std::string & error, const enum error_type type = ERROR_TYPE_SERVER) {
2024-09-02 17:11:51 +02:00
    send_error(slot.id_task, error, type);
2024-03-11 10:56:41 +01:00
}
2024-09-02 17:11:51 +02:00
void send_error(const int id_task, const std::string & error, const enum error_type type = ERROR_TYPE_SERVER) {
2024-09-15 19:46:12 +02:00
    SRV_ERR("task id = %d, error: %s\n", id_task, error.c_str());
2023-10-22 21:53:08 +02:00
2024-12-06 11:14:32 +01:00
    auto res = std::make_unique<server_task_result_error>();
    res->id       = id_task;
    res->err_type = type;
    res->err_msg  = error;
2024-03-07 10:41:53 +01:00
2024-12-06 11:14:32 +01:00
    queue_results.send(std::move(res));
2024-03-07 10:41:53 +01:00
}
2024-12-07 17:02:05 +01:00
void send_partial_response(server_slot & slot, const completion_token_output & tkn) {
2024-12-06 11:14:32 +01:00
    auto res = std::make_unique<server_task_result_cmpl_partial>();
2024-12-07 17:02:05 +01:00
    res->id      = slot.id_task;
    res->index   = slot.index;
    res->content = tkn.text_to_send;
2023-10-22 21:53:08 +02:00
2024-12-06 11:14:32 +01:00
    res->n_decoded         = slot.n_decoded;
    res->n_prompt_tokens   = slot.n_prompt_tokens;
    res->verbose           = slot.params.verbose;
    res->oaicompat         = slot.params.oaicompat;
    res->oaicompat_chat    = slot.params.oaicompat_chat;
    res->oaicompat_model   = slot.params.oaicompat_model;
    res->oaicompat_cmpl_id = slot.params.oaicompat_cmpl_id;

    // populate res.probs_output
2024-11-25 15:31:38 +01:00
    if (slot.params.sampling.n_probs > 0) {
2024-10-24 21:51:22 +02:00
        const llama_tokens to_send_toks = common_tokenize(ctx, tkn.text_to_send, false);
2024-12-07 17:02:05 +01:00
2024-03-07 10:41:53 +01:00
        const size_t probs_pos      = std::min(slot.n_sent_token_probs,                       slot.generated_token_probs.size());
        const size_t probs_stop_pos = std::min(slot.n_sent_token_probs + to_send_toks.size(), slot.generated_token_probs.size());

        std::vector<completion_token_output> probs_output;
        if (probs_pos < probs_stop_pos) {
2024-12-06 11:14:32 +01:00
            res->probs_output = std::vector<completion_token_output>(
2024-03-07 10:41:53 +01:00
                    slot.generated_token_probs.begin() + probs_pos,
                    slot.generated_token_probs.begin() + probs_stop_pos);
2023-07-02 23:38:44 +02:00
        }
2023-10-22 21:53:08 +02:00
    }
2023-08-08 15:29:19 +02:00
2024-12-06 11:14:32 +01:00
    // populate timings if this is the final response or timings_per_token is enabled
    if (slot.stop != STOP_TYPE_NONE || slot.params.timings_per_token) {
        res->timings = slot.get_timings();
2023-11-25 10:29:06 +01:00
    }
2024-12-06 11:14:32 +01:00
    queue_results.send(std::move(res));
}
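// Worked example for the probs window above (added for illustration): if 7 token
// probabilities have been accumulated in generated_token_probs, 5 of them were already
// sent (n_sent_token_probs == 5) and text_to_send tokenizes into 2 tokens, then
// probs_pos = min(5, 7) = 5 and probs_stop_pos = min(5 + 2, 7) = 7, so exactly the two
// new entries [5, 7) are attached to this partial response.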
void send_final_response(server_slot & slot) {
    auto res = std::make_unique<server_task_result_cmpl_final>();
    res->id      = slot.id_task;
    res->id_slot = slot.id;
2023-10-22 21:53:08 +02:00
2024-12-06 11:14:32 +01:00
    res->index   = slot.index;
    res->content = slot.generated_text;
    res->timings = slot.get_timings();
    res->prompt  = common_detokenize(ctx, slot.prompt_tokens, true);
2023-10-22 21:53:08 +02:00
2024-12-06 11:14:32 +01:00
    res->truncated         = slot.truncated;
    res->n_decoded         = slot.n_decoded;
    res->n_prompt_tokens   = slot.n_prompt_tokens;
    res->n_tokens_cached   = slot.n_past;
    res->has_new_line      = slot.has_new_line;
    res->stopping_word     = slot.stopping_word;
    res->stop              = slot.stop;
    res->verbose           = slot.params.verbose;
2024-12-08 20:38:51 +01:00
    res->stream            = slot.params.stream;
2024-12-06 11:14:32 +01:00
    res->oaicompat         = slot.params.oaicompat;
    res->oaicompat_chat    = slot.params.oaicompat_chat;
    res->oaicompat_model   = slot.params.oaicompat_model;
    res->oaicompat_cmpl_id = slot.params.oaicompat_cmpl_id;

    // populate res.probs_output
2024-11-25 15:31:38 +01:00
    if (slot.params.sampling.n_probs > 0) {
2024-12-06 11:14:32 +01:00
        if (!slot.params.stream && slot.stop == STOP_TYPE_WORD) {
2024-10-24 21:51:22 +02:00
            const llama_tokens stop_word_toks = common_tokenize(ctx, slot.stopping_word, false);
2024-03-07 10:41:53 +01:00
2024-05-04 11:06:40 +02:00
            size_t safe_offset = std::min(slot.generated_token_probs.size(), stop_word_toks.size());
2024-12-06 11:14:32 +01:00
            res->probs_output = std::vector<completion_token_output>(
2024-03-07 10:41:53 +01:00
                    slot.generated_token_probs.begin(),
2024-05-04 11:06:40 +02:00
                    slot.generated_token_probs.end() - safe_offset);
2024-03-07 10:41:53 +01:00
        } else {
2024-12-06 11:14:32 +01:00
            res->probs_output = std::vector<completion_token_output>(
2024-03-07 10:41:53 +01:00
                    slot.generated_token_probs.begin(),
                    slot.generated_token_probs.end());
2023-10-05 16:02:55 +02:00
        }
2023-05-21 19:51:18 +02:00
    }
2024-12-06 11:14:32 +01:00
    res->generation_params = slot.params; // copy the parameters
2023-11-25 10:29:06 +01:00
2024-12-06 11:14:32 +01:00
    queue_results.send(std::move(res));
2023-10-22 21:53:08 +02:00
}
2023-06-17 13:53:04 +02:00
2024-03-07 10:41:53 +01:00
void send_embedding(const server_slot & slot, const llama_batch & batch) {
2024-12-06 11:14:32 +01:00
    auto res = std::make_unique<server_task_result_embd>();
    res->id    = slot.id_task;
    res->index = slot.index;
2023-10-22 21:53:08 +02:00
    const int n_embd = llama_n_embd(model);
2024-03-04 21:31:20 +01:00
2024-03-09 13:27:58 +01:00
    std::vector<float> embd_res(n_embd, 0.0f);
2024-03-07 10:41:53 +01:00
    for (int i = 0; i < batch.n_tokens; ++i) {
2024-11-06 12:29:01 +01:00
        if (!batch.logits[i] || batch.seq_id[i][0] != slot.id) {
2024-03-07 10:41:53 +01:00
            continue;
        }
2024-03-04 21:31:20 +01:00
2024-03-07 10:41:53 +01:00
        const float * embd = llama_get_embeddings_seq(ctx, batch.seq_id[i][0]);
        if (embd == NULL) {
            embd = llama_get_embeddings_ith(ctx, i);
        }
2024-03-04 21:31:20 +01:00
2024-03-07 10:41:53 +01:00
        if (embd == NULL) {
2024-09-15 19:46:12 +02:00
            SLT_ERR(slot, "failed to get embeddings, token = %d, seq_id = %d\n", batch.token[i], batch.seq_id[i][0]);
2024-03-07 10:41:53 +01:00
2024-12-06 11:14:32 +01:00
            res->embedding = std::vector<float>(n_embd, 0.0f);
2024-03-07 10:41:53 +01:00
            continue;
2024-03-04 21:31:20 +01:00
        }
2024-03-07 10:41:53 +01:00
2024-10-10 22:57:42 +02:00
        common_embd_normalize(embd, embd_res.data(), n_embd);
2024-12-06 11:14:32 +01:00
        res->embedding = embd_res;
2023-05-21 19:51:18 +02:00
    }
2024-03-07 10:41:53 +01:00
2024-09-15 19:46:12 +02:00
    SLT_DBG(slot, "%s", "sending embeddings\n");
2024-12-06 11:14:32 +01:00
    queue_results.send(std::move(res));
2023-10-22 21:53:08 +02:00
}
2023-05-21 19:51:18 +02:00
2024-09-28 16:42:03 +02:00
void send_rerank(const server_slot & slot, const llama_batch & batch) {
2024-12-06 11:14:32 +01:00
    auto res = std::make_unique<server_task_result_rerank>();
    res->id    = slot.id_task;
    res->index = slot.index;
2024-09-28 16:42:03 +02:00
    for (int i = 0; i < batch.n_tokens; ++i) {
2024-11-06 12:29:01 +01:00
        if (!batch.logits[i] || batch.seq_id[i][0] != slot.id) {
2024-09-28 16:42:03 +02:00
            continue;
        }

        const float * embd = llama_get_embeddings_seq(ctx, batch.seq_id[i][0]);
        if (embd == NULL) {
            embd = llama_get_embeddings_ith(ctx, i);
        }

        if (embd == NULL) {
            SLT_ERR(slot, "failed to get embeddings, token = %d, seq_id = %d\n", batch.token[i], batch.seq_id[i][0]);
2024-12-06 11:14:32 +01:00
            res->score = -1e6;
2024-09-28 16:42:03 +02:00
            continue;
        }
2024-12-06 11:14:32 +01:00
        res->score = embd[0];
2024-09-28 16:42:03 +02:00
    }
2024-12-06 11:14:32 +01:00
    SLT_DBG(slot, "sending rerank result, res.score = %f\n", res->score);
2024-09-28 16:42:03 +02:00
2024-12-06 11:14:32 +01:00
    queue_results.send(std::move(res));
2024-09-28 16:42:03 +02:00
}
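// Note (added commentary, derived from the code above): for reranking, the pooled sequence
// embedding carries the relevance score in its first component, which is why res->score is
// read from embd[0]; the -1e6 assigned on failure is just a sentinel meaning "no embedding
// was available for this sequence".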
2024-09-02 17:11:51 +02:00
//
// Functions to create new task(s) and receive result(s)
//
2024-02-06 09:16:23 +01:00
2024-09-02 17:11:51 +02:00
void cancel_tasks(const std::unordered_set<int> & id_tasks) {
    std::vector<server_task> cancel_tasks;
    cancel_tasks.reserve(id_tasks.size());
    for (const auto & id_task : id_tasks) {
2024-09-15 19:46:12 +02:00
        SRV_WRN("cancel task, id_task = %d\n", id_task);
2024-12-07 20:21:09 +01:00
        server_task task(SERVER_TASK_TYPE_CANCEL);
2024-09-02 17:11:51 +02:00
        task.id_target = id_task;
        cancel_tasks.push_back(task);
        queue_results.remove_waiting_task_id(id_task);
    }
    // push to beginning of the queue, so it has highest priority
    queue_tasks.post(cancel_tasks, true);
}
2024-12-07 20:21:09 +01:00
// receive the results from task(s)
2024-12-06 11:14:32 +01:00
void receive_multi_results(
2024-09-15 19:46:12 +02:00
        const std::unordered_set<int> & id_tasks,
2024-12-06 11:14:32 +01:00
        const std::function<void(std::vector<server_task_result_ptr>&)> & result_handler,
2024-09-15 19:46:12 +02:00
        const std::function<void(json)> & error_handler) {
2024-12-06 11:14:32 +01:00
    std::vector<server_task_result_ptr> results(id_tasks.size());
2024-09-02 17:11:51 +02:00
    for (size_t i = 0; i < id_tasks.size(); i++) {
2024-12-06 11:14:32 +01:00
        server_task_result_ptr result = queue_results.recv(id_tasks);
2024-09-02 17:11:51 +02:00
2024-12-06 11:14:32 +01:00
        if (result->is_error()) {
            error_handler(result->to_json());
2024-09-02 17:11:51 +02:00
            cancel_tasks(id_tasks);
2024-09-23 22:23:54 +02:00
            return;
2024-09-02 17:11:51 +02:00
        }
2023-11-30 23:25:04 +01:00
2024-12-06 11:14:32 +01:00
        GGML_ASSERT(
            dynamic_cast<server_task_result_cmpl_final*>(result.get()) != nullptr
            || dynamic_cast<server_task_result_embd*>(result.get()) != nullptr
            || dynamic_cast<server_task_result_rerank*>(result.get()) != nullptr
        );
        const size_t idx = result->get_index();
2024-09-28 16:42:03 +02:00
        GGML_ASSERT(idx < results.size() && "index out of range");
2024-12-06 11:14:32 +01:00
        results[idx] = std::move(result);
2024-01-26 13:42:20 +01:00
    }
2024-09-02 17:11:51 +02:00
    result_handler(results);
}
2024-01-26 13:42:20 +01:00
2024-12-07 20:21:09 +01:00
// receive the results from task(s), in stream mode
2024-09-15 19:46:12 +02:00
void receive_cmpl_results_stream(
2024-12-06 11:14:32 +01:00
        const std::unordered_set<int> & id_tasks,
        const std::function<bool(server_task_result_ptr&)> & result_handler,
        const std::function<void(json)> & error_handler) {
2024-09-02 17:11:51 +02:00
    size_t n_finished = 0;
    while (true) {
2024-12-06 11:14:32 +01:00
        server_task_result_ptr result = queue_results.recv(id_tasks);
        if (result->is_error()) {
            error_handler(result->to_json());
2024-09-02 17:11:51 +02:00
            cancel_tasks(id_tasks);
2024-12-06 11:14:32 +01:00
            return;
2024-09-02 17:11:51 +02:00
        }
2024-01-26 13:42:20 +01:00
2024-12-08 20:38:51 +01:00
        GGML_ASSERT(
            dynamic_cast<server_task_result_cmpl_partial*>(result.get()) != nullptr
            || dynamic_cast<server_task_result_cmpl_final*>(result.get()) != nullptr
        );
2024-12-06 11:14:32 +01:00
        if (!result_handler(result)) {
2024-09-02 17:11:51 +02:00
            cancel_tasks(id_tasks);
            break;
        }
2023-11-30 23:25:04 +01:00
2024-12-06 11:14:32 +01:00
        if (result->is_stop()) {
2024-09-02 17:11:51 +02:00
            if (++n_finished == id_tasks.size()) {
                break;
            }
        }
2023-11-30 23:25:04 +01:00
    }
}
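// Hedged usage sketch (not from the original source; helper names are illustrative only): a
// streaming HTTP handler would typically call this with a result_handler that forwards each
// partial result to the client and returns false once the client disconnects, e.g.:
//
//   receive_cmpl_results_stream(task_ids,
//       [&](server_task_result_ptr & result) -> bool {
//           return send_chunk_to_client(result->to_json()); // returning false cancels the remaining tasks
//       },
//       [&](json err) {
//           send_error_to_client(err);
//       });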
2024-09-02 17:11:51 +02:00
//
// Functions to process the task
//
2024-11-06 12:29:01 +01:00
void process_single_task ( server_task task ) {
2024-03-07 10:41:53 +01:00
switch ( task . type ) {
2024-12-07 20:21:09 +01:00
case SERVER_TASK_TYPE_COMPLETION :
case SERVER_TASK_TYPE_INFILL :
case SERVER_TASK_TYPE_EMBEDDING :
case SERVER_TASK_TYPE_RERANK :
2024-01-26 13:42:20 +01:00
{
2024-12-07 20:21:09 +01:00
const int id_slot = task . id_selected_slot ;
2024-06-08 09:50:31 +02:00
2024-11-01 14:33:14 +01:00
server_slot * slot = id_slot ! = - 1 ? get_slot_by_id ( id_slot ) : get_available_slot ( task ) ;
2024-06-08 09:50:31 +02:00
2024-03-07 10:41:53 +01:00
if ( slot = = nullptr ) {
// if no slot is available, we defer this task for processing later
2024-09-15 19:46:12 +02:00
SRV_DBG ( " no slot is available, defer task, id_task = %d \n " , task . id ) ;
2024-03-07 10:41:53 +01:00
queue_tasks . defer ( task ) ;
2024-01-13 18:31:26 +01:00
break ;
2023-10-22 21:53:08 +02:00
}
2024-09-06 23:21:29 +02:00
if ( slot - > is_processing ( ) ) {
2024-06-08 09:50:31 +02:00
// if requested slot is unavailable, we defer this task for processing later
2024-09-15 19:46:12 +02:00
SRV_DBG ( " requested slot is unavailable, defer task, id_task = %d \n " , task . id ) ;
2024-06-08 09:50:31 +02:00
queue_tasks . defer ( task ) ;
break ;
}
2024-03-07 10:41:53 +01:00
2024-03-11 10:56:41 +01:00
if ( ! launch_slot_with_task ( * slot , task ) ) {
2024-09-15 19:46:12 +02:00
SRV_ERR ( " failed to launch slot with task, id_task = %d \n " , task . id ) ;
2023-10-22 21:53:08 +02:00
break ;
}
2024-03-07 10:41:53 +01:00
} break ;
case SERVER_TASK_TYPE_CANCEL :
{
// release slot linked with the task id
for ( auto & slot : slots ) {
if ( slot . id_task = = task . id_target ) {
slot . release ( ) ;
break ;
}
2024-02-24 12:28:55 +01:00
}
2024-03-07 10:41:53 +01:00
} break ;
case SERVER_TASK_TYPE_NEXT_RESPONSE :
{
// do nothing
} break ;
case SERVER_TASK_TYPE_METRICS :
{
json slots_data = json : : array ( ) ;
int n_idle_slots = 0 ;
int n_processing_slots = 0 ;
for ( server_slot & slot : slots ) {
2024-12-07 17:02:05 +01:00
json slot_data = slot . to_json ( ) ;
2024-03-07 10:41:53 +01:00
2024-11-04 16:33:29 +01:00
if ( slot . is_processing ( ) ) {
2024-03-07 10:41:53 +01:00
n_processing_slots + + ;
2024-11-04 16:33:29 +01:00
} else {
n_idle_slots + + ;
2024-03-07 10:41:53 +01:00
}
slots_data . push_back ( slot_data ) ;
}
2024-09-15 19:46:12 +02:00
SRV_DBG ( " n_idle_slots = %d, n_processing_slots = %d \n " , n_idle_slots , n_processing_slots ) ;
2024-03-07 10:41:53 +01:00
2024-12-06 11:14:32 +01:00
auto res = std : : make_unique < server_task_result_metrics > ( ) ;
res - > id = task . id ;
2024-12-07 17:02:05 +01:00
res - > slots_data = std : : move ( slots_data ) ;
2024-12-06 11:14:32 +01:00
res - > n_idle_slots = n_idle_slots ;
res - > n_processing_slots = n_processing_slots ;
res - > n_tasks_deferred = queue_tasks . queue_tasks_deferred . size ( ) ;
res - > t_start = metrics . t_start ;
2024-02-25 13:49:43 +01:00
2024-12-06 11:14:32 +01:00
res - > kv_cache_tokens_count = llama_get_kv_cache_token_count ( ctx ) ;
res - > kv_cache_used_cells = llama_get_kv_cache_used_cells ( ctx ) ;
2024-02-25 13:49:43 +01:00
2024-12-06 11:14:32 +01:00
res - > n_prompt_tokens_processed_total = metrics . n_prompt_tokens_processed_total ;
res - > t_prompt_processing_total = metrics . t_prompt_processing_total ;
res - > n_tokens_predicted_total = metrics . n_tokens_predicted_total ;
res - > t_tokens_generation_total = metrics . t_tokens_generation_total ;
2024-02-25 13:49:43 +01:00
2024-12-06 11:14:32 +01:00
res - > n_prompt_tokens_processed = metrics . n_prompt_tokens_processed ;
res - > t_prompt_processing = metrics . t_prompt_processing ;
res - > n_tokens_predicted = metrics . n_tokens_predicted ;
res - > t_tokens_generation = metrics . t_tokens_generation ;
2024-09-06 23:21:29 +02:00
2024-12-06 11:14:32 +01:00
res - > n_decode_total = metrics . n_decode_total ;
res - > n_busy_slots_total = metrics . n_busy_slots_total ;
2024-03-07 10:41:53 +01:00
2024-12-07 20:21:09 +01:00
if ( task . metrics_reset_bucket ) {
2024-03-08 12:25:04 +01:00
metrics . reset_bucket ( ) ;
}
2024-12-06 11:14:32 +01:00
queue_results . send ( std : : move ( res ) ) ;
2024-03-07 10:41:53 +01:00
} break ;
2024-04-08 14:43:30 +02:00
case SERVER_TASK_TYPE_SLOT_SAVE :
{
2024-12-07 20:21:09 +01:00
int id_slot = task . slot_action . slot_id ;
2024-06-08 09:50:31 +02:00
server_slot * slot = get_slot_by_id ( id_slot ) ;
2024-04-08 14:43:30 +02:00
if ( slot = = nullptr ) {
send_error ( task , " Invalid slot ID " , ERROR_TYPE_INVALID_REQUEST ) ;
break ;
}
2024-09-06 23:21:29 +02:00
if ( slot - > is_processing ( ) ) {
2024-06-08 09:50:31 +02:00
// if requested slot is unavailable, we defer this task for processing later
2024-09-15 19:46:12 +02:00
SRV_DBG ( " requested slot is unavailable, defer task, id_task = %d \n " , task . id ) ;
2024-06-08 09:50:31 +02:00
queue_tasks . defer ( task ) ;
break ;
}
2024-04-08 14:43:30 +02:00
const size_t token_count = slot - > cache_tokens . size ( ) ;
const int64_t t_start = ggml_time_us ( ) ;
2024-12-07 20:21:09 +01:00
std : : string filename = task . slot_action . filename ;
std : : string filepath = task . slot_action . filepath ;
2024-04-08 14:43:30 +02:00
2024-11-06 12:29:01 +01:00
const size_t nwrite = llama_state_seq_save_file ( ctx , filepath . c_str ( ) , slot - > id , slot - > cache_tokens . data ( ) , token_count ) ;
2024-04-08 14:43:30 +02:00
const int64_t t_end = ggml_time_us ( ) ;
const double t_save_ms = ( t_end - t_start ) / 1000.0 ;
2024-12-06 11:14:32 +01:00
auto res = std : : make_unique < server_task_result_slot_save_load > ( ) ;
res - > id = task . id ;
res - > id_slot = id_slot ;
res - > filename = filename ;
res - > is_save = true ;
res - > n_tokens = token_count ;
res - > n_bytes = nwrite ;
res - > t_ms = t_save_ms ;
queue_results . send ( std : : move ( res ) ) ;
2024-04-08 14:43:30 +02:00
} break ;
case SERVER_TASK_TYPE_SLOT_RESTORE :
{
2024-12-07 20:21:09 +01:00
int id_slot = task . slot_action . slot_id ;
2024-06-08 09:50:31 +02:00
server_slot * slot = get_slot_by_id ( id_slot ) ;
2024-04-08 14:43:30 +02:00
if ( slot = = nullptr ) {
send_error ( task , " Invalid slot ID " , ERROR_TYPE_INVALID_REQUEST ) ;
break ;
}
2024-09-06 23:21:29 +02:00
if ( slot - > is_processing ( ) ) {
2024-06-08 09:50:31 +02:00
// if requested slot is unavailable, we defer this task for processing later
2024-09-15 19:46:12 +02:00
SRV_DBG ( " requested slot is unavailable, defer task, id_task = %d \n " , task . id ) ;
2024-06-08 09:50:31 +02:00
queue_tasks . defer ( task ) ;
break ;
}
2024-04-08 14:43:30 +02:00
const int64_t t_start = ggml_time_us ( ) ;
2024-12-07 20:21:09 +01:00
std : : string filename = task . slot_action . filename ;
std : : string filepath = task . slot_action . filepath ;
2024-04-08 14:43:30 +02:00
slot - > cache_tokens . resize ( slot - > n_ctx ) ;
size_t token_count = 0 ;
2024-11-06 12:29:01 +01:00
size_t nread = llama_state_seq_load_file ( ctx , filepath . c_str ( ) , slot - > id , slot - > cache_tokens . data ( ) , slot - > cache_tokens . size ( ) , & token_count ) ;
2024-04-08 14:43:30 +02:00
if ( nread = = 0 ) {
slot - > cache_tokens . resize ( 0 ) ;
send_error ( task , " Unable to restore slot, no available space in KV cache or invalid slot save file " , ERROR_TYPE_INVALID_REQUEST ) ;
break ;
}
slot - > cache_tokens . resize ( token_count ) ;
const int64_t t_end = ggml_time_us ( ) ;
const double t_restore_ms = ( t_end - t_start ) / 1000.0 ;
2024-12-06 11:14:32 +01:00
auto res = std : : make_unique < server_task_result_slot_save_load > ( ) ;
res - > id = task . id ;
res - > id_slot = id_slot ;
res - > filename = filename ;
res - > is_save = false ;
res - > n_tokens = token_count ;
res - > n_bytes = nread ;
res - > t_ms = t_restore_ms ;
queue_results . send ( std : : move ( res ) ) ;
2024-04-08 14:43:30 +02:00
} break ;
case SERVER_TASK_TYPE_SLOT_ERASE :
{
2024-12-07 20:21:09 +01:00
int id_slot = task . slot_action . slot_id ;
2024-06-08 09:50:31 +02:00
server_slot * slot = get_slot_by_id ( id_slot ) ;
2024-04-08 14:43:30 +02:00
if ( slot = = nullptr ) {
send_error ( task , " Invalid slot ID " , ERROR_TYPE_INVALID_REQUEST ) ;
break ;
}
2024-09-06 23:21:29 +02:00
if ( slot - > is_processing ( ) ) {
2024-06-08 09:50:31 +02:00
// if requested slot is unavailable, we defer this task for processing later
2024-09-15 19:46:12 +02:00
SRV_DBG ( " requested slot is unavailable, defer task, id_task = %d \n " , task . id ) ;
2024-06-08 09:50:31 +02:00
queue_tasks . defer ( task ) ;
break ;
}
2024-04-08 14:43:30 +02:00
// Erase token cache
const size_t n_erased = slot - > cache_tokens . size ( ) ;
2024-11-06 12:29:01 +01:00
llama_kv_cache_seq_rm ( ctx , slot - > id , - 1 , - 1 ) ;
2024-04-08 14:43:30 +02:00
slot - > cache_tokens . clear ( ) ;
2024-12-06 11:14:32 +01:00
auto res = std : : make_unique < server_task_result_slot_erase > ( ) ;
res - > id = task . id ;
res - > id_slot = id_slot ;
res - > n_erased = n_erased ;
queue_results . send ( std : : move ( res ) ) ;
2024-04-08 14:43:30 +02:00
} break ;
2024-08-06 17:33:39 +02:00
case SERVER_TASK_TYPE_SET_LORA :
{
2024-10-10 22:57:42 +02:00
common_lora_adapters_apply ( ctx , loras ) ;
2024-12-06 11:14:32 +01:00
auto res = std : : make_unique < server_task_result_apply_lora > ( ) ;
res - > id = task . id ;
queue_results . send ( std : : move ( res ) ) ;
2024-08-06 17:33:39 +02:00
} break ;
2023-07-02 23:38:44 +02:00
}
2024-01-26 13:42:20 +01:00
}
2023-11-30 23:25:04 +01:00
2024-03-11 10:56:41 +01:00
void update_slots() {
2024-03-07 10:41:53 +01:00
    // check if all slots are idle
    {
        bool all_idle = true;

        for (auto & slot : slots) {
2024-09-06 23:21:29 +02:00
            if (slot.is_processing()) {
2024-03-07 10:41:53 +01:00
                all_idle = false;
                break;
            }
        }

        if (all_idle) {
2024-09-15 19:46:12 +02:00
            SRV_INF("%s", "all slots are idle\n");
2024-10-12 13:51:54 +02:00
            if (clean_kv_cache) {
2024-03-07 10:41:53 +01:00
                kv_cache_clear();
            }
2024-03-11 10:56:41 +01:00
            return;
2024-03-07 10:41:53 +01:00
        }
    }
2024-01-30 19:17:30 +01:00
2023-10-22 21:53:08 +02:00
    {
2024-09-15 19:46:12 +02:00
        SRV_DBG("%s", "posting NEXT_RESPONSE\n");
2024-03-07 10:41:53 +01:00
2024-12-07 20:21:09 +01:00
        server_task task(SERVER_TASK_TYPE_NEXT_RESPONSE);
        task.id = queue_tasks.get_new_id();
2024-03-07 10:41:53 +01:00
        queue_tasks.post(task);
    }
    // apply context-shift if needed
    // TODO: simplify and improve
    for (server_slot & slot : slots) {
2024-10-12 15:06:31 +02:00
        if (slot.is_processing() && slot.n_past + 1 >= slot.n_ctx) {
2024-11-25 15:31:38 +01:00
            if (!params_base.ctx_shift) {
2024-10-12 15:06:31 +02:00
                // this check is redundant (kept for safety)
                // we should never get here, because generation should have already stopped in process_token()
                slot.release();
                send_error(slot, "context shift is disabled", ERROR_TYPE_SERVER);
                continue;
            }
2023-10-22 21:53:08 +02:00
2024-10-12 15:06:31 +02:00
            // Shift context
            const int n_keep    = slot.params.n_keep + add_bos_token;
            const int n_left    = slot.n_past - n_keep;
            const int n_discard = slot.params.n_discard ? slot.params.n_discard : (n_left / 2);
2024-03-07 10:41:53 +01:00
2024-10-12 15:06:31 +02:00
            SLT_WRN(slot, "slot context shift, n_keep = %d, n_left = %d, n_discard = %d\n", n_keep, n_left, n_discard);
2023-10-22 21:53:08 +02:00
2024-11-06 12:29:01 +01:00
            llama_kv_cache_seq_rm (ctx, slot.id, n_keep            , n_keep + n_discard);
            llama_kv_cache_seq_add(ctx, slot.id, n_keep + n_discard, slot.n_past, -n_discard);
2023-10-22 21:53:08 +02:00
2024-10-12 15:06:31 +02:00
            if (slot.params.cache_prompt) {
                for (size_t i = n_keep + n_discard; i < slot.cache_tokens.size(); i++) {
                    slot.cache_tokens[i - n_discard] = slot.cache_tokens[i];
2024-03-07 10:41:53 +01:00
                }
2023-10-22 21:53:08 +02:00
2024-10-12 15:06:31 +02:00
                slot.cache_tokens.resize(slot.cache_tokens.size() - n_discard);
2024-01-27 14:38:05 +01:00
            }
2024-10-12 15:06:31 +02:00
            slot.n_past -= n_discard;

            slot.truncated = true;
2023-07-05 22:51:13 +02:00
        }
2023-10-22 21:53:08 +02:00
    }
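    // Worked example for the shift above (illustration only): with slot.n_ctx = 4096,
    // slot.params.n_keep = 256, add_bos_token = 1 and slot.n_past = 4095, we get
    // n_keep = 257, n_left = 4095 - 257 = 3838 and, with n_discard unset,
    // n_discard = 3838 / 2 = 1919: KV cells [257, 2176) are removed, the remaining cells
    // are shifted down by 1919 positions and n_past drops to 2176, freeing room to keep
    // generating without losing the kept prefix.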
2024-03-07 10:41:53 +01:00
    // start populating the batch for this iteration
2024-10-10 22:57:42 +02:00
    common_batch_clear(batch);
2023-10-22 21:53:08 +02:00
2024-03-07 10:41:53 +01:00
    // first, add sampled tokens from any ongoing sequences
    for (auto & slot : slots) {
2024-09-06 23:21:29 +02:00
        if (slot.state != SLOT_STATE_GENERATING) {
2023-10-22 21:53:08 +02:00
            continue;
2023-06-17 13:53:04 +02:00
        }
2023-10-22 21:53:08 +02:00
        slot.i_batch = batch.n_tokens;
2024-11-06 12:29:01 +01:00
        common_batch_add(batch, slot.sampled, slot.n_past, { slot.id }, true);
2024-03-07 10:41:53 +01:00
2023-10-22 21:53:08 +02:00
        slot.n_past += 1;
2024-03-07 10:41:53 +01:00
        if (slot.params.cache_prompt) {
            slot.cache_tokens.push_back(slot.sampled);
        }
2024-10-12 13:51:54 +02:00
        SLT_DBG(slot, "slot decode token, n_ctx = %d, n_past = %d, n_cache_tokens = %d, truncated = %d\n",
                slot.n_ctx, slot.n_past, (int) slot.cache_tokens.size(), slot.truncated);
2023-05-21 19:51:18 +02:00
    }
2023-10-22 21:53:08 +02:00
    // process in chunks of params.n_batch
2024-03-22 12:08:28 +01:00
    int32_t n_batch  = llama_n_batch(ctx);
2024-03-13 18:54:21 +01:00
    int32_t n_ubatch = llama_n_ubatch(ctx);
2023-10-22 21:53:08 +02:00
2024-07-12 10:14:12 +02:00
    // track if this is an embedding or non-embedding batch
    // if we've added sampled tokens above, we are in non-embedding mode
    // -1: none, 0: non-embedding, 1: embedding
2024-09-28 16:42:03 +02:00
    // TODO: make enum
2024-07-12 10:14:12 +02:00
    int32_t batch_type = batch.n_tokens > 0 ? 0 : -1;
2024-03-07 10:41:53 +01:00
// next, batch any pending prompts without exceeding n_batch
2024-11-25 15:31:38 +01:00
if ( params_base . cont_batching | | batch . n_tokens = = 0 ) {
2024-03-07 10:41:53 +01:00
for ( auto & slot : slots ) {
// this slot still has a prompt to be processed
2024-10-24 21:51:22 +02:00
if ( slot . state = = SLOT_STATE_PROCESSING_PROMPT | | slot . state = = SLOT_STATE_STARTED ) {
2024-03-07 10:41:53 +01:00
auto & prompt_tokens = slot . prompt_tokens ;
2023-10-22 21:53:08 +02:00
2024-10-24 21:51:22 +02:00
// TODO: maybe move branch to outside of this loop in the future
if ( slot . state = = SLOT_STATE_STARTED ) {
2024-03-07 10:41:53 +01:00
slot . t_start_process_prompt = ggml_time_us ( ) ;
slot . t_start_generation = 0 ;
2024-10-28 07:49:32 +01:00
2024-03-07 10:41:53 +01:00
slot . n_past = 0 ;
2024-02-29 21:42:11 +01:00
slot . n_prompt_tokens = prompt_tokens . size ( ) ;
2024-10-24 21:51:22 +02:00
slot . state = SLOT_STATE_PROCESSING_PROMPT ;
2023-11-11 06:48:21 +01:00
2024-10-24 21:51:22 +02:00
SLT_INF ( slot , " new prompt, n_ctx_slot = %d, n_keep = %d, n_prompt_tokens = %d \n " , slot . n_ctx , slot . params . n_keep , slot . n_prompt_tokens ) ;
2024-03-09 10:30:04 +01:00
2024-10-12 15:14:27 +02:00
// print prompt tokens (for debugging)
if ( 1 ) {
// first 16 tokens (avoid flooding logs)
for ( int i = 0 ; i < std : : min < int > ( 16 , prompt_tokens . size ( ) ) ; i + + ) {
SLT_DBG ( slot , " prompt token %3d: %6d '%s' \n " , i , prompt_tokens [ i ] , common_token_to_piece ( ctx , prompt_tokens [ i ] ) . c_str ( ) ) ;
}
} else {
// all
for ( int i = 0 ; i < ( int ) prompt_tokens . size ( ) ; i + + ) {
SLT_DBG ( slot , " prompt token %3d: %6d '%s' \n " , i , prompt_tokens [ i ] , common_token_to_piece ( ctx , prompt_tokens [ i ] ) . c_str ( ) ) ;
}
2024-10-12 07:21:51 +02:00
}
2024-03-09 11:34:18 +01:00
// empty prompt passed -> release the slot and send empty response
if ( prompt_tokens . empty ( ) ) {
2024-09-15 19:46:12 +02:00
SLT_WRN ( slot , " %s " , " empty prompt - releasing slot \n " ) ;
2024-03-09 11:34:18 +01:00
slot . release ( ) ;
slot . print_timings ( ) ;
send_final_response ( slot ) ;
continue ;
}
2024-12-07 20:21:09 +01:00
if ( slot . is_non_causal ( ) ) {
2024-03-13 18:54:21 +01:00
if ( slot . n_prompt_tokens > n_ubatch ) {
2024-03-07 10:41:53 +01:00
slot . release ( ) ;
2024-05-20 07:56:05 +02:00
send_error ( slot , " input is too large to process. increase the physical batch size " , ERROR_TYPE_SERVER ) ;
2024-03-07 10:41:53 +01:00
continue ;
}
2024-10-25 09:13:46 +02:00
if ( slot . n_prompt_tokens > slot . n_ctx ) {
slot . release ( ) ;
send_error ( slot , " input is larger than the max context size. skipping " , ERROR_TYPE_SERVER ) ;
continue ;
}
2024-03-07 10:41:53 +01:00
        } else {
2024-11-25 15:31:38 +01:00
            if (!params_base.ctx_shift) {
2024-09-23 22:23:54 +02:00
                // if context shift is disabled, we make sure prompt size is smaller than KV size
2024-10-12 15:06:31 +02:00
                // TODO: there should be a separate parameter that controls prompt truncation
                //       context shift should be applied only during the generation phase
2024-10-12 13:51:54 +02:00
                if (slot.n_prompt_tokens >= slot.n_ctx) {
2024-09-23 22:23:54 +02:00
                    slot.release();
                    send_error(slot, "the request exceeds the available context size. try increasing the context size or enable context shift", ERROR_TYPE_INVALID_REQUEST);
                    continue;
                }
            }
2024-03-07 10:41:53 +01:00
            if (slot.params.n_keep < 0) {
                slot.params.n_keep = slot.n_prompt_tokens;
            }
            slot.params.n_keep = std::min(slot.n_ctx - 4, slot.params.n_keep);
2023-10-22 21:53:08 +02:00
2024-10-13 17:52:48 +02:00
            // if input prompt is too big, truncate it
2024-10-12 15:06:31 +02:00
            if (slot.n_prompt_tokens >= slot.n_ctx) {
2024-03-07 10:41:53 +01:00
                const int n_left = slot.n_ctx - slot.params.n_keep;
2023-10-22 21:53:08 +02:00
2024-03-07 10:41:53 +01:00
                const int n_block_size = n_left / 2;
                const int erased_blocks = (slot.n_prompt_tokens - slot.params.n_keep - n_block_size) / n_block_size;
2024-02-25 19:43:50 +01:00
2024-10-24 21:51:22 +02:00
                llama_tokens new_tokens(
2024-03-07 10:41:53 +01:00
                        prompt_tokens.begin(),
                        prompt_tokens.begin() + slot.params.n_keep);
2024-02-25 19:43:50 +01:00
2024-03-07 10:41:53 +01:00
                new_tokens.insert(
                        new_tokens.end(),
                        prompt_tokens.begin() + slot.params.n_keep + erased_blocks * n_block_size,
                        prompt_tokens.end());

                prompt_tokens = std::move(new_tokens);

                slot.truncated = true;
                slot.n_prompt_tokens = prompt_tokens.size();
2024-09-15 19:46:12 +02:00
                SLT_WRN(slot, "input truncated, n_ctx = %d, n_keep = %d, n_left = %d, n_prompt_tokens = %d\n", slot.n_ctx, slot.params.n_keep, n_left, slot.n_prompt_tokens);
2024-03-07 10:41:53 +01:00
                GGML_ASSERT(slot.n_prompt_tokens < slot.n_ctx);
            }
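            // Worked example for the truncation above (illustration only): with
            // slot.n_ctx = 2048, slot.params.n_keep = 128 and slot.n_prompt_tokens = 3000,
            // n_left = 1920, n_block_size = 960 and erased_blocks = (3000 - 128 - 960) / 960 = 1,
            // so the first 128 tokens are kept, one 960-token block after them is dropped and
            // the truncated prompt ends up with 3000 - 960 = 2040 tokens, which satisfies
            // n_prompt_tokens < n_ctx.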
2024-10-12 15:06:31 +02:00
if ( slot . params . cache_prompt ) {
2024-03-07 10:41:53 +01:00
// reuse any previously computed tokens that are common with the new prompt
2024-11-25 08:58:41 +01:00
slot . n_past = common_lcp ( slot . cache_tokens , prompt_tokens ) ;

            // reuse chunks from the cached prompt by shifting their KV cache to the new position
            if (params_base.n_cache_reuse > 0) {
                size_t head_c = slot.n_past; // cache
                size_t head_p = slot.n_past; // current prompt

                SLT_DBG(slot, "trying to reuse chunks with size > %d, slot.n_past = %d\n", params_base.n_cache_reuse, slot.n_past);

                while (head_c < slot.cache_tokens.size() &&
                       head_p < prompt_tokens.size()) {

                    size_t n_match = 0;
                    while (head_c + n_match < slot.cache_tokens.size() &&
                           head_p + n_match < prompt_tokens.size() &&
                           slot.cache_tokens[head_c + n_match] == prompt_tokens[head_p + n_match]) {
                        n_match++;
                    }

                    if (n_match >= (size_t) params_base.n_cache_reuse) {
                        SLT_INF(slot, "reusing chunk with size %zu, shifting KV cache [%zu, %zu) -> [%zu, %zu)\n", n_match, head_c, head_c + n_match, head_p, head_p + n_match);
                        //for (size_t i = head_p; i < head_p + n_match; i++) {
                        //    SLT_DBG(slot, "cache token %3zu: %6d '%s'\n", i, prompt_tokens[i], common_token_to_piece(ctx, prompt_tokens[i]).c_str());
                        //}

                        const int64_t kv_shift = (int64_t) head_p - (int64_t) head_c;

                        llama_kv_cache_seq_rm (ctx, slot.id, head_p, head_c);
                        llama_kv_cache_seq_add(ctx, slot.id, head_c, -1, kv_shift);

                        for (size_t i = 0; i < n_match; i++) {
                            slot.cache_tokens[head_p + i] = slot.cache_tokens[head_c + i];
                            slot.n_past++;
                        }

                        head_c += n_match;
                        head_p += n_match;
                    } else {
                        head_c += 1;
                    }
                }

                SLT_DBG(slot, "after context reuse, new slot.n_past = %d\n", slot.n_past);
            }
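            // illustrative walk-through of the loop above (hypothetical tokens, n_cache_reuse = 2):
            // with cache_tokens = [A B C D E F] and prompt_tokens = [A B E F G], the common
            // prefix gives n_past = 2. C and D do not reappear, so head_c advances to 4, where
            // a chunk of size 2 (E F) matches head_p = 2. The KV cells at positions [2, 4) are
            // removed, cells at positions >= 4 are shifted by -2, and n_past becomes 4; only
            // the new token G still has to be evaluated.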
        }
    }

    if (slot.n_past == slot.n_prompt_tokens && slot.n_past > 0) {
        // we have to evaluate at least 1 token to generate logits.
        SLT_WRN(slot, "need to evaluate at least 1 token to generate logits, n_past = %d, n_prompt_tokens = %d\n", slot.n_past, slot.n_prompt_tokens);

        slot.n_past--;
    }
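    // e.g. if the whole prompt is already cached (say n_past == n_prompt_tokens == 37,
    // hypothetical count), n_past is rolled back to 36 so the last token is decoded again
    // and fresh logits are available for sampling.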

    slot.n_prompt_tokens_processed = 0;
}

// non-causal tasks require the entire prompt to fit in the physical batch
if (slot.is_non_causal()) {
    // cannot fit the prompt in the current batch - will try in the next iteration
    if (batch.n_tokens + slot.n_prompt_tokens > n_batch) {
        continue;
    }
}
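// illustrative example (hypothetical sizes): with n_batch = 512, a batch already holding
// 400 tokens cannot also take a 200-token non-causal prompt, so the slot simply waits for
// a later iteration in which the batch starts out empty.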

// check that we are in the right batch_type; if not, defer the slot
int slot_type = slot.is_non_causal();

if (batch_type == -1) {
    batch_type = slot_type;
} else if (batch_type != slot_type) {
    continue;
}
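// e.g. if the first slot picked in this pass is non-causal (such as an embeddings request),
// batch_type becomes 1 and any causal text-completion slot encountered afterwards is deferred
// to a later batch (and vice versa), so the two kinds of work are never mixed in the same batch.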

// keep only the common part
if (!llama_kv_cache_seq_rm(ctx, slot.id, slot.n_past, -1)) {
    // could not partially delete (likely using a non-Transformer model)
    llama_kv_cache_seq_rm(ctx, slot.id, -1, -1);

    // there is no common part left
    slot.n_past = 0;
}
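// illustrative effect (hypothetical state): with slot.n_past = 4, the cells at positions
// [4, end) of this slot's sequence are cleared so the remaining prompt tokens can be decoded
// into them. For models whose cache cannot be edited per position (e.g. recurrent models such
// as Mamba), llama_kv_cache_seq_rm reports failure, the whole sequence is cleared instead,
// and the prompt is reprocessed from position 0.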

SLT_INF(slot, "kv cache rm [%d, end)\n", slot.n_past);
llama : support Mamba Selective State Space Models (#5328)
* mamba : begin working on support for Mamba SSM
* mamba : begin figuring out how to (ab)use the kv cache for Mamba
* mamba : recurrent inference almost works, but incoherent
* mamba : recurrent inference WORKS!!!
* convert : optionally use d_conv and d_state from config.json for Mamba
* mamba : refactor recurrent conv, resulting in 20% perf increase
It's still slower than I'd like, but I did not really optimize `ggml_exp` yet.
I also refactored `ggml_exp` to work with tensors with more than 2 dimensions.
* ggml : parallelize ggml_exp
This results in 8% faster token generation for Mamba-130M.
* mamba : simplify the conv step with a self-overlapping view
Turns out the conv_state can be made smaller by one column.
Note that this breaks existing GGUFs of Mamba,
because the key_value_length field is tied to the conv_state size.
Convolution with a self-overlapping view is cool!
And it's much simpler than what I initially thought would be necessary
to make the convolution step work with more than 1 token at a time.
Next step is to make the SSM step work on batches of tokens too,
and thus I need to figure out a way to make a parallel selective scan
which will keep the ssm_state small and won't make it bigger
by a factor of (n_layer * batch_size).
* llama : fix Mamba KV self size wrongly displaying as f16 instead of f32
Relatedly, I also tried to see if other types than f32 worked for the states,
but they don't, because of the operators used.
It's probably better anyway to keep lots of precision there,
since the states are small anyway.
* mamba : fix self-overlapping view depth stride
* mamba : handle batches of more than 1 token
This means running Mamba no longer crashes when using the default settings!
And probably also slightly faster prompt processing.
Both batched and non-batched processing yield the same output.
Previously, the state was not cleared when starting a sequence.
Next step is to make the KV cache API work as expected for Mamba models.
* ggml: add ggml_ssm_scan to help with parallel selective scan
If the selective scan was implemented without a custom operator,
there would be waaay too many nodes in the graph. For example,
for Mamba-130M, with a batch size of 512 (the default),
a naive selective scan could add at least 24*512=12288 nodes,
which is more than LLAMA_MAX_NODES (8192),
and that's only for the smallest Mamba model.
So it's much cleaner with a custom operator.
Not sure about the name, though.
* ggml : in ggml_ssm_scan, merge multiple rows in the same vec operation
This will help with performance on CPU if ggml_vec_mul_f32
and ggml_vec_add_f32 are ever optimized with SIMD.
* mamba : very basic quantization support
Mostly works, but there is currently no difference
between the variants of a k-quant (e.g. Q4_K_S and Q4_K_M are the same).
Most of the SSM-specific weights can be kept in f32 without affecting
the size that much, since they are relatively small.
(the linear projection weights are responsible for most of Mamba's size)
Too much quantization seems to make the state degrade quite fast, and
the model begins to output gibberish.
It seems to affect bigger models to a lesser extent than small models,
but I'm not sure by how much.
Experimentation will be needed to figure out which weights are more important
for the _M (and _L?) variants of k-quants for Mamba.
* convert : fix wrong name for layer norm weight of official Mamba models
I was using Q-bert/Mamba-* models before, which have a slightly different
naming scheme for the weights.
(they start with "model.layers" instead of "backbone.layers")
* mamba : fuse more steps of the SSM scan in the ggml_ssm_scan operator
This increases performance on CPU by around 30% for prompt processing,
and by around 20% for text generation.
However, it also makes the ggml_exp and ggml_soft_plus operators unused.
Whether or not they should be kept will be decided later.
* convert : for Mamba, also consider the "MambaLMHeadModel" arch name
It's the name of the class of the official implementation,
though they don't use it (yet) in the "architectures" field of config.json
* mamba : fix vocab size problems with official models
The perplexity was way too high for models with a non-round vocab size.
Not sure why, but it needed to be fixed in the metadata.
Note that this breaks existing GGUF-converted Mamba models,
but **only if** the vocab size was not already rounded.
* ggml : remove ggml_exp and ggml_soft_plus
They did not exist anyway outside of this branch,
and since ggml_ssm_scan fused operations together, they are unused.
It's always possible to bring them back if needed.
* mamba : remove some useless comments
No code change.
* convert : fix flake8 linter errors
* mamba : apply suggestions from code review
* mamba : remove unnecessary branch for row-wise ssm_state and C multiplication
It was previously done to avoid permuting when only one token is processed
at a time (like when generating text), but permuting is cheap,
and dynamically changing the compute graph is not future-proof.
* ggml : in ggml_ssm_scan, use more appropriate asserts
* ggml : rename the destination pointer in ggml_compute_forward_ssm_scan_f32
* mamba : multiple sequences, but one at a time
This is a step towards making this Mamba implementation usable
with the server example (the way the system prompt is kept when clearing
the client slots will need to be changed before this can work, though).
The KV cache size for this kind of model is tied to the maximum number
of sequences kept at any single time.
For now, this number is obtained from n_parallel (plus one,
to have an extra sequence to dedicate to the system prompt),
but there might be a better way to do this which won't also
make the main example use 2 cells even if only 1 is really used.
(for this specific case, --parallel 0 helps)
Simultaneous sequence processing will probably require changes to
ggml_ssm_scan, and possibly a new operator for the conv step.
* mamba : support llama_kv_cache_seq_cp
This (mis)uses the logic around K shifts, because tokens in a state
can't be shifted anyway, and because inp_K_shift has the right shape and type.
Using ggml_get_rows is a nice way to do copies, but copy chains can't work.
Fortunately, copy chains don't really seem to be used in the examples.
Each KV cell is dedicated to the sequence ID corresponding to its own index.
* mamba : use a state mask
It's cleaner than the previous heuristic of
checking for the pos of the first token in the batch.
inp_KQ_mask could not be re-used for this, because it has the wrong shape
and because it seems more suited to the next step of
simultaneous sequence processing (helping with the problem of
remembering which token belongs to which sequence(s)/state(s)).
* llama : replace the usage of n_ctx with kv_self.size in many places
* mamba : use n_tokens directly instead of n_tok
* mamba : in comments, properly refer to KV cells instead of slots
* mamba : reduce memory usage of ggml_ssm_scan
From 290.37 MiB to 140.68 MiB of CPU compute buffer size
with Mamba 3B with a batch size of 512.
The result tensor of ggml_ssm_scan was previously a big part
of the CPU compute buffer size. To make it smaller,
it does not contain the intermediate ssm states anymore.
Both y and the last ssm state are combined in the result tensor,
because it seems only a single tensor can be returned by an operator
with the way the graph is built.
* mamba : simultaneous sequence processing
A batch can now contain tokens from multiple sequences.
This is necessary for at least the parallel example, the server example,
and the HellaSwag test in the perplexity example.
However, for this to be useful, uses of llama_kv_cache_seq_rm/cp
will need to be changed to work on whole sequences.
* ggml : add ggml_ssm_conv as a new operator for the conv step of Mamba
This operator makes it possible to use and update the correct states
for each token of the batch in the same way as ggml_ssm_scan.
Other solutions which use existing operators would need loops which would
add too many nodes to the graph (at least the ones I thought of).
Using this operator further reduces the size of the CPU compute buffer
from 140.68 MiB to 103.20 MiB with Mamba 3B with a batch size of 512.
And (at least on CPU), it's a bit faster than before.
Note that "ggml_ssm_conv" is probably not the most appropriate name,
and it could be changed if a better one is found.
* llama : add inp_s_seq as a new input tensor
The most convenient implementation to select the correct state (for Mamba)
for each token is to directly get the correct index from a tensor.
This is why inp_s_seq is storing int32_t and not floats.
The other, less convenient way to select the correct state would be
to have inp_KQ_mask contain 1.0f for each state used by a token
and 0.0f otherwise. This complicates quickly fetching the first used
state of a token, and is also less efficient because a whole row
of the mask would always need to be read for each token.
Using indexes makes it easy to stop searching when there are
no more sequences for a token, and the first sequence assigned
is always very quickly available (it's the first element of each row).
* mamba : support llama_kv_cache_seq_cp copy chains
* mamba : support shifting and dividing the kv cache pos
* mamba : make the server and parallel examples work with whole sequences
A seq_id is dedicated to the system prompt in both cases.
* llama : make llama_kv_cache_seq_rm return whether it succeeded or not
* mamba : dedicate an input tensor for state copy indices
This is cleaner and makes it easier to adapt when/if token positions
(and by extension, inp_K_shift) are no longer integers.
* mamba : adapt perplexity, batched, and batched-bench examples
* perplexity : limit the max number of sequences
This adapts to what the loaded model can provide.
* llama : add llama_n_max_seq to get the upper limit for seq_ids
Used by the perplexity example.
* batched : pass n_parallel to the model's context params
This should have been there already, but it wasn't.
* batched-bench : reserve sequences to support Mamba
* batched-bench : fix tokens being put in wrong sequences
Generation quality isn't what's measured in there anyway,
but at least using the correct sequences avoids using non-consecutive
token positions.
* mamba : stop abusing attention metadata
This breaks existing converted-to-GGUF Mamba models,
but will allow supporting mixed architectures like MambaFormer
without needing to break Mamba models.
This will also allow changing the size of Mamba's states
without having to reconvert models in the future.
(e.g. using something else than d_conv - 1 columns for the conv_states
will not require breaking existing converted Mamba models again)
* gguf-py : add new KV metadata key-value pairs for Mamba
* llama : add new metadata key-value pairs for Mamba
* llama : guard against divisions by zero when n_head is 0
* mamba : rename "unlimited" KV cache property to "recurrent"
* mamba : more correctly update the "used" field of the KV cache
* ggml : in ggml_ssm_scan, use a threshold for soft_plus
This is how the official Mamba implementation does it,
and it's also what torch.nn.Softplus does.
* convert : for Mamba, fallback to internal NeoX tokenizer
The resulting models are exactly the same
as if the tokenizer.json and tokenizer_config.json of GPT-NeoX were there.
* mamba : support state saving and restoring
* ggml : implicitly pass src tensors through dst for Mamba-related ops
* mamba : clarify some comments
* server : fix cache_tokens not getting correctly resized
Otherwise, when the "we have to evaluate at least 1 token" special case
was triggered, an extra token was kept in cache_tokens even if it was
removed from the KV cache.
For Mamba, this caused useless prompt reprocessing when the previous
request triggered the above case.
* convert-hf : support new metadata keys for Mamba
For the models available at
https://huggingface.co/collections/state-spaces/transformers-compatible-mamba-65e7b40ab87e5297e45ae406
* mamba : rename metadata to be more similar to transformers library
This breaks existing converted-to-GGUF models,
but the metadata names are more "standard".
* mamba : support mamba-*-hf models
These models share their token_embd.weight with their output.weight
* mamba : add missing spaces
This is purely a formatting change.
* convert-hf : omit output.weight when identical with token_embd.weight
Only for Mamba for now, but it might be relevant for other models eventually.
Most Mamba models actually share these two tensors, albeit implicitly.
* readme : add Mamba to supported models, and add recent API changes
* mamba : move state_seq and state_mask views outside layer loop
A few tensors were also missing `struct` in front of `ggml_tensor`.
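To make the recurrence that these bullets keep referring to easier to picture, here is a deliberately simplified, self-contained C++ sketch of one conv + selective-scan step for a single channel. Everything here is illustrative: the names, shapes and helper types are made up for clarity, the real code is the fused ggml_ssm_conv / ggml_ssm_scan operators working on ggml tensors, and the sketch omits gating, the D skip connection and quantization.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// softplus with a threshold, the same trick as torch.nn.Softplus: above the
// threshold the function is numerically indistinguishable from the identity
static float softplus(float x, float threshold = 20.0f) {
    return x > threshold ? x : std::log1p(std::exp(x));
}

// per-sequence, per-channel state (in the real implementation these live in the KV cache)
struct mamba_channel_state {
    std::vector<float> conv; // the last (d_conv - 1) inputs of this channel
    std::vector<float> ssm;  // d_state entries of the SSM state
};

// process one token of one channel; state_idx selects which sequence's state to
// read and update, which is the role inp_s_seq plays for a whole batch
static float mamba_channel_step(
        std::vector<mamba_channel_state> & states,
        int32_t state_idx,
        float x,                               // new input for this channel
        const std::vector<float> & conv_w,     // d_conv convolution weights
        const std::vector<float> & A,          // d_state decay parameters
        const std::vector<float> & B,          // d_state input projection (per token)
        const std::vector<float> & C,          // d_state output projection (per token)
        float dt_raw) {                        // raw time step, before softplus
    mamba_channel_state & s = states[state_idx];

    // conv step: the window is [previous d_conv - 1 inputs, x]; the self-overlapping
    // view mentioned above builds all such windows for a whole batch in one tensor op
    s.conv.push_back(x);
    float xc = 0.0f;
    for (std::size_t k = 0; k < s.conv.size(); ++k) {
        xc += conv_w[k] * s.conv[k];
    }
    s.conv.erase(s.conv.begin()); // keep only the last d_conv - 1 inputs

    // selective scan step; when a new sequence starts, states[state_idx] is
    // zeroed beforehand, which is the job of the state mask
    const float dt = softplus(dt_raw);
    float y = 0.0f;
    for (std::size_t i = 0; i < s.ssm.size(); ++i) {
        s.ssm[i] = std::exp(dt * A[i]) * s.ssm[i] + dt * B[i] * xc;
        y += C[i] * s.ssm[i];
    }
    return y;
}

Running loops like these one graph node at a time, per channel and per token, is exactly what would blow past LLAMA_MAX_NODES; fusing them into ggml_ssm_conv and ggml_ssm_scan is what keeps the graph small and the compute buffer manageable.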
2024-03-08 23:31:00 +01:00
// remove the non-common part from the cache
slot.cache_tokens.resize(slot.n_past);
2024-03-07 10:41:53 +01:00
// add prompt tokens for processing in the current batch
2024-10-12 15:06:31 +02:00
while (slot.n_past < slot.n_prompt_tokens && batch.n_tokens < n_batch) {
2024-11-06 12:29:01 +01:00
common_batch_add(batch, prompt_tokens[slot.n_past], slot.n_past, { slot.id }, false);
2024-03-07 10:41:53 +01:00
if (slot.params.cache_prompt) {
slot.cache_tokens.push_back(prompt_tokens[slot.n_past]);
}
slot.n_prompt_tokens_processed++;
2024-10-12 15:06:31 +02:00
slot.n_past++;
2023-10-22 21:53:08 +02:00
}
2024-09-15 19:46:12 +02:00
SLT_INF(slot, "prompt processing progress, n_past = %d, n_tokens = %d, progress = %f\n", slot.n_past, batch.n_tokens, (float) slot.n_prompt_tokens_processed / slot.n_prompt_tokens);
2024-03-07 10:41:53 +01:00
2024-09-06 23:21:29 +02:00
// entire prompt has been processed
2024-03-07 10:41:53 +01:00
if (slot.n_past == slot.n_prompt_tokens) {
2024-09-06 23:21:29 +02:00
slot.state = SLOT_STATE_DONE_PROMPT;
2023-10-22 21:53:08 +02:00
2024-03-07 10:41:53 +01:00
GGML_ASSERT(batch.n_tokens > 0);
2024-10-23 21:27:51 +02:00
common_sampler_reset(slot.smpl);
// Process all prompt tokens through sampler system
for (int i = 0; i < slot.n_prompt_tokens; ++i) {
common_sampler_accept(slot.smpl, prompt_tokens[i], false);
}
2024-03-07 10:41:53 +01:00
// extract the logits only for the last token
2023-10-22 21:53:08 +02:00
batch.logits[batch.n_tokens - 1] = true;
2024-03-07 10:41:53 +01:00
slot.n_decoded = 0;
slot.i_batch = batch.n_tokens - 1;
2024-09-15 19:46:12 +02:00
SLT_INF(slot, "prompt done, n_past = %d, n_tokens = %d\n", slot.n_past, batch.n_tokens);
2023-10-22 21:53:08 +02:00
}
2024-03-07 10:41:53 +01:00
}
2023-10-22 21:53:08 +02:00
2024-03-07 10:41:53 +01:00
if (batch.n_tokens >= n_batch) {
break;
2023-10-22 21:53:08 +02:00
}
}
Server Example Refactor and Improvements (#1570)
A major rewrite for the server example.
Note that if you have built something on the previous server API, it will probably be incompatible.
Check out the examples for how a typical chat app could work.
This took a lot of effort: there are 24 PRs closed in the submitter's repo alone, over 160 commits, and a lot of comments and testing.
Summary of the changes:
- adds missing generation parameters: tfs_z, typical_p, repeat_last_n, repeat_penalty, presence_penalty, frequency_penalty, mirostat, penalize_nl, seed, ignore_eos
- applies missing top k sampler
- removes interactive mode/terminal-like behavior, removes exclude parameter
- moves threads and batch size to server command-line parameters
- adds LoRA loading and matches command line parameters with main example
- fixes stopping on EOS token and with the specified token amount with n_predict
- adds server timeouts, host, and port settings
- adds expanded generation complete response; adds generation settings, stop reason, prompt truncated, model used, and final text
- sets defaults for unspecified parameters between requests
- removes /next-token endpoint and as_loop parameter, adds stream parameter and server-sent events for streaming
- adds CORS headers to responses
- adds request logging, exception printing and optional verbose logging
- adds better stop-word handling when matching multiple tokens, while streaming, or when generation finishes on a partial stop string
- adds printing an error when it can't bind to the host/port specified
- fixes multi-byte character handling and replaces invalid UTF-8 characters on responses
- prints timing and build info on startup
- adds logit bias to request parameters
- removes embedding mode
- updates documentation; adds streaming Node.js and Bash examples
- fixes code formatting
- sets server threads to 1 since the current global state doesn't work well with simultaneous requests
- adds truncation of the input prompt and better context reset
- removes token limit from the input prompt
- significantly simplifies the logic and removes a lot of variables
---------
Co-authored-by: anon998 <131767832+anon998@users.noreply.github.com>
Co-authored-by: Henri Vasserman <henv@hot.ee>
Co-authored-by: Felix Hellmann <privat@cirk2.de>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
Co-authored-by: Lesaun Harvey <Lesaun@gmail.com>
2023-06-17 13:53:04 +02:00
}
2023-05-21 19:51:18 +02:00
2024-03-07 10:41:53 +01:00
if (batch.n_tokens == 0) {
2024-09-15 19:46:12 +02:00
SRV_WRN("%s", "no tokens to decode\n");
2024-03-11 10:56:41 +01:00
return;
2023-06-17 13:53:04 +02:00
}
2023-05-21 19:51:18 +02:00
2024-09-15 19:46:12 +02:00
SRV_DBG("decoding batch, n_tokens = %d\n", batch.n_tokens);
2024-03-07 10:41:53 +01:00
2024-07-12 10:14:12 +02:00
// make sure we're in the right embedding mode
llama_set_embeddings(ctx, batch_type == 1);
2024-03-07 10:41:53 +01:00
// process the created batch of tokens
2024-04-26 12:15:30 +02:00
for (int32_t i = 0; i < batch.n_tokens; i += n_batch) {
2024-03-04 21:31:20 +01:00
const int32_t n_tokens = std::min(n_batch, batch.n_tokens - i);
2024-01-27 14:38:05 +01:00
2024-03-07 10:41:53 +01:00
llama_batch batch_view = {
2023-10-22 21:53:08 +02:00
n_tokens,
batch.token + i,
nullptr,
batch.pos + i,
batch.n_seq_id + i,
batch.seq_id + i,
batch.logits + i,
};
2023-05-21 19:51:18 +02:00
2023-10-22 21:53:08 +02:00
const int ret = llama_decode(ctx, batch_view);
2024-09-06 23:21:29 +02:00
metrics.on_decoded(slots);
2024-01-27 14:38:05 +01:00
2024-03-07 10:41:53 +01:00
if (ret != 0) {
if (n_batch == 1 || ret < 0) {
2023-10-22 21:53:08 +02:00
// if you get here, it means the KV cache is full - try increasing it via the context size
2024-09-15 19:46:12 +02:00
SRV_ERR("failed to decode the batch: KV cache is full - try increasing it via the context size, i = %d, n_batch = %d, ret = %d\n", i, n_batch, ret);
2024-03-11 10:56:41 +01:00
for (auto & slot : slots) {
slot.release();
send_error(slot, "Input prompt is too big compared to KV size. Please try increasing KV size.");
}
break; // break loop of n_batch
2023-10-22 21:53:08 +02:00
}
2023-06-20 00:12:39 +02:00
2023-10-22 21:53:08 +02:00
// retry with half the batch size to try to find a free slot in the KV cache
n_batch /= 2;
i -= n_batch;
2024-03-07 10:41:53 +01:00
2024-09-15 19:46:12 +02:00
SRV_WRN("failed to find free space in the KV cache, retrying with smaller batch size - try increasing it via the context size or enable defragmentation, i = %d, n_batch = %d, ret = %d\n", i, n_batch, ret);
2024-04-12 13:49:21 +02:00
2024-03-11 10:56:41 +01:00
continue; // continue loop of n_batch
2023-10-22 21:53:08 +02:00
}
2024-03-07 10:41:53 +01:00
for (auto & slot : slots) {
2024-09-06 23:21:29 +02:00
if (slot.i_batch < (int) i || slot.i_batch >= (int) (i + n_tokens)) {
2024-03-11 10:56:41 +01:00
continue; // continue loop of slots
2023-10-22 21:53:08 +02:00
}
2024-09-06 23:21:29 +02:00
if (slot.state == SLOT_STATE_DONE_PROMPT) {
2024-12-07 20:21:09 +01:00
if (slot.task_type == SERVER_TASK_TYPE_EMBEDDING) {
2024-09-06 23:21:29 +02:00
// prompt evaluated for embedding
send_embedding(slot, batch_view);
slot.release();
slot.i_batch = -1;
continue; // continue loop of slots
}
2024-09-07 14:16:19 +02:00
2024-12-07 20:21:09 +01:00
if (slot.task_type == SERVER_TASK_TYPE_RERANK) {
2024-09-28 16:42:03 +02:00
send_rerank(slot, batch_view);
slot.release();
slot.i_batch = -1;
continue; // continue loop of slots
}
2024-09-07 14:16:19 +02:00
// prompt evaluated for next-token prediction
slot.state = SLOT_STATE_GENERATING;
2024-09-06 23:21:29 +02:00
} else if (slot.state != SLOT_STATE_GENERATING) {
2024-03-11 10:56:41 +01:00
continue; // continue loop of slots
2023-10-22 21:53:08 +02:00
}
2024-11-26 12:36:40 +01:00
llama_token id = common_sampler_sample(slot.smpl, ctx, slot.i_batch - i);
2024-11-25 15:31:38 +01:00
2024-11-26 12:36:40 +01:00
slot.i_batch = -1;
2024-11-25 15:31:38 +01:00
2024-11-26 12:36:40 +01:00
common_sampler_accept(slot.smpl, id, true);
2024-11-25 15:31:38 +01:00
2024-11-26 12:36:40 +01:00
slot.n_decoded += 1;
2024-12-02 14:45:54 +01:00
const int64_t t_current = ggml_time_us();
2024-11-26 12:36:40 +01:00
if (slot.n_decoded == 1) {
2024-12-02 14:45:54 +01:00
slot.t_start_generation = t_current;
2024-11-26 12:36:40 +01:00
slot.t_prompt_processing = (slot.t_start_generation - slot.t_start_process_prompt) / 1e3;
metrics.on_prompt_eval(slot);
}
2024-11-25 15:31:38 +01:00
2024-12-02 14:45:54 +01:00
slot.t_token_generation = (t_current - slot.t_start_generation) / 1e3;
2024-11-26 12:36:40 +01:00
completion_token_output result;
result.tok = id;
2024-11-25 15:31:38 +01:00
2024-11-26 12:36:40 +01:00
const auto * cur_p = common_sampler_get_candidates(slot.smpl);
2023-10-22 21:53:08 +02:00
2024-11-26 12:36:40 +01:00
for (size_t i = 0; i < (size_t) slot.params.sampling.n_probs; ++i) {
2024-12-06 11:14:32 +01:00
auto tok_id = cur_p->data[i].id;
2024-11-26 12:36:40 +01:00
result.probs.push_back({
2024-12-06 11:14:32 +01:00
tok_id,
tokens_to_output_formatted_string(ctx, tok_id),
i >= cur_p->size ? 0.0f : cur_p->data[i].p,
2024-11-26 12:36:40 +01:00
});
}
2024-11-25 15:31:38 +01:00
2024-11-26 12:36:40 +01:00
if (!process_token(result, slot)) {
// release slot because of stop condition
slot.release();
slot.print_timings();
send_final_response(slot);
metrics.on_prediction(slot);
continue;
2024-11-25 15:31:38 +01:00
}
2024-11-26 12:36:40 +01:00
}
2023-10-22 21:53:08 +02:00
2024-11-26 12:36:40 +01:00
// do speculative decoding
for (auto & slot : slots) {
if (!slot.is_processing() || !slot.can_speculate()) {
2024-11-25 15:31:38 +01:00
continue;
2023-10-22 21:53:08 +02:00
}
2024-12-03 10:20:00 +01:00
if (slot.state != SLOT_STATE_GENERATING) {
continue;
}
2024-12-04 21:38:20 +01:00
// determine the max draft that fits the current slot state
int n_draft_max = slot.params.speculative.n_max;
// note: n_past is not yet increased for the `id` token sampled above
// also, need to leave space for 1 extra token to allow context shifts
n_draft_max = std::min(n_draft_max, slot.n_ctx - slot.n_past - 2);
if (slot.n_remaining > 0) {
n_draft_max = std::min(n_draft_max, slot.n_remaining - 1);
}
SLT_DBG(slot, "max possible draft: %d\n", n_draft_max);
if (n_draft_max < slot.params.speculative.n_min) {
SLT_DBG(slot, "the max possible draft is too small: %d < %d - skipping speculative decoding\n", n_draft_max, slot.params.speculative.n_min);
continue;
}
2024-11-26 12:36:40 +01:00
llama_token id = slot.sampled;
2024-11-25 15:31:38 +01:00
struct common_speculative_params params_spec;
2024-12-04 21:38:20 +01:00
params_spec.n_draft = n_draft_max;
2024-11-25 15:31:38 +01:00
params_spec.n_reuse = llama_n_ctx(slot.ctx_dft) - slot.params.speculative.n_max;
params_spec.p_min = slot.params.speculative.p_min;
2023-10-22 21:53:08 +02:00
2024-11-25 15:31:38 +01:00
llama_tokens draft = common_speculative_gen_draft(slot.spec, params_spec, slot.cache_tokens, id);
2023-10-22 21:53:08 +02:00
2024-11-25 15:31:38 +01:00
// ignore small drafts
if (slot.params.speculative.n_min > (int) draft.size()) {
2024-12-04 21:38:20 +01:00
SLT_DBG(slot, "ignoring small draft: %d < %d\n", (int) draft.size(), slot.params.speculative.n_min);
2024-11-25 15:31:38 +01:00
continue;
2023-10-22 21:53:08 +02:00
}
2024-11-25 15:31:38 +01:00
// construct the speculation batch
common_batch_clear(slot.batch_spec);
common_batch_add(slot.batch_spec, id, slot.n_past, { slot.id }, true);
for (size_t i = 0; i < draft.size(); ++i) {
common_batch_add(slot.batch_spec, draft[i], slot.n_past + 1 + i, { slot.id }, true);
}
2024-12-04 21:38:20 +01:00
SLT_DBG(slot, "decoding speculative batch, size = %d\n", slot.batch_spec.n_tokens);
2024-11-25 15:31:38 +01:00
llama_decode(ctx, slot.batch_spec);
// the accepted tokens from the speculation
const auto ids = common_sampler_sample_and_accept_n(slot.smpl, ctx, draft);
slot.n_past += ids.size();
slot.n_decoded += ids.size();
slot.cache_tokens.push_back(id);
slot.cache_tokens.insert(slot.cache_tokens.end(), ids.begin(), ids.end() - 1);
llama_kv_cache_seq_rm(ctx, slot.id, slot.n_past, -1);
for (size_t i = 0; i < ids.size(); ++i) {
completion_token_output result;
result.tok = ids[i];
if (!process_token(result, slot)) {
// release slot because of stop condition
slot.release();
slot.print_timings();
send_final_response(slot);
metrics.on_prediction(slot);
break;
}
2023-10-22 21:53:08 +02:00
}
2024-12-04 21:38:20 +01:00
SLT_DBG(slot, "accepted %d/%d draft tokens, new n_past = %d\n", (int) ids.size() - 1, (int) draft.size(), slot.n_past);
2023-10-22 21:53:08 +02:00
}
2023-06-20 00:12:39 +02:00
}
2024-02-25 13:50:32 +01:00
2024-09-15 19:46:12 +02:00
SRV_DBG("%s", "run slots completed\n");
2023-06-20 00:12:39 +02:00
}
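One detail worth spelling out from the prompt handling earlier in this function: prompt caching boils down to keeping the longest common prefix between the previously cached tokens and the new prompt in the KV cache and re-evaluating only the rest. A minimal sketch of that idea, with illustrative names (llama_token is just an int32_t):

#include <cstddef>
#include <cstdint>
#include <vector>

// length of the shared prefix between the cached tokens and the new prompt;
// roughly speaking this is what ends up in slot.n_past above, and everything
// past it is removed from the KV cache and re-processed
static std::size_t common_prefix_len(const std::vector<int32_t> & cached,
                                     const std::vector<int32_t> & prompt) {
    std::size_t n = 0;
    while (n < cached.size() && n < prompt.size() && cached[n] == prompt[n]) {
        ++n;
    }
    return n;
}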
2024-03-02 22:00:14 +01:00
2024-03-07 10:41:53 +01:00
json model_meta() const {
return json {
{ "vocab_type", llama_vocab_type(model) },
{ "n_vocab", llama_n_vocab(model) },
{ "n_ctx_train", llama_n_ctx_train(model) },
{ "n_embd", llama_n_embd(model) },
{ "n_params", llama_model_n_params(model) },
{ "size", llama_model_size(model) },
2024-03-02 22:00:14 +01:00
};
}
2023-06-17 13:53:04 +02:00
};
2024-03-07 10:41:53 +01:00
static void log_server_request(const httplib::Request & req, const httplib::Response & res) {
2024-02-25 13:50:32 +01:00
// skip GH copilot requests when using default port
2024-03-07 10:41:53 +01:00
if (req.path == "/v1/health" || req.path == "/v1/completions") {
2024-02-25 13:50:32 +01:00
return;
}
2024-09-15 19:46:12 +02:00
LOG_INF("request: %s %s %s %d\n", req.method.c_str(), req.path.c_str(), req.remote_addr.c_str(), res.status);
2023-07-04 16:05:27 +02:00
2024-09-15 19:46:12 +02:00
LOG_DBG("request: %s\n", req.body.c_str());
LOG_DBG("response: %s\n", res.body.c_str());
2023-06-17 13:53:04 +02:00
}
2023-05-21 19:51:18 +02:00
2024-02-18 17:23:16 +01:00
std::function<void(int)> shutdown_handler;
2024-02-28 09:55:37 +01:00
std::atomic_flag is_terminating = ATOMIC_FLAG_INIT;
2024-03-07 10:41:53 +01:00
2024-02-28 09:55:37 +01:00
inline void signal_handler(int signal) {
if (is_terminating.test_and_set()) {
// in case it hangs, we can force terminate the server by hitting Ctrl+C twice
// this is for better developer experience, we can remove when the server is stable enough
fprintf(stderr, "Received second interrupt, terminating immediately.\n");
exit(1);
}
2024-03-07 10:41:53 +01:00
2024-02-28 09:55:37 +01:00
shutdown_handler(signal);
}
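Registering these handlers happens further down in main(), outside this excerpt. Purely as an illustration, a registration with the standard <csignal> facilities could look like the following; the actual code may use a different mechanism (e.g. sigaction on POSIX or SetConsoleCtrlHandler on Windows):

#include <csignal>

// illustrative only - hook the handler above to Ctrl+C and termination requests
static void register_shutdown_signals() {
    std::signal(SIGINT,  signal_handler);
    std::signal(SIGTERM, signal_handler);
}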
2024-02-18 17:23:16 +01:00
2024-03-07 10:41:53 +01:00
int main(int argc, char ** argv) {
2023-06-17 13:53:04 +02:00
// own arguments required by this example
2024-10-10 22:57:42 +02:00
common_params params;
2024-06-04 20:23:39 +02:00
2024-10-10 22:57:42 +02:00
if (!common_params_parse(argc, argv, params, LLAMA_EXAMPLE_SERVER)) {
2024-06-04 20:23:39 +02:00
return 1;
}
2024-10-10 22:57:42 +02:00
common_init();
2024-09-15 19:46:12 +02:00
2023-06-17 13:53:04 +02:00
// struct that contains llama context and inference
2024-03-07 10:41:53 +01:00
server_context ctx_server;
2023-06-17 13:53:04 +02:00
2024-02-16 10:31:07 +01:00
llama_backend_init();
llama_numa_init(params.numa);
2023-06-17 13:53:04 +02:00
2024-09-15 19:46:12 +02:00
LOG_INF("system info: n_threads = %d, n_threads_batch = %d, total_threads = %d\n", params.cpuparams.n_threads, params.cpuparams_batch.n_threads, std::thread::hardware_concurrency());
LOG_INF("\n");
2024-10-10 22:57:42 +02:00
LOG_INF("%s\n", common_params_get_system_info(params).c_str());
2024-09-15 19:46:12 +02:00
LOG_INF("\n");
2023-06-17 13:53:04 +02:00
2024-03-09 10:57:09 +01:00
std::unique_ptr<httplib::Server> svr;
#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
2024-06-04 20:23:39 +02:00
if (params.ssl_file_key != "" && params.ssl_file_cert != "") {
2024-09-18 08:28:20 +02:00
LOG_INF("Running with SSL: key = %s, cert = %s\n", params.ssl_file_key.c_str(), params.ssl_file_cert.c_str());
2024-03-09 10:57:09 +01:00
svr.reset(
2024-06-04 20:23:39 +02:00
new httplib::SSLServer(params.ssl_file_cert.c_str(), params.ssl_file_key.c_str())
2024-03-09 10:57:09 +01:00
);
} else {
2024-09-18 08:28:20 +02:00
LOG_INF("Running without SSL\n");
2024-03-09 10:57:09 +01:00
svr.reset(new httplib::Server());
}
#else
2024-09-25 14:05:13 +02:00
if (params.ssl_file_key != "" && params.ssl_file_cert != "") {
LOG_ERR("Server is built without SSL support\n");
return 1;
}
2024-03-09 10:57:09 +01:00
svr.reset(new httplib::Server());
#endif
2024-01-10 20:56:05 +01:00
2024-01-11 08:10:34 +01:00
std::atomic<server_state> state{SERVER_STATE_LOADING_MODEL};
2024-01-10 20:56:05 +01:00
2024-03-09 10:57:09 +01:00
svr->set_default_headers({{ "Server", "llama.cpp" }});
svr->set_logger(log_server_request);
2024-01-10 20:56:05 +01:00
2024-09-15 19:46:12 +02:00
auto res_error = [](httplib::Response & res, const json & error_data) {
2024-03-11 10:56:41 +01:00
json final_response { { "error", error_data } };
2024-12-07 20:21:09 +01:00
res.set_content(safe_json_to_str(final_response), MIMETYPE_JSON);
2024-03-11 10:56:41 +01:00
res.status = json_value(error_data, "code", 500);
};
2024-01-10 20:56:05 +01:00
2024-09-15 19:46:12 +02:00
auto res_ok = [](httplib::Response & res, const json & data) {
2024-12-07 20:21:09 +01:00
res.set_content(safe_json_to_str(data), MIMETYPE_JSON);
2024-09-02 17:11:51 +02:00
res.status = 200;
};
2024-12-07 17:02:05 +01:00
svr->set_exception_handler([&res_error](const httplib::Request &, httplib::Response & res, const std::exception_ptr & ep) {
2024-03-11 10:56:41 +01:00
std::string message;
2024-03-07 10:41:53 +01:00
try {
2024-09-15 19:46:12 +02:00
std::rethrow_exception(ep);
2024-12-07 17:02:05 +01:00
} catch (const std::exception & e) {
2024-03-11 10:56:41 +01:00
message = e.what();
2024-03-07 10:41:53 +01:00
} catch (...) {
2024-03-11 10:56:41 +01:00
message = "Unknown Exception";
2024-03-07 10:41:53 +01:00
}
2024-03-11 10:56:41 +01:00
json formatted_error = format_error_response(message, ERROR_TYPE_SERVER);
2024-09-15 19:46:12 +02:00
LOG_WRN("got exception: %s\n", formatted_error.dump().c_str());
2024-03-11 10:56:41 +01:00
res_error(res, formatted_error);
2024-03-07 10:41:53 +01:00
});
2024-03-11 10:56:41 +01:00
svr->set_error_handler([&res_error](const httplib::Request &, httplib::Response & res) {
2024-03-07 10:41:53 +01:00
if (res.status == 404) {
2024-03-11 10:56:41 +01:00
res_error(res, format_error_response("File Not Found", ERROR_TYPE_NOT_FOUND));
2024-03-07 10:41:53 +01:00
}
2024-03-11 10:56:41 +01:00
// for other error codes, we skip processing here because it's already done by res_error()
2024-03-07 10:41:53 +01:00
});
2024-01-10 20:56:05 +01:00
// set timeouts and change hostname and port
2024-06-04 20:23:39 +02:00
svr->set_read_timeout(params.timeout_read);
svr->set_write_timeout(params.timeout_write);
2024-01-10 20:56:05 +01:00
std::unordered_map<std::string, std::string> log_data;
2024-03-07 10:41:53 +01:00
2024-06-04 20:23:39 +02:00
log_data["hostname"] = params.hostname;
log_data["port"] = std::to_string(params.port);
2024-01-10 20:56:05 +01:00
2024-06-04 20:23:39 +02:00
if (params.api_keys.size() == 1) {
auto key = params.api_keys[0];
2024-03-09 11:27:53 +01:00
log_data["api_key"] = "api_key: ****" + key.substr(std::max((int)(key.length() - 4), 0));
2024-06-04 20:23:39 +02:00
} else if (params.api_keys.size() > 1) {
log_data["api_key"] = "api_key: " + std::to_string(params.api_keys.size()) + " keys loaded";
2024-01-10 20:56:05 +01:00
}
2024-06-08 09:50:31 +02:00
// Necessary similarity of prompt for slot selection
ctx_server.slot_prompt_similarity = params.slot_prompt_similarity;
2024-03-09 11:27:53 +01:00
//
// Middlewares
//
2024-12-03 19:38:44 +01:00
auto middleware_validate_api_key = [&params, &res_error](const httplib::Request & req, httplib::Response & res) {
2024-10-08 13:27:04 +02:00
static const std::unordered_set<std::string> public_endpoints = {
"/health",
"/models",
"/v1/models",
2024-03-09 11:27:53 +01:00
};
2023-12-15 12:49:01 +01:00
// If API key is not set, skip validation
2024-06-04 20:23:39 +02:00
if (params.api_keys.empty()) {
2023-12-15 12:49:01 +01:00
return true;
}
2024-11-15 10:48:49 +01:00
// If path is public or is static file, skip validation
2024-12-03 19:38:44 +01:00
if (public_endpoints.find(req.path) != public_endpoints.end() || req.path == "/") {
2024-03-09 11:27:53 +01:00
return true;
}
2023-12-15 12:49:01 +01:00
// Check for API key in the header
auto auth_header = req.get_header_value("Authorization");
2024-03-07 10:41:53 +01:00
2023-12-15 12:49:01 +01:00
std::string prefix = "Bearer ";
if (auth_header.substr(0, prefix.size()) == prefix) {
std::string received_api_key = auth_header.substr(prefix.size());
2024-06-04 20:23:39 +02:00
if (std::find(params.api_keys.begin(), params.api_keys.end(), received_api_key) != params.api_keys.end()) {
2023-12-15 12:49:01 +01:00
return true; // API key is valid
}
}
// API key is invalid or not provided
2024-03-11 10:56:41 +01:00
res_error(res, format_error_response("Invalid API Key", ERROR_TYPE_AUTHENTICATION));
2023-12-15 12:49:01 +01:00
2024-09-15 19:46:12 +02:00
LOG_WRN("Unauthorized: Invalid API Key\n");
2023-12-15 12:49:01 +01:00
return false;
};
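For reference, here is roughly what an authorized client call looks like from the other side of this middleware, using cpp-httplib's client API (which this example already ships with). The host, port and key below are placeholders, and the snippet assumes the server was started with --slots and an --api-key:

#include "httplib.h"

// illustrative client-side sketch of passing the Bearer token that
// middleware_validate_api_key checks above
static void example_authorized_request() {
    httplib::Client cli("http://localhost:8080");
    httplib::Headers headers = {
        { "Authorization", "Bearer my-secret-key" }, // must match one of params.api_keys
    };
    // "/slots" is not in public_endpoints, so the key is required here
    auto res = cli.Get("/slots", headers);
    (void) res; // inspect res->status / res->body in real code
}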
2024-09-13 14:23:11 +02:00
auto middleware_server_state = [&res_error, &state](const httplib::Request & req, httplib::Response & res) {
2024-08-16 17:19:05 +02:00
server_state current_state = state.load();
if (current_state == SERVER_STATE_LOADING_MODEL) {
2024-10-25 17:57:54 +02:00
auto tmp = string_split<std::string>(req.path, '.');
2024-09-13 14:23:11 +02:00
if (req.path == "/" || tmp.back() == "html") {
res.set_content(reinterpret_cast<const char *>(loading_html), loading_html_len, "text/html; charset=utf-8");
res.status = 503;
} else {
res_error(res, format_error_response("Loading model", ERROR_TYPE_UNAVAILABLE));
}
2024-08-16 17:19:05 +02:00
return false;
}
return true;
};
2024-03-09 11:27:53 +01:00
// register server middlewares
2024-08-16 17:19:05 +02:00
svr->set_pre_routing_handler([&middleware_validate_api_key, &middleware_server_state](const httplib::Request & req, httplib::Response & res) {
res.set_header("Access-Control-Allow-Origin", req.get_header_value("Origin"));
2024-11-07 22:31:10 +01:00
// If this is OPTIONS request, skip validation because browsers don't include Authorization header
if (req.method == "OPTIONS") {
res.set_header("Access-Control-Allow-Credentials", "true");
res.set_header("Access-Control-Allow-Methods", "GET, POST");
res.set_header("Access-Control-Allow-Headers", "*");
res.set_content("", "text/html"); // blank response, no data
return httplib::Server::HandlerResponse::Handled; // skip further processing
}
2024-08-16 17:19:05 +02:00
if (!middleware_server_state(req, res)) {
return httplib::Server::HandlerResponse::Handled;
}
2024-03-09 11:27:53 +01:00
if (!middleware_validate_api_key(req, res)) {
return httplib::Server::HandlerResponse::Handled;
}
return httplib::Server::HandlerResponse::Unhandled;
2024-03-07 10:41:53 +01:00
});
2023-07-05 22:51:13 +02:00
2024-03-09 11:27:53 +01:00
//
// Route handlers (or controllers)
//
2023-07-04 16:05:27 +02:00
2024-08-16 17:19:05 +02:00
const auto handle_health = [&](const httplib::Request &, httplib::Response & res) {
// error and loading states are handled by middleware
json health = {{ "status", "ok" }};
2024-09-02 17:11:51 +02:00
res_ok(res, health);
2024-03-09 11:27:53 +01:00
};
2024-08-16 17:19:05 +02:00
const auto handle_slots = [&](const httplib::Request & req, httplib::Response & res) {
2024-06-04 20:23:39 +02:00
if (!params.endpoint_slots) {
2024-10-08 13:27:04 +02:00
res_error(res, format_error_response("This server does not support slots endpoint. Start it with `--slots`", ERROR_TYPE_NOT_SUPPORTED));
2024-03-09 11:27:53 +01:00
return;
}
// request slots data using task queue
2024-12-07 20:21:09 +01:00
server_task task(SERVER_TASK_TYPE_METRICS);
2024-03-09 11:27:53 +01:00
task.id = ctx_server.queue_tasks.get_new_id();
ctx_server.queue_results.add_waiting_task_id(task.id);
2024-09-06 23:21:29 +02:00
ctx_server.queue_tasks.post(task, true); // high-priority task
2024-03-09 11:27:53 +01:00
// get the result
2024-12-06 11:14:32 +01:00
server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
2024-03-09 11:27:53 +01:00
ctx_server.queue_results.remove_waiting_task_id(task.id);
2024-12-06 11:14:32 +01:00
if (result->is_error()) {
res_error(res, result->to_json());
return;
}
// TODO: get rid of this dynamic_cast
auto res_metrics = dynamic_cast<server_task_result_metrics *>(result.get());
GGML_ASSERT(res_metrics != nullptr);
2024-08-16 17:19:05 +02:00
// optionally return "fail_on_no_slot" error
if (req.has_param("fail_on_no_slot")) {
2024-12-06 11:14:32 +01:00
if (res_metrics->n_idle_slots == 0) {
2024-08-16 17:19:05 +02:00
res_error(res, format_error_response("no slot available", ERROR_TYPE_UNAVAILABLE));
return;
}
}
2024-12-06 11:14:32 +01:00
res_ok(res, res_metrics->slots_data);
2024-03-09 11:27:53 +01:00
};
const auto handle_metrics = [&](const httplib::Request &, httplib::Response & res) {
2024-06-04 20:23:39 +02:00
if (!params.endpoint_metrics) {
2024-08-16 17:19:05 +02:00
res_error(res, format_error_response("This server does not support metrics endpoint. Start it with `--metrics`", ERROR_TYPE_NOT_SUPPORTED));
2024-03-09 11:27:53 +01:00
return;
}
// request slots data using task queue
2024-12-07 20:21:09 +01:00
server_task task(SERVER_TASK_TYPE_METRICS);
2024-03-09 11:27:53 +01:00
task.id = ctx_server.queue_tasks.get_new_id();
2024-12-07 20:21:09 +01:00
task.metrics_reset_bucket = true;
2024-03-09 11:27:53 +01:00
ctx_server.queue_results.add_waiting_task_id(task.id);
2024-09-06 23:21:29 +02:00
ctx_server.queue_tasks.post(task, true); // high-priority task
2024-03-09 11:27:53 +01:00
// get the result
2024-12-06 11:14:32 +01:00
server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
2024-03-09 11:27:53 +01:00
ctx_server.queue_results.remove_waiting_task_id(task.id);
2024-12-06 11:14:32 +01:00
if (result->is_error()) {
res_error(res, result->to_json());
return;
}
2024-09-06 23:21:29 +02:00
2024-12-06 11:14:32 +01:00
// TODO: get rid of this dynamic_cast
auto res_metrics = dynamic_cast<server_task_result_metrics *>(result.get());
GGML_ASSERT(res_metrics != nullptr);
2024-03-09 11:27:53 +01:00
// metrics definition: https://prometheus.io/docs/practices/naming/#metric-names
json all_metrics_def = json {
{ "counter", {{
{ "name", "prompt_tokens_total" },
{ "help", "Number of prompt tokens processed." },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->n_prompt_tokens_processed_total }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "prompt_seconds_total" },
{ "help", "Prompt process time" },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->t_prompt_processing_total / 1.e3 }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "tokens_predicted_total" },
{ "help", "Number of generation tokens processed." },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->n_tokens_predicted_total }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "tokens_predicted_seconds_total" },
{ "help", "Predict process time" },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->t_tokens_generation_total / 1.e3 }
2024-09-06 23:21:29 +02:00
}, {
{ "name", "n_decode_total" },
{ "help", "Total number of llama_decode() calls" },
2024-12-06 11:14:32 +01:00
{ "value", res_metrics->n_decode_total }
2024-09-06 23:21:29 +02:00
}, {
{ "name", "n_busy_slots_per_decode" },
{ "help", "Average number of busy slots per llama_decode() call" },
2024-12-06 11:14:32 +01:00
{ "value", (float) res_metrics->n_busy_slots_total / (float) res_metrics->n_decode_total }
2024-03-09 11:27:53 +01:00
}}},
{ "gauge", {{
{ "name", "prompt_tokens_seconds" },
{ "help", "Average prompt throughput in tokens/s." },
2024-12-06 11:14:32 +01:00
{ "value", res_metrics->n_prompt_tokens_processed ? 1.e3 / res_metrics->t_prompt_processing * res_metrics->n_prompt_tokens_processed : 0. }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "predicted_tokens_seconds" },
{ "help", "Average generation throughput in tokens/s." },
2024-12-06 11:14:32 +01:00
{ "value", res_metrics->n_tokens_predicted ? 1.e3 / res_metrics->t_tokens_generation * res_metrics->n_tokens_predicted : 0. }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "kv_cache_usage_ratio" },
{ "help", "KV-cache usage. 1 means 100 percent usage." },
2024-12-06 11:14:32 +01:00
{ "value", 1. * res_metrics->kv_cache_used_cells / params.n_ctx }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "kv_cache_tokens" },
{ "help", "KV-cache tokens." },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->kv_cache_tokens_count }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "requests_processing" },
{ "help", "Number of requests being processed." },
2024-12-06 11:14:32 +01:00
{ "value", (uint64_t) res_metrics->n_processing_slots }
2024-03-09 11:27:53 +01:00
}, {
{ "name", "requests_deferred" },
{ "help", "Number of requests deferred." },
2024-12-06 11:14:32 +01:00
{ " value " , ( uint64_t ) res_metrics - > n_tasks_deferred }
2024-03-09 11:27:53 +01:00
} } }
} ;
std : : stringstream prometheus ;
for ( const auto & el : all_metrics_def . items ( ) ) {
const auto & type = el . key ( ) ;
const auto & metrics_def = el . value ( ) ;
for ( const auto & metric_def : metrics_def ) {
2024-05-08 21:53:08 +02:00
const std : : string name = metric_def . at ( " name " ) ;
const std : : string help = metric_def . at ( " help " ) ;
2024-03-09 11:27:53 +01:00
auto value = json_value ( metric_def , " value " , 0. ) ;
prometheus < < " # HELP llamacpp: " < < name < < " " < < help < < " \n "
< < " # TYPE llamacpp: " < < name < < " " < < type < < " \n "
< < " llamacpp: " < < name < < " " < < value < < " \n " ;
}
}
2024-12-06 11:14:32 +01:00
res . set_header ( " Process-Start-Time-Unix " , std : : to_string ( res_metrics - > t_start ) ) ;
2024-03-09 11:27:53 +01:00
res . set_content ( prometheus . str ( ) , " text/plain; version=0.0.4 " ) ;
res . status = 200 ; // HTTP OK
} ;
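
    // Illustrative only: given the exposition loop above, each metric is emitted in the
    // standard Prometheus text format, roughly like this (the numeric value is hypothetical):
    //
    //   # HELP llamacpp:prompt_tokens_total Number of prompt tokens processed.
    //   # TYPE llamacpp:prompt_tokens_total counter
    //   llamacpp:prompt_tokens_total 128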

    const auto handle_slots_save = [&ctx_server, &res_error, &res_ok, &params](const httplib::Request & req, httplib::Response & res, int id_slot) {
        json request_data = json::parse(req.body);
        std::string filename = request_data.at("filename");
        if (!fs_validate_filename(filename)) {
            res_error(res, format_error_response("Invalid filename", ERROR_TYPE_INVALID_REQUEST));
            return;
        }
        std::string filepath = params.slot_save_path + filename;

        server_task task(SERVER_TASK_TYPE_SLOT_SAVE);
        task.id = ctx_server.queue_tasks.get_new_id();
        task.slot_action.slot_id  = id_slot;
        task.slot_action.filename = filename;
        task.slot_action.filepath = filepath;

        ctx_server.queue_results.add_waiting_task_id(task.id);
        ctx_server.queue_tasks.post(task);

        server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
        ctx_server.queue_results.remove_waiting_task_id(task.id);

        if (result->is_error()) {
            res_error(res, result->to_json());
            return;
        }

        res_ok(res, result->to_json());
    };

    const auto handle_slots_restore = [&ctx_server, &res_error, &res_ok, &params](const httplib::Request & req, httplib::Response & res, int id_slot) {
        json request_data = json::parse(req.body);
        std::string filename = request_data.at("filename");
        if (!fs_validate_filename(filename)) {
            res_error(res, format_error_response("Invalid filename", ERROR_TYPE_INVALID_REQUEST));
            return;
        }
        std::string filepath = params.slot_save_path + filename;

        server_task task(SERVER_TASK_TYPE_SLOT_RESTORE);
        task.id = ctx_server.queue_tasks.get_new_id();
        task.slot_action.slot_id  = id_slot;
        task.slot_action.filename = filename;
        task.slot_action.filepath = filepath;

        ctx_server.queue_results.add_waiting_task_id(task.id);
        ctx_server.queue_tasks.post(task);

        server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
        ctx_server.queue_results.remove_waiting_task_id(task.id);

        if (result->is_error()) {
            res_error(res, result->to_json());
            return;
        }

        GGML_ASSERT(dynamic_cast<server_task_result_slot_save_load *>(result.get()) != nullptr);
        res_ok(res, result->to_json());
    };

    const auto handle_slots_erase = [&ctx_server, &res_error, &res_ok](const httplib::Request & /* req */, httplib::Response & res, int id_slot) {
        server_task task(SERVER_TASK_TYPE_SLOT_ERASE);
        task.id = ctx_server.queue_tasks.get_new_id();
        task.slot_action.slot_id = id_slot;

        ctx_server.queue_results.add_waiting_task_id(task.id);
        ctx_server.queue_tasks.post(task);

        server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
        ctx_server.queue_results.remove_waiting_task_id(task.id);

        if (result->is_error()) {
            res_error(res, result->to_json());
            return;
        }

        GGML_ASSERT(dynamic_cast<server_task_result_slot_erase *>(result.get()) != nullptr);
        res_ok(res, result->to_json());
    };

    const auto handle_slots_action = [&params, &res_error, &handle_slots_save, &handle_slots_restore, &handle_slots_erase](const httplib::Request & req, httplib::Response & res) {
        if (params.slot_save_path.empty()) {
            res_error(res, format_error_response("This server does not support slots action. Start it with `--slot-save-path`", ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        std::string id_slot_str = req.path_params.at("id_slot");
        int id_slot;

        try {
            id_slot = std::stoi(id_slot_str);
        } catch (const std::exception &) {
            res_error(res, format_error_response("Invalid slot ID", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        std::string action = req.get_param_value("action");

        if (action == "save") {
            handle_slots_save(req, res, id_slot);
        } else if (action == "restore") {
            handle_slots_restore(req, res, id_slot);
        } else if (action == "erase") {
            handle_slots_erase(req, res, id_slot);
        } else {
            res_error(res, format_error_response("Invalid action", ERROR_TYPE_INVALID_REQUEST));
        }
    };
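
    // Illustrative usage sketch (assumes the server listens on localhost:8080 and was
    // started with `--slot-save-path`; the filename is a hypothetical example):
    //
    //   curl -X POST "http://localhost:8080/slots/0?action=save"    -d '{"filename": "slot0.bin"}'
    //   curl -X POST "http://localhost:8080/slots/0?action=restore" -d '{"filename": "slot0.bin"}'
    //   curl -X POST "http://localhost:8080/slots/0?action=erase"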

    const auto handle_props = [&ctx_server, &res_ok](const httplib::Request &, httplib::Response & res) {
        // this endpoint is publicly available, please only return what is safe to be exposed
        json data = {
            { "default_generation_settings", ctx_server.default_generation_settings_for_props },
            { "total_slots",                 ctx_server.params_base.n_parallel },
            { "model_path",                  ctx_server.params_base.model },
            { "chat_template",               llama_get_chat_template(ctx_server.model) },
        };

        res_ok(res, data);
    };

    const auto handle_props_change = [&ctx_server, &res_error, &res_ok](const httplib::Request & req, httplib::Response & res) {
        if (!ctx_server.params_base.endpoint_props) {
            res_error(res, format_error_response("This server does not support changing global properties. Start it with `--props`", ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        json data = json::parse(req.body);

        // update any props here

        res_ok(res, {{ "success", true }});
    };

    // handle completion-like requests (completion, chat, infill)
    // we can optionally provide a custom format for partial results and final results
    const auto handle_completions_generic = [&ctx_server, &res_error, &res_ok](
            server_task_type type,
            json & data,
            httplib::Response & res,
            bool oaicompat = false,
            bool oaicompat_chat = false) {
        GGML_ASSERT(type == SERVER_TASK_TYPE_COMPLETION || type == SERVER_TASK_TYPE_INFILL);

        if (ctx_server.params_base.embedding) {
            res_error(res, format_error_response("This server does not support completions. Start it without `--embeddings`", ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        auto completion_id = gen_chatcmplid();
        std::vector<server_task> tasks;

        try {
            std::vector<llama_tokens> tokenized_prompts = tokenize_input_prompts(ctx_server.ctx, data.at("prompt"), true, true);
            tasks.reserve(tokenized_prompts.size());
            for (size_t i = 0; i < tokenized_prompts.size(); i++) {
                server_task task = server_task(type);

                task.id    = ctx_server.queue_tasks.get_new_id();
                task.index = i;

                task.prompt_tokens    = std::move(tokenized_prompts[i]);
                task.params           = server_task::params_from_json_cmpl(ctx_server.model, ctx_server.params_base, data);
                task.id_selected_slot = json_value(data, "id_slot", -1);

                // OAI-compat
                task.params.oaicompat         = oaicompat;
                task.params.oaicompat_chat    = oaicompat_chat;
                task.params.oaicompat_cmpl_id = completion_id;
                // oaicompat_model is already populated by params_from_json_cmpl

                tasks.push_back(task);
            }
        } catch (const std::exception & e) {
            res_error(res, format_error_response(e.what(), ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        ctx_server.queue_results.add_waiting_tasks(tasks);
        ctx_server.queue_tasks.post(tasks);

        bool stream = json_value(data, "stream", false);
        const auto task_ids = server_task::get_list_id(tasks);

        if (!stream) {
            ctx_server.receive_multi_results(task_ids, [&](std::vector<server_task_result_ptr> & results) {
                if (results.size() == 1) {
                    // single result
                    res_ok(res, results[0]->to_json());
                } else {
                    // multiple results (multitask)
                    json arr = json::array();
                    for (auto & res : results) {
                        arr.push_back(res->to_json());
                    }
                    res_ok(res, arr);
                }
            }, [&](const json & error_data) {
                res_error(res, error_data);
            });

            ctx_server.queue_results.remove_waiting_task_ids(task_ids);
        } else {
            const auto chunked_content_provider = [task_ids, &ctx_server, oaicompat](size_t, httplib::DataSink & sink) {
                ctx_server.receive_cmpl_results_stream(task_ids, [&](server_task_result_ptr & result) -> bool {
                    json res_json = result->to_json();
                    if (res_json.is_array()) {
                        for (const auto & res : res_json) {
                            if (!server_sent_event(sink, "data", res)) {
                                return false;
                            }
                        }
                        return true;
                    } else {
                        return server_sent_event(sink, "data", res_json);
                    }
                }, [&](const json & error_data) {
                    server_sent_event(sink, "error", error_data);
                });

                if (oaicompat) {
                    static const std::string ev_done = "data: [DONE]\n\n";
                    sink.write(ev_done.data(), ev_done.size());
                }

                sink.done();
                return false;
            };

            auto on_complete = [task_ids, &ctx_server](bool) {
                ctx_server.queue_results.remove_waiting_task_ids(task_ids);
            };

            res.set_chunked_content_provider("text/event-stream", chunked_content_provider, on_complete);
        }
    };

    const auto handle_completions = [&handle_completions_generic](const httplib::Request & req, httplib::Response & res) {
        json data = json::parse(req.body);
        return handle_completions_generic(
            SERVER_TASK_TYPE_COMPLETION,
            data,
            res,
            /* oaicompat      */ false,
            /* oaicompat_chat */ false);
    };
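
    // Illustrative usage sketch (assumes the server listens on localhost:8080; the prompt
    // is a hypothetical example):
    //
    //   curl -X POST http://localhost:8080/completion \
    //        -H "Content-Type: application/json" \
    //        -d '{"prompt": "Once upon a time", "stream": false}'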

    const auto handle_infill = [&ctx_server, &res_error, &handle_completions_generic](const httplib::Request & req, httplib::Response & res) {
        // check model compatibility
        std::string err;
        if (llama_token_fim_pre(ctx_server.model) == LLAMA_TOKEN_NULL) {
            err += "prefix token is missing. ";
        }
        if (llama_token_fim_suf(ctx_server.model) == LLAMA_TOKEN_NULL) {
            err += "suffix token is missing. ";
        }
        if (llama_token_fim_mid(ctx_server.model) == LLAMA_TOKEN_NULL) {
            err += "middle token is missing. ";
        }
        if (!err.empty()) {
            res_error(res, format_error_response(string_format("Infill is not supported by this model: %s", err.c_str()), ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        json data = json::parse(req.body);

        // validate input
        if (data.contains("prompt") && !data.at("prompt").is_string()) {
            // prompt is optional, but when present it must be a string
            res_error(res, format_error_response("\"prompt\" must be a string", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        if (!data.contains("input_prefix")) {
            res_error(res, format_error_response("\"input_prefix\" is required", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        if (!data.contains("input_suffix")) {
            res_error(res, format_error_response("\"input_suffix\" is required", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        if (data.contains("input_extra") && !data.at("input_extra").is_array()) {
            // input_extra is optional, but when present it must be an array
            res_error(res, format_error_response("\"input_extra\" must be an array of {\"filename\": string, \"text\": string}", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        json input_extra = json_value(data, "input_extra", json::array());
        for (const auto & chunk : input_extra) {
            // { "text": string, "filename": string }
            if (!chunk.contains("text") || !chunk.at("text").is_string()) {
                res_error(res, format_error_response("extra_context chunk must contain a \"text\" field with a string value", ERROR_TYPE_INVALID_REQUEST));
                return;
            }
            // filename is optional
            if (chunk.contains("filename") && !chunk.at("filename").is_string()) {
                res_error(res, format_error_response("extra_context chunk's \"filename\" field must be a string", ERROR_TYPE_INVALID_REQUEST));
                return;
            }
        }
        data["input_extra"] = input_extra; // default to an empty array if it does not exist

        std::string prompt = json_value(data, "prompt", std::string());
        std::vector<llama_tokens> tokenized_prompts = tokenize_input_prompts(ctx_server.ctx, prompt, true, true);
        SRV_DBG("creating infill tasks, n_prompts = %d\n", (int) tokenized_prompts.size());
        data["prompt"] = format_infill(
            ctx_server.ctx,
            data.at("input_prefix"),
            data.at("input_suffix"),
            data.at("input_extra"),
            ctx_server.params_base.n_batch,
            ctx_server.params_base.n_predict,
            ctx_server.slots[0].n_ctx, // TODO: there should be a better way
            ctx_server.params_base.spm_infill,
            tokenized_prompts[0]
        );

        return handle_completions_generic(SERVER_TASK_TYPE_INFILL, data, res);
    };
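
    // Illustrative usage sketch (assumes localhost:8080 and a model that provides FIM
    // tokens; the code fragments are hypothetical):
    //
    //   curl -X POST http://localhost:8080/infill \
    //        -d '{"input_prefix": "int add(int a, int b) {\n    return ", "input_suffix": ";\n}\n"}'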

    const auto handle_chat_completions = [&ctx_server, &params, &res_error, &handle_completions_generic](const httplib::Request & req, httplib::Response & res) {
        if (ctx_server.params_base.embedding) {
            res_error(res, format_error_response("This server does not support completions. Start it without `--embeddings`", ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        json data = oaicompat_completion_params_parse(ctx_server.model, json::parse(req.body), params.chat_template);

        return handle_completions_generic(
            SERVER_TASK_TYPE_COMPLETION,
            data,
            res,
            /* oaicompat      */ true,
            /* oaicompat_chat */ true);
    };
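
    // Illustrative usage sketch of the OpenAI-compatible chat endpoint (assumes
    // localhost:8080; the message content is hypothetical):
    //
    //   curl -X POST http://localhost:8080/v1/chat/completions \
    //        -H "Content-Type: application/json" \
    //        -d '{"messages": [{"role": "user", "content": "Hello!"}], "stream": true}'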

    const auto handle_models = [&params, &ctx_server, &res_ok](const httplib::Request &, httplib::Response & res) {
        json models = {
            {"object", "list"},
            {"data", {
                {
                    {"id",       params.model_alias},
                    {"object",   "model"},
                    {"created",  std::time(0)},
                    {"owned_by", "llamacpp"},
                    {"meta",     ctx_server.model_meta()}
                },
            }}
        };

        res_ok(res, models);
    };

    const auto handle_tokenize = [&ctx_server, &res_ok](const httplib::Request & req, httplib::Response & res) {
        const json body = json::parse(req.body);

        json tokens_response = json::array();
        if (body.count("content") != 0) {
            const bool add_special = json_value(body, "add_special", false);
            const bool with_pieces = json_value(body, "with_pieces", false);

            llama_tokens tokens = tokenize_mixed(ctx_server.ctx, body.at("content"), add_special, true);

            if (with_pieces) {
                for (const auto & token : tokens) {
                    std::string piece = common_token_to_piece(ctx_server.ctx, token);
                    json piece_json;

                    // Check if the piece is valid UTF-8
                    if (is_valid_utf8(piece)) {
                        piece_json = piece;
                    } else {
                        // If not valid UTF-8, store as array of byte values
                        piece_json = json::array();
                        for (unsigned char c : piece) {
                            piece_json.push_back(static_cast<int>(c));
                        }
                    }

                    tokens_response.push_back({
                        {"id",    token},
                        {"piece", piece_json}
                    });
                }
            } else {
                tokens_response = tokens;
            }
        }

        const json data = format_tokenizer_response(tokens_response);
        res_ok(res, data);
    };
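
    // Illustrative usage sketch (assumes localhost:8080; the text is hypothetical):
    //
    //   curl -X POST http://localhost:8080/tokenize \
    //        -d '{"content": "Hello world", "add_special": true, "with_pieces": true}'
    //
    // With "with_pieces" set, each entry in the response carries both the token "id" and
    // its "piece" (or an array of byte values when the piece is not valid UTF-8).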

    const auto handle_detokenize = [&ctx_server, &res_ok](const httplib::Request & req, httplib::Response & res) {
        const json body = json::parse(req.body);

        std::string content;
        if (body.count("tokens") != 0) {
            const llama_tokens tokens = body.at("tokens");
            content = tokens_to_str(ctx_server.ctx, tokens.cbegin(), tokens.cend());
        }

        const json data = format_detokenized_response(content);
        res_ok(res, data);
    };

    const auto handle_embeddings = [&ctx_server, &res_error, &res_ok](const httplib::Request & req, httplib::Response & res) {
        const json body = json::parse(req.body);

        bool oaicompat = false;

        // an input prompt can be a string or a list of tokens (integer)
        json prompt;
        if (body.count("input") != 0) {
            oaicompat = true;
            prompt = body.at("input");
        } else if (body.count("content") != 0) {
            // with "content", we only support single prompt
            prompt = std::vector<std::string>{body.at("content")};
        } else {
            res_error(res, format_error_response("\"input\" or \"content\" must be provided", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        // create and queue the task
        json responses = json::array();
        bool error = false;
        {
            std::vector<server_task> tasks;
            std::vector<llama_tokens> tokenized_prompts = tokenize_input_prompts(ctx_server.ctx, prompt, /* add_special */ false, true);
            for (size_t i = 0; i < tokenized_prompts.size(); i++) {
                server_task task = server_task(SERVER_TASK_TYPE_EMBEDDING);

                task.id            = ctx_server.queue_tasks.get_new_id();
                task.index         = i;
                task.prompt_tokens = std::move(tokenized_prompts[i]);

                tasks.push_back(task);
            }

            ctx_server.queue_results.add_waiting_tasks(tasks);
            ctx_server.queue_tasks.post(tasks);

            // get the result
            std::unordered_set<int> task_ids = server_task::get_list_id(tasks);

            ctx_server.receive_multi_results(task_ids, [&](std::vector<server_task_result_ptr> & results) {
                for (auto & res : results) {
                    GGML_ASSERT(dynamic_cast<server_task_result_embd *>(res.get()) != nullptr);
                    responses.push_back(res->to_json());
                }
            }, [&](const json & error_data) {
                res_error(res, error_data);
                error = true;
            });

            ctx_server.queue_results.remove_waiting_task_ids(task_ids);
        }

        if (error) {
            return;
        }

        // write JSON response
        json root = oaicompat
            ? format_embeddings_response_oaicompat(body, responses)
            : responses.size() == 1 ? responses[0] : json(responses);
        res_ok(res, root);
    };
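
    // Illustrative usage sketch (assumes localhost:8080 and a server configured for
    // embeddings; the input text is hypothetical):
    //
    //   curl -X POST http://localhost:8080/v1/embeddings \
    //        -d '{"input": "The quick brown fox"}'
    //
    // Using "input" yields the OpenAI-compatible response shape; the legacy "content"
    // field returns the plain (non-OAI) format.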

    const auto handle_rerank = [&ctx_server, &res_error, &res_ok](const httplib::Request & req, httplib::Response & res) {
        if (!ctx_server.params_base.reranking || ctx_server.params_base.embedding) {
            res_error(res, format_error_response("This server does not support reranking. Start it with `--reranking` and without `--embedding`", ERROR_TYPE_NOT_SUPPORTED));
            return;
        }

        const json body = json::parse(req.body);

        // TODO: implement
        //int top_n = 1;
        //if (body.count("top_n") != 1) {
        //    top_n = body.at("top_n");
        //} else {
        //    res_error(res, format_error_response("\"top_n\" must be provided", ERROR_TYPE_INVALID_REQUEST));
        //    return;
        //}

        json query;
        if (body.count("query") == 1) {
            query = body.at("query");
            if (!query.is_string()) {
                res_error(res, format_error_response("\"query\" must be a string", ERROR_TYPE_INVALID_REQUEST));
                return;
            }
        } else {
            res_error(res, format_error_response("\"query\" must be provided", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        std::vector<std::string> documents = json_value(body, "documents", std::vector<std::string>());
        if (documents.empty()) {
            res_error(res, format_error_response("\"documents\" must be a non-empty string array", ERROR_TYPE_INVALID_REQUEST));
            return;
        }

        llama_tokens tokenized_query = tokenize_input_prompts(ctx_server.ctx, query, /* add_special */ false, true)[0];

        // create and queue the task
        json responses = json::array();
        bool error = false;
        {
            std::vector<server_task> tasks;
            std::vector<llama_tokens> tokenized_docs = tokenize_input_prompts(ctx_server.ctx, documents, /* add_special */ false, true);
            tasks.reserve(tokenized_docs.size());
            for (size_t i = 0; i < tokenized_docs.size(); i++) {
                server_task task = server_task(SERVER_TASK_TYPE_RERANK);

                task.id            = ctx_server.queue_tasks.get_new_id();
                task.index         = i;
                task.prompt_tokens = format_rerank(ctx_server.model, tokenized_query, tokenized_docs[i]);

                tasks.push_back(task);
            }

            ctx_server.queue_results.add_waiting_tasks(tasks);
            ctx_server.queue_tasks.post(tasks);

            // get the result
            std::unordered_set<int> task_ids = server_task::get_list_id(tasks);

            ctx_server.receive_multi_results(task_ids, [&](std::vector<server_task_result_ptr> & results) {
                for (auto & res : results) {
                    GGML_ASSERT(dynamic_cast<server_task_result_rerank *>(res.get()) != nullptr);
                    responses.push_back(res->to_json());
                }
            }, [&](const json & error_data) {
                res_error(res, error_data);
                error = true;
            });
        }

        if (error) {
            return;
        }

        // write JSON response
        json root = format_response_rerank(body, responses);
        res_ok(res, root);
    };
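
    // Illustrative usage sketch (assumes localhost:8080 and a server started with
    // `--reranking`; the query and documents are hypothetical):
    //
    //   curl -X POST http://localhost:8080/v1/rerank \
    //        -d '{"query": "machine learning", "documents": ["a text about cooking", "an intro to neural networks"]}'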

    const auto handle_lora_adapters_list = [&](const httplib::Request &, httplib::Response & res) {
        json result = json::array();
        for (size_t i = 0; i < ctx_server.loras.size(); ++i) {
            auto & lora = ctx_server.loras[i];
            result.push_back({
                {"id",    i},
                {"path",  lora.path},
                {"scale", lora.scale},
            });
        }
        res_ok(res, result);
        res.status = 200; // HTTP OK
    };

    const auto handle_lora_adapters_apply = [&](const httplib::Request & req, httplib::Response & res) {
        const std::vector<json> body = json::parse(req.body);
        int max_idx = ctx_server.loras.size();

        // clear existing value
        for (auto & lora : ctx_server.loras) {
            lora.scale = 0.0f;
        }

        // set value
        for (auto entry : body) {
            int   id    = entry.at("id");
            float scale = entry.at("scale");
            if (0 <= id && id < max_idx) {
                ctx_server.loras[id].scale = scale;
            } else {
                throw std::runtime_error("invalid adapter id");
            }
        }

        server_task task(SERVER_TASK_TYPE_SET_LORA);
        task.id = ctx_server.queue_tasks.get_new_id();
        ctx_server.queue_results.add_waiting_task_id(task.id);
        ctx_server.queue_tasks.post(task);

        server_task_result_ptr result = ctx_server.queue_results.recv(task.id);
        ctx_server.queue_results.remove_waiting_task_id(task.id);

        if (result->is_error()) {
            res_error(res, result->to_json());
            return;
        }

        GGML_ASSERT(dynamic_cast<server_task_result_apply_lora *>(result.get()) != nullptr);
        res_ok(res, result->to_json());
    };
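
    // Illustrative usage sketch (assumes localhost:8080 and at least one LoRA adapter
    // loaded at startup; the id and scale are hypothetical):
    //
    //   curl http://localhost:8080/lora-adapters                                        # list adapters
    //   curl -X POST http://localhost:8080/lora-adapters -d '[{"id": 0, "scale": 0.5}]' # set scales
    //
    // Adapters omitted from the POST body have their scale reset to 0.0.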

    //
    // Router
    //

    if (!params.webui) {
        LOG_INF("Web UI is disabled\n");
    } else {
        // register static assets routes
        if (!params.public_path.empty()) {
            // Set the base directory for serving static files
            bool is_found = svr->set_mount_point("/", params.public_path);
            if (!is_found) {
                LOG_ERR("%s: static assets path not found: %s\n", __func__, params.public_path.c_str());
                return 1;
            }
        } else {
            // using embedded static index.html
            svr->Get("/", [](const httplib::Request &, httplib::Response & res) {
                res.set_content(reinterpret_cast<const char *>(index_html), index_html_len, "text/html; charset=utf-8");
                return false;
            });
        }
    }

    // register API routes
    svr->Get ("/health",              handle_health); // public endpoint (no API key check)
    svr->Get ("/metrics",             handle_metrics);
    svr->Get ("/props",               handle_props);
    svr->Post("/props",               handle_props_change);
    svr->Get ("/models",              handle_models); // public endpoint (no API key check)
    svr->Get ("/v1/models",           handle_models); // public endpoint (no API key check)
    svr->Post("/completion",          handle_completions); // legacy
    svr->Post("/completions",         handle_completions);
    svr->Post("/v1/completions",      handle_completions);
    svr->Post("/chat/completions",    handle_chat_completions);
    svr->Post("/v1/chat/completions", handle_chat_completions);
    svr->Post("/infill",              handle_infill);
    svr->Post("/embedding",           handle_embeddings); // legacy
    svr->Post("/embeddings",          handle_embeddings);
    svr->Post("/v1/embeddings",       handle_embeddings);
    svr->Post("/rerank",              handle_rerank);
    svr->Post("/reranking",           handle_rerank);
    svr->Post("/v1/rerank",           handle_rerank);
    svr->Post("/v1/reranking",        handle_rerank);
    svr->Post("/tokenize",            handle_tokenize);
    svr->Post("/detokenize",          handle_detokenize);
    // LoRA adapters hotswap
    svr->Get ("/lora-adapters",       handle_lora_adapters_list);
    svr->Post("/lora-adapters",       handle_lora_adapters_apply);
    // Save & load slots
    svr->Get ("/slots",               handle_slots);
    svr->Post("/slots/:id_slot",      handle_slots_action);

    //
    // Start the server
    //

    if (params.n_threads_http < 1) {
        // +2 threads for monitoring endpoints
        params.n_threads_http = std::max(params.n_parallel + 2, (int32_t) std::thread::hardware_concurrency() - 1);
    }
    log_data["n_threads_http"] = std::to_string(params.n_threads_http);
    svr->new_task_queue = [&params] { return new httplib::ThreadPool(params.n_threads_http); };

    // clean up function, to be called before exit
    auto clean_up = [&svr]() {
        svr->stop();
        llama_backend_free();
    };

    // bind HTTP listen port
    bool was_bound = false;
    if (params.port == 0) {
        int bound_port = svr->bind_to_any_port(params.hostname);
        if ((was_bound = (bound_port >= 0))) {
            params.port = bound_port;
        }
    } else {
        was_bound = svr->bind_to_port(params.hostname, params.port);
    }

    if (!was_bound) {
        LOG_ERR("%s: couldn't bind HTTP server socket, hostname: %s, port: %d\n", __func__, params.hostname.c_str(), params.port);
        clean_up();
        return 1;
    }

    // run the HTTP server in a thread
    std::thread t([&]() { svr->listen_after_bind(); });
    svr->wait_until_ready();

    LOG_INF("%s: HTTP server is listening, hostname: %s, port: %d, http threads: %d\n", __func__, params.hostname.c_str(), params.port, params.n_threads_http);

    // load the model
    LOG_INF("%s: loading model\n", __func__);

    if (!ctx_server.load_model(params)) {
        clean_up();
        t.join();
        LOG_ERR("%s: exiting due to model loading error\n", __func__);
        return 1;
    }

    ctx_server.init();

    state.store(SERVER_STATE_READY);

    LOG_INF("%s: model loaded\n", __func__);

    // if a custom chat template is not supplied, we will use the one that comes with the model (if any)
    if (params.chat_template.empty()) {
        if (!ctx_server.validate_model_chat_template()) {
            LOG_WRN("%s: The chat template that comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses\n", __func__);
            params.chat_template = "chatml";
        }
    }

    // print sample chat example to make it clear which template is used
    LOG_INF("%s: chat template, built_in: %d, chat_example: '%s'\n", __func__, params.chat_template.empty(), common_chat_format_example(ctx_server.model, params.chat_template).c_str());

    ctx_server.queue_tasks.on_new_task(std::bind(
        &server_context::process_single_task, &ctx_server, std::placeholders::_1));

    ctx_server.queue_tasks.on_update_slots(std::bind(
        &server_context::update_slots, &ctx_server));

    shutdown_handler = [&](int) {
        ctx_server.queue_tasks.terminate();
    };

    // install the signal handlers before entering the blocking main loop,
    // otherwise they would only take effect after the loop has already exited
#if defined (__unix__) || (defined (__APPLE__) && defined (__MACH__))
    struct sigaction sigint_action;
    sigint_action.sa_handler = signal_handler;
    sigemptyset(&sigint_action.sa_mask);
    sigint_action.sa_flags = 0;
    sigaction(SIGINT,  &sigint_action, NULL);
    sigaction(SIGTERM, &sigint_action, NULL);
#elif defined (_WIN32)
    auto console_ctrl_handler = +[](DWORD ctrl_type) -> BOOL {
        return (ctrl_type == CTRL_C_EVENT) ? (signal_handler(SIGINT), true) : false;
    };
    SetConsoleCtrlHandler(reinterpret_cast<PHANDLER_ROUTINE>(console_ctrl_handler), true);
#endif

    LOG_INF("%s: server is listening on http://%s:%d - starting the main loop\n", __func__, params.hostname.c_str(), params.port);

    // blocks until shutdown_handler calls queue_tasks.terminate()
    ctx_server.queue_tasks.start_loop();

    clean_up();
    t.join();

    return 0;
}