Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-26 14:20:31 +01:00)
Commit 57bb2c40cd

server : fix logprobs, make it openai-compatible

* update docs
* add std::log
* return pre-sampling p
* sort before apply softmax
* add comment
* fix test
* set p for sampled token
* update docs
* add --multi-token-probs
* update docs
* add `post_sampling_probs` option
* update docs [no ci]
* remove --multi-token-probs
* "top_probs" with "post_sampling_probs"
* resolve review comments
* rename struct token_prob to prob_info
* correct comment placement
* fix setting prob for sampled token
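Per the commit message, this change makes the server's token log-probabilities OpenAI-compatible and adds a `post_sampling_probs` option (reporting probabilities after the sampling chain rather than the pre-sampling softmax). A minimal sketch of how such a request might look, assuming a llama.cpp server running at `localhost:8080` with an OpenAI-style `/v1/completions` endpoint; the exact field shapes are an assumption, not taken from this listing:

```python
# Minimal sketch (assumptions: server at localhost:8080, OpenAI-style
# /v1/completions endpoint; field shapes not confirmed by this page).
import requests

resp = requests.post(
    "http://localhost:8080/v1/completions",
    json={
        "prompt": "Hello",
        "max_tokens": 4,
        # OpenAI-style: request the top-N log-probabilities per generated token.
        "logprobs": 3,
        # Option named in the commit: report probabilities measured after
        # sampling instead of the pre-sampling softmax distribution.
        "post_sampling_probs": True,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```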
test_basic.py
test_chat_completion.py
test_completion.py
test_ctx_shift.py
test_embedding.py
test_infill.py
test_lora.py
test_rerank.py
test_security.py
test_slot_save.py
test_speculative.py
test_tokenize.py