Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-26 06:10:29 +01:00)

Commit 28103f4832:
* Server: add tests for consistent results
* sampling: separate rng per sampling context
steps/
embeddings.feature
environment.py
issues.feature
parallel.feature
passkey.feature
results.feature
security.feature
server.feature
slotsave.feature
wrong_usages.feature