Mirror of https://github.com/ggerganov/llama.cpp.git
Commit 6c59567689:
* server : (tests) don't use thread for capturing stdout/stderr
* test: bump openai to 1.55.2
* bump openai to 1.55.3
aiohttp~=3.9.3
pytest~=8.3.3
huggingface_hub~=0.23.2
numpy~=1.26.4
openai~=1.55.3
prometheus-client~=0.20.0
requests~=2.32.3
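
The pinned packages above appear to cover HTTP clients (aiohttp, requests), the test runner (pytest), the OpenAI-compatible client (openai), model downloads (huggingface_hub), numerics (numpy), and Prometheus metrics handling (prometheus-client). A minimal setup sketch, assuming a standard pip/pytest workflow (the paths and test invocation here are illustrative, not taken from the repository):

    python3 -m venv venv
    ./venv/bin/pip install -r requirements.txt
    ./venv/bin/pytest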