071f0776ad | AlphaAtlas | 2023-05-14 22:58:11 -03:00 | Add llama.cpp GPU offload option (#2060)
fbcd32988e | Ahmed Said | 2023-05-02 18:25:28 -03:00 | added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>)
ea6e77df72 | oobabooga | 2023-04-07 00:15:45 -03:00 | Make the code more like PEP8 for readability (#862)
2c52310642 | oobabooga | 2023-03-31 21:18:05 -03:00 | Add --threads flag for llama.cpp
52065ae4cd | oobabooga | 2023-03-31 19:01:34 -03:00 | Add repetition_penalty
2259143fec | oobabooga | 2023-03-31 18:43:45 -03:00 | Fix llama.cpp with --no-stream
9d1dcf880a | oobabooga | 2023-03-31 14:27:01 -03:00 | General improvements
7fa5d96c22 | Thomas Antony | 2023-03-30 11:23:05 +01:00 | Update to use new llamacpp API
7a562481fa | Thomas Antony | 2023-03-30 11:22:07 +01:00 | Initial version of llamacpp_model.py