Commit Graph

61 Commits

Author | SHA1 | Message | Date
oobabooga | d6934bc7bc | Implement CFG for ExLlama_HF (#3666) | 2023-08-24 16:27:36 -03:00
oobabooga | ee964bcce9 | Update a comment about RoPE scaling | 2023-08-20 07:01:43 -07:00
oobabooga | 7cba000421 | Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q (#3610) | 2023-08-18 12:03:34 -03:00
Chris Lefever | 0230fa4e9c | Add the --disable_exllama option for AutoGPTQ | 2023-08-12 02:26:58 -04:00
cal066 | 7a4fcee069 | Add ctransformers support (#3313) | 2023-08-11 14:41:33 -03:00
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
    Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
jllllll | d6765bebc4 | Update installation documentation | 2023-08-10 00:53:48 -05:00
jllllll | bee73cedbd | Streamline GPTQ-for-LLaMa support | 2023-08-09 23:42:34 -05:00
oobabooga | d8fb506aff | Add RoPE scaling support for transformers (including dynamic NTK) | 2023-08-08 21:25:48 -07:00
    https://github.com/huggingface/transformers/pull/24653
Gennadij | 0e78f3b4d4 | Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa (#3494) | 2023-08-08 00:31:11 -03:00
oobabooga | 412f6ff9d3 | Change alpha_value maximum and step | 2023-08-07 06:08:51 -07:00
oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00