Commit Graph

91 Commits

Author SHA1 Message Date
oobabooga
c0655475ae Add cache_8bit option 2023-11-02 11:23:04 -07:00
Mehran Ziadloo
aaf726dbfb Updating the shared settings object when loading a model (#4425) 2023-11-01 01:29:57 -03:00
Abhilash Majumder
778a010df8 Intel Gpu support initialization (#4340) 2023-10-26 23:39:51 -03:00
oobabooga
92691ee626 Disable trust_remote_code by default 2023-10-23 09:57:44 -07:00
oobabooga
df90d03e0b Replace --mul_mat_q with --no_mul_mat_q 2023-10-22 12:23:03 -07:00
oobabooga
773c17faec Fix a warning 2023-10-10 20:53:38 -07:00
oobabooga
3a9d90c3a1 Download models with 4 threads by default 2023-10-10 13:52:10 -07:00
cal066
cc632c3f33 AutoAWQ: initial support (#3999) 2023-10-05 13:19:18 -03:00
oobabooga
b6fe6acf88 Add threads_batch parameter 2023-10-01 21:28:00 -07:00
jllllll
41a2de96e5 Bump llama-cpp-python to 0.2.11 2023-10-01 18:08:10 -05:00
oobabooga
f2d82f731a Add recommended NTKv1 alpha values 2023-09-29 13:48:38 -07:00
oobabooga
96da2e1c0d Read more metadata (config.json & quantize_config.json) 2023-09-29 06:14:16 -07:00
oobabooga
f931184b53 Increase truncation limits to 32768 2023-09-28 19:28:22 -07:00
StoyanStAtanasov
7e6ff8d1f0 Enable NUMA feature for llama_cpp_python (#4040) 2023-09-26 22:05:00 -03:00
oobabooga
1ca54faaf0 Improve --multi-user mode 2023-09-26 06:42:33 -07:00
oobabooga
d0d221df49 Add --use_fast option (closes #3741) 2023-09-25 12:19:43 -07:00
oobabooga
b973b91d73 Automatically filter by loader (closes #4072) 2023-09-25 10:28:35 -07:00
oobabooga
36c38d7561 Add disable_exllama to Transformers loader (for GPTQ LoRA training) 2023-09-24 20:03:11 -07:00
oobabooga
7a3ca2c68f Better detect EXL2 models 2023-09-23 13:05:55 -07:00
oobabooga
37e2980e05 Recommend mul_mat_q for llama.cpp 2023-09-17 08:27:11 -07:00
kalomaze
7c9664ed35 Allow full model URL to be used for download (#3919) 2023-09-16 10:06:13 -03:00
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Johan
fdcee0c215 Allow custom tokenizer for llamacpp_HF loader (#3941) 2023-09-15 12:38:38 -03:00
oobabooga
9331ab4798 Read GGUF metadata (#3873) 2023-09-11 18:49:30 -03:00
oobabooga
ed86878f02 Remove GGML support 2023-09-11 07:44:00 -07:00
oobabooga
4affa08821 Do not impose instruct mode while loading models 2023-09-02 11:31:33 -07:00
missionfloyd
787219267c Allow downloading single file from UI (#3737) 2023-08-29 23:32:36 -03:00
oobabooga
0c9e818bb8 Update truncation length based on max_seq_len/n_ctx 2023-08-26 23:10:45 -07:00
oobabooga
3361728da1 Change some comments 2023-08-26 22:24:44 -07:00
oobabooga
5c7d8bfdfd Detect CodeLlama settings 2023-08-25 07:06:57 -07:00
oobabooga
52ab2a6b9e Add rope_freq_base parameter for CodeLlama 2023-08-25 06:55:15 -07:00
oobabooga
d6934bc7bc Implement CFG for ExLlama_HF (#3666) 2023-08-24 16:27:36 -03:00
oobabooga
ee964bcce9 Update a comment about RoPE scaling 2023-08-20 07:01:43 -07:00
oobabooga
7cba000421 Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q (#3610) 2023-08-18 12:03:34 -03:00
Chris Lefever
0230fa4e9c Add the --disable_exllama option for AutoGPTQ 2023-08-12 02:26:58 -04:00
cal066
7a4fcee069 Add ctransformers support (#3313) 2023-08-11 14:41:33 -03:00
Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
jllllll
d6765bebc4 Update installation documentation 2023-08-10 00:53:48 -05:00
jllllll
bee73cedbd Streamline GPTQ-for-LLaMa support 2023-08-09 23:42:34 -05:00
oobabooga
d8fb506aff Add RoPE scaling support for transformers (including dynamic NTK) 2023-08-08 21:25:48 -07:00
https://github.com/huggingface/transformers/pull/24653
Gennadij
0e78f3b4d4 Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa (#3494) 2023-08-08 00:31:11 -03:00
oobabooga
412f6ff9d3 Change alpha_value maximum and step 2023-08-07 06:08:51 -07:00
oobabooga
65aa11890f Refactor everything (#3481) 2023-08-06 21:49:27 -03:00