Commit Graph

306 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| oobabooga | 0e54a09bcb | Remove exllamav1 loaders (#5128) | 2023-12-31 01:57:06 -03:00 |
| oobabooga | 29b0f14d5a | Bump llama-cpp-python to 0.2.25 (#5077) | 2023-12-25 12:36:32 -03:00 |
| Casper | 92d5e64a82 | Bump AutoAWQ to 0.1.8 (#5061) | 2023-12-24 14:27:34 -03:00 |
| oobabooga | d76b00c211 | Pin lm_eval package version | 2023-12-24 09:22:31 -08:00 |
| oobabooga | f0f6d9bdf9 | Add HQQ back & update version (reverts commit 2289e9031e) | 2023-12-20 07:46:09 -08:00 |
| oobabooga | 258c695ead | Add rich requirement | 2023-12-19 21:58:36 -08:00 |
| oobabooga | 2289e9031e | Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993) | 2023-12-19 21:33:49 -08:00 |
| oobabooga | de138b8ba6 | Add llama-cpp-python wheels with tensor cores support (#5003) | 2023-12-19 17:30:53 -03:00 |
| oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00 |
| dependabot[bot] | 9e48e50428 | Update optimum requirement from ==1.15.* to ==1.16.* (#4986) | 2023-12-18 21:43:29 -03:00 |
| Water | 674be9a09a | Add HQQ quant loader (#4888) (co-authored-by: oobabooga) | 2023-12-18 21:23:16 -03:00 |
| oobabooga | 12690d3ffc | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00 |
| oobabooga | d2ed0a06bf | Bump ExLlamav2 to 0.0.11 (adds Mixtral support) | 2023-12-16 16:34:15 -08:00 |
| oobabooga | 7de10f4c8e | Bump AutoGPTQ to 0.6.0 (adds Mixtral support) | 2023-12-15 06:18:49 -08:00 |
| oobabooga | 85816898f9 | Bump llama-cpp-python to 0.2.23 (including Linux ROCm and macOS >= 12) (#4930) | 2023-12-15 01:58:08 -03:00 |
| oobabooga | 8acecf3aee | Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) | 2023-12-14 09:41:36 -08:00 |
| oobabooga | 21a5bfc67f | Relax optimum requirement | 2023-12-12 14:05:58 -08:00 |
| dependabot[bot] | 7a987417bb | Bump optimum from 1.14.0 to 1.15.0 (#4885) | 2023-12-12 02:32:19 -03:00 |
| dependabot[bot] | a17750db91 | Update peft requirement from ==0.6.* to ==0.7.* (#4886) | 2023-12-12 02:31:30 -03:00 |
| dependabot[bot] | a8a92c6c87 | Update transformers requirement from ==4.35.* to ==4.36.* (#4882) | 2023-12-12 02:30:25 -03:00 |
| 俞航 | ac9f154bcc | Bump exllamav2 from 0.0.8 to 0.0.10 & fix code change (#4782) | 2023-12-04 21:15:05 -03:00 |
| dependabot[bot] | 801ba87c68 | Update accelerate requirement from ==0.24.* to ==0.25.* (#4810) | 2023-12-04 20:36:01 -03:00 |
| dependabot[bot] | 2e83844f35 | Bump safetensors from 0.4.0 to 0.4.1 (#4750) | 2023-12-03 22:50:10 -03:00 |
| oobabooga | 0589ff5b12 | Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701) | 2023-11-21 20:59:39 -03:00 |
| oobabooga | fb124ab6e2 | Bump to flash-attention 2.3.4 + switch to GitHub Actions wheels on Windows (#4700) | 2023-11-21 15:07:17 -08:00 |
| oobabooga | 4b84e45116 | Use +cpuavx2 instead of +cpuavx | 2023-11-20 11:46:38 -08:00 |
| oobabooga | d7f1bc102b | Fix "Illegal instruction" bug in llama.cpp CPU-only version (#4677) | 2023-11-20 16:36:38 -03:00 |
| oobabooga | e0ca49ed9c | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637) (updates requirements*.txt; adds back seed) | 2023-11-18 00:31:27 -03:00 |
| oobabooga | 9d6f79db74 | Revert "Bump llama-cpp-python to 0.2.18 (#4611)" (reverts commit 923c8e25fb) | 2023-11-17 05:14:25 -08:00 |
| oobabooga | 923c8e25fb | Bump llama-cpp-python to 0.2.18 (#4611) | 2023-11-16 22:55:14 -03:00 |
| Casper | 61f429563e | Bump AutoAWQ to 0.1.7 (#4620) | 2023-11-16 17:08:08 -03:00 |
| Anton Rogozin | 8a9d5a0cea | Update AutoGPTQ to a newer version to fix a LoRA-applying error (#4604) | 2023-11-15 20:23:22 -03:00 |
| oobabooga | dea90c7b67 | Bump exllamav2 to 0.0.8 | 2023-11-13 10:34:10 -08:00 |
| oobabooga | 2af7e382b1 | Revert "Bump llama-cpp-python to 0.2.14" (reverts commit 5c3eb22ce6; the new version has issues: https://github.com/oobabooga/text-generation-webui/issues/4540, https://github.com/abetlen/llama-cpp-python/issues/893) | 2023-11-09 10:02:13 -08:00 |
| oobabooga | 5c3eb22ce6 | Bump llama-cpp-python to 0.2.14 | 2023-11-07 14:20:43 -08:00 |
| dependabot[bot] | fd893baba1 | Bump optimum from 1.13.1 to 1.14.0 (#4492) | 2023-11-07 00:13:41 -03:00 |
| dependabot[bot] | 18739c8b3a | Update peft requirement from ==0.5.* to ==0.6.* (#4494) | 2023-11-07 00:12:59 -03:00 |
| Orang | 2081f43ac2 | Bump transformers to 4.35.* (#4474) | 2023-11-04 14:00:24 -03:00 |
| Casper | cfbd108826 | Bump AWQ to 0.1.6 (#4470) | 2023-11-04 13:09:41 -03:00 |
| Orang | 6b7fa45cc3 | Update exllamav2 version (#4417) | 2023-10-31 19:12:14 -03:00 |
| Casper | 41e159e88f | Bump AutoAWQ to v0.1.5 (#4410) | 2023-10-31 19:11:22 -03:00 |
| James Braza | f481ce3dd8 | Add platform_system to autoawq (#4390) | 2023-10-27 01:02:28 -03:00 |
| oobabooga | 839a87bac8 | Fix is_ccl_available & is_xpu_available imports | 2023-10-26 20:27:04 -07:00 |
| oobabooga | 6086768309 | Bump gradio to 3.50.* | 2023-10-22 21:21:26 -07:00 |
| Brian Dashore | 3345da2ea4 | Add flash-attention 2 for Windows (#4235) | 2023-10-21 03:46:23 -03:00 |
| mjbogusz | 8f6405d2fa | Python 3.11, 3.9, 3.8 support (#4233) (co-authored-by: oobabooga) | 2023-10-20 21:13:33 -03:00 |
| Johan | 2706394bfe | Relax numpy version requirements (#4291) | 2023-10-15 12:05:06 -03:00 |
| jllllll | 1f5a2c5597 | Use PyTorch 2.1 exllama wheels (#4285) | 2023-10-14 15:27:59 -03:00 |
| oobabooga | cd1cad1b47 | Bump exllamav2 | 2023-10-14 11:23:07 -07:00 |
| oobabooga | fae8062d39 | Bump to latest gradio (3.47) (#4258) | 2023-10-10 22:20:49 -03:00 |
| dependabot[bot] | 520cbb2ab1 | Bump safetensors from 0.3.2 to 0.4.0 (#4249) | 2023-10-10 17:41:09 -03:00 |
| jllllll | 0eda9a0549 | Use GPTQ wheels compatible with PyTorch 2.1 (#4210) | 2023-10-07 00:35:41 -03:00 |
| oobabooga | d33facc9fe | Bump to pytorch 11.8 (#4209) | 2023-10-07 00:23:49 -03:00 |
| Casper | 0aa853f575 | Bump AutoAWQ to v0.1.4 (#4203) | 2023-10-06 15:30:01 -03:00 |
| oobabooga | 7d3201923b | Bump AutoAWQ | 2023-10-05 15:14:15 -07:00 |
| turboderp | 8a98646a21 | Bump ExLlamaV2 to 0.0.5 (#4186) | 2023-10-05 19:12:22 -03:00 |
| cal066 | cc632c3f33 | AutoAWQ: initial support (#3999) | 2023-10-05 13:19:18 -03:00 |
| oobabooga | 3f56151f03 | Bump to transformers 4.34 | 2023-10-05 08:55:14 -07:00 |
| oobabooga | ae4ba3007f | Add grammar to transformers and _HF loaders (#4091) | 2023-10-05 10:01:36 -03:00 |
| jllllll | 41a2de96e5 | Bump llama-cpp-python to 0.2.11 | 2023-10-01 18:08:10 -05:00 |
| oobabooga | 92a39c619b | Add Mistral support | 2023-09-28 15:41:03 -07:00 |
| oobabooga | f46ba12b42 | Add flash-attn wheels for Linux | 2023-09-28 14:45:52 -07:00 |
| jllllll | 2bd23c29cb | Bump llama-cpp-python to 0.2.7 (#4110) | 2023-09-27 23:45:36 -03:00 |
| jllllll | 13a54729b1 | Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) | 2023-09-26 21:36:14 -03:00 |
| oobabooga | 2e7b6b0014 | Create alternative requirements.txt with AMD and Metal wheels (#4052) | 2023-09-24 09:58:29 -03:00 |
| oobabooga | 05c4a4f83c | Bump exllamav2 | 2023-09-21 14:56:01 -07:00 |
| jllllll | b7c55665c1 | Bump llama-cpp-python to 0.2.6 (#3982) | 2023-09-18 14:08:37 -03:00 |
| dependabot[bot] | 661bfaac8e | Update accelerate from ==0.22.* to ==0.23.* (#3981) | 2023-09-17 22:42:12 -03:00 |
| Thireus ☠ | 45335fa8f4 | Bump ExLlamav2 to v0.0.2 (#3970) | 2023-09-17 19:24:40 -03:00 |
| dependabot[bot] | eb9ebabec7 | Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) | 2023-09-13 02:13:51 -03:00 |
| cal066 | a4e4e887d7 | Bump ctransformers to 0.2.27 (#3893) | 2023-09-13 00:37:31 -03:00 |
| jllllll | 1a5d68015a | Bump llama-cpp-python to 0.1.85 (#3887) | 2023-09-12 19:41:41 -03:00 |
| oobabooga | 833bc59f1b | Remove ninja from requirements.txt (it is installed automatically with exllamav2) | 2023-09-12 15:12:56 -07:00 |
| dependabot[bot] | 0efbe5ef76 | Bump optimum from 1.12.0 to 1.13.1 (#3872) | 2023-09-12 15:53:21 -03:00 |
| oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00 |
| oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00 |
| jllllll | 859b4fd737 | Bump exllama to 0.1.17 (#3847) | 2023-09-11 01:13:14 -03:00 |
| dependabot[bot] | 1d6b384828 | Update transformers requirement from ==4.32.* to ==4.33.* (#3865) | 2023-09-11 01:12:22 -03:00 |
| jllllll | e8f234ca8f | Bump llama-cpp-python to 0.1.84 (#3854) | 2023-09-11 01:11:33 -03:00 |
| oobabooga | 66d5caba1b | Pin pydantic version (closes #3850) | 2023-09-10 21:09:04 -07:00 |
| oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training; see https://github.com/oobabooga/text-generation-webui/issues/3655) | 2023-08-31 08:45:38 -07:00 |
| jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00 |
| jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00 |
| VishwasKukreti | a9a1784420 | Update accelerate to 0.22 in requirements.txt (#3725) | 2023-08-29 17:47:37 -03:00 |
| jllllll | fe1f7c6513 | Bump ctransformers to 0.2.25 (#3740) | 2023-08-29 17:24:36 -03:00 |
| jllllll | 22b2a30ec7 | Bump llama-cpp-python to 0.1.82 (#3730) | 2023-08-28 18:02:24 -03:00 |
| jllllll | 7d3a0b5387 | Bump llama-cpp-python to 0.1.81 (#3716) | 2023-08-27 22:38:41 -03:00 |
| oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00 |
| jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00 |
| oobabooga | 6e6431e73f | Update requirements.txt | 2023-08-26 01:07:28 -07:00 |
| cal066 | 960980247f | ctransformers: GGUF support (#3685) | 2023-08-25 11:33:04 -03:00 |
| oobabooga | 26c5e5e878 | Bump autogptq | 2023-08-24 19:23:08 -07:00 |
| oobabooga | 2b675533f7 | Un-bump safetensors (the newest version doesn't work on Windows yet) | 2023-08-23 14:36:03 -07:00 |
| oobabooga | 335c49cc7e | Bump peft and transformers | 2023-08-22 13:14:59 -07:00 |
| tkbit | df165fe6c4 | Use numpy==1.24 in requirements.txt (#3651) (the whisper extension needs numpy 1.24 to work properly) | 2023-08-22 16:55:17 -03:00 |
| cal066 | e042bf8624 | ctransformers: add mlock and no-mmap options (#3649) | 2023-08-22 16:51:34 -03:00 |
| oobabooga | b96fd22a81 | Refactor the training tab (#3619) | 2023-08-18 16:58:38 -03:00 |
| jllllll | 1a71ab58a9 | Bump llama_cpp_python_cuda to 0.1.78 (#3614) | 2023-08-18 12:04:01 -03:00 |
| oobabooga | 6170b5ba31 | Bump llama-cpp-python | 2023-08-17 21:41:02 -07:00 |
| oobabooga | ccfc02a28d | Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) | 2023-08-14 15:15:55 -03:00 |
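Most of the commits above are edits to the project's requirements files, and they use three recurring patterns: exact pins ("Pin pydantic version"), wildcard range updates (dependabot's "==4.35.* to ==4.36.*"), and environment markers ("Adding platform_system to autoawq"). A minimal sketch of what such a requirements.txt fragment looks like; the package versions below are illustrative, not the repository's actual pins:

```text
# Exact pin: hold a package at a known-good release
optimum==1.16.0

# Wildcard pin: accept any patch release within a minor version
transformers==4.36.*

# Environment marker (PEP 508): install only on matching platforms
autoawq==0.1.8; platform_system == "Linux"
```

Wildcard pins let dependabot-style bumps land automatically within a minor series, while exact pins and platform markers are used when a specific release or platform-specific wheel is known to work.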