| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| oobabooga | 258c695ead | Add rich requirement | 2023-12-19 21:58:36 -08:00 |
| oobabooga | 2289e9031e | Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993) | 2023-12-19 21:33:49 -08:00 |
| oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00 |
| dependabot[bot] | 9e48e50428 | Update optimum requirement from ==1.15.* to ==1.16.* (#4986) | 2023-12-18 21:43:29 -03:00 |
| 俞航 | 9fa3883630 | Add ROCm wheels for exllamav2 (#4973); co-authored by oobabooga | 2023-12-18 21:40:38 -03:00 |
| Water | 674be9a09a | Add HQQ quant loader (#4888); co-authored by oobabooga | 2023-12-18 21:23:16 -03:00 |
| oobabooga | 12690d3ffc | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00 |
| oobabooga | d2ed0a06bf | Bump ExLlamav2 to 0.0.11 (adds Mixtral support) | 2023-12-16 16:34:15 -08:00 |
| oobabooga | 7de10f4c8e | Bump AutoGPTQ to 0.6.0 (adds Mixtral support) | 2023-12-15 06:18:49 -08:00 |
| oobabooga | 85816898f9 | Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930) | 2023-12-15 01:58:08 -03:00 |
| oobabooga | 8acecf3aee | Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) | 2023-12-14 09:41:36 -08:00 |
| oobabooga | 21a5bfc67f | Relax optimum requirement | 2023-12-12 14:05:58 -08:00 |
| dependabot[bot] | 7a987417bb | Bump optimum from 1.14.0 to 1.15.0 (#4885) | 2023-12-12 02:32:19 -03:00 |
| dependabot[bot] | a17750db91 | Update peft requirement from ==0.6.* to ==0.7.* (#4886) | 2023-12-12 02:31:30 -03:00 |
| dependabot[bot] | a8a92c6c87 | Update transformers requirement from ==4.35.* to ==4.36.* (#4882) | 2023-12-12 02:30:25 -03:00 |
| 俞航 | ac9f154bcc | Bump exllamav2 from 0.0.8 to 0.0.10 & fix code change (#4782) | 2023-12-04 21:15:05 -03:00 |
| dependabot[bot] | 801ba87c68 | Update accelerate requirement from ==0.24.* to ==0.25.* (#4810) | 2023-12-04 20:36:01 -03:00 |
| dependabot[bot] | 2e83844f35 | Bump safetensors from 0.4.0 to 0.4.1 (#4750) | 2023-12-03 22:50:10 -03:00 |
| oobabooga | 0589ff5b12 | Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701) | 2023-11-21 20:59:39 -03:00 |
| oobabooga | 4b84e45116 | Use +cpuavx2 instead of +cpuavx | 2023-11-20 11:46:38 -08:00 |
| oobabooga | d7f1bc102b | Fix "Illegal instruction" bug in llama.cpp CPU-only version (#4677) | 2023-11-20 16:36:38 -03:00 |
| oobabooga | e0ca49ed9c | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637); updates requirements*.txt and adds back seed | 2023-11-18 00:31:27 -03:00 |
| oobabooga | 9d6f79db74 | Revert "Bump llama-cpp-python to 0.2.18 (#4611)"; reverts commit 923c8e25fb | 2023-11-17 05:14:25 -08:00 |
| oobabooga | 923c8e25fb | Bump llama-cpp-python to 0.2.18 (#4611) | 2023-11-16 22:55:14 -03:00 |
| Anton Rogozin | 8a9d5a0cea | Update AutoGPTQ to a higher version to fix a LoRA-applying error (#4604) | 2023-11-15 20:23:22 -03:00 |
| oobabooga | dea90c7b67 | Bump exllamav2 to 0.0.8 | 2023-11-13 10:34:10 -08:00 |
| oobabooga | 2af7e382b1 | Revert "Bump llama-cpp-python to 0.2.14"; reverts commit 5c3eb22ce6. The new version has issues: https://github.com/oobabooga/text-generation-webui/issues/4540, https://github.com/abetlen/llama-cpp-python/issues/893 | 2023-11-09 10:02:13 -08:00 |
| oobabooga | 5c3eb22ce6 | Bump llama-cpp-python to 0.2.14 | 2023-11-07 14:20:43 -08:00 |
| dependabot[bot] | fd893baba1 | Bump optimum from 1.13.1 to 1.14.0 (#4492) | 2023-11-07 00:13:41 -03:00 |
| dependabot[bot] | 18739c8b3a | Update peft requirement from ==0.5.* to ==0.6.* (#4494) | 2023-11-07 00:12:59 -03:00 |
| oobabooga | 40f7f37009 | Update requirements | 2023-11-04 10:12:06 -07:00 |
| dependabot[bot] | af98587580 | Update accelerate requirement from ==0.23.* to ==0.24.* (#4400) | 2023-10-27 00:46:16 -03:00 |
| oobabooga | 6086768309 | Bump gradio to 3.50.* | 2023-10-22 21:21:26 -07:00 |
| mjbogusz | 8f6405d2fa | Python 3.11, 3.9, 3.8 support (#4233); co-authored by oobabooga | 2023-10-20 21:13:33 -03:00 |
| Johan | 2706394bfe | Relax numpy version requirements (#4291) | 2023-10-15 12:05:06 -03:00 |
| jllllll | 1f5a2c5597 | Use Pytorch 2.1 exllama wheels (#4285) | 2023-10-14 15:27:59 -03:00 |
| oobabooga | cd1cad1b47 | Bump exllamav2 | 2023-10-14 11:23:07 -07:00 |
| oobabooga | fae8062d39 | Bump to latest gradio (3.47) (#4258) | 2023-10-10 22:20:49 -03:00 |
| dependabot[bot] | 520cbb2ab1 | Bump safetensors from 0.3.2 to 0.4.0 (#4249) | 2023-10-10 17:41:09 -03:00 |
| jllllll | 0eda9a0549 | Use GPTQ wheels compatible with Pytorch 2.1 (#4210) | 2023-10-07 00:35:41 -03:00 |
| turboderp | 8a98646a21 | Bump ExLlamaV2 to 0.0.5 (#4186) | 2023-10-05 19:12:22 -03:00 |
| oobabooga | 3f56151f03 | Bump to transformers 4.34 | 2023-10-05 08:55:14 -07:00 |
| oobabooga | ae4ba3007f | Add grammar to transformers and _HF loaders (#4091) | 2023-10-05 10:01:36 -03:00 |
| jllllll | 41a2de96e5 | Bump llama-cpp-python to 0.2.11 | 2023-10-01 18:08:10 -05:00 |
| oobabooga | 92a39c619b | Add Mistral support | 2023-09-28 15:41:03 -07:00 |
| jllllll | 2bd23c29cb | Bump llama-cpp-python to 0.2.7 (#4110) | 2023-09-27 23:45:36 -03:00 |
| jllllll | 13a54729b1 | Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) | 2023-09-26 21:36:14 -03:00 |
| oobabooga | 08c4fb12ae | Use bitsandbytes==0.38.1 for AMD | 2023-09-24 08:11:59 -07:00 |
| oobabooga | 2e7b6b0014 | Create alternative requirements.txt with AMD and Metal wheels (#4052) | 2023-09-24 09:58:29 -03:00 |