date | commit | author | subject
2024-03-26 16:32:20 -03:00 | 2a92a842ce | oobabooga | Bump gradio to 4.23 (#5758)
2024-03-10 16:13:29 -07:00 | a102c704f5 | oobabooga | Add numba to requirements.txt
2024-03-10 09:41:17 -07:00 | b3ade5832b | oobabooga | Keep AQLM only for Linux (fails to install on Windows)
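The Linux-only constraint above is the kind of change normally expressed with a PEP 508 environment marker in requirements.txt. A minimal sketch, assuming the package is published as `aqlm` (version and extras omitted; the exact line in the commit may differ):

```
# Installed only on Linux; pip skips this line on Windows and macOS.
aqlm; platform_system == "Linux"
```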
2024-03-10 09:07:27 -07:00 | 67b24b0b88 | oobabooga | Bump llama-cpp-python to 0.2.56
2024-03-10 08:30:53 -07:00 | 763f9beb7e | oobabooga | Bump bitsandbytes to 0.43, add official Windows wheel
2024-03-08 14:54:56 -08:00 | 9271e80914 | oobabooga | Add back AutoAWQ for Windows
    https://github.com/casper-hansen/AutoAWQ/issues/377#issuecomment-1986440695
2024-03-08 17:36:28 -03:00 | d0663bae31 | oobabooga | Bump AutoAWQ to 0.2.3 (Linux only) (#5658)
2024-03-08 17:30:36 -03:00 | 0e6eb7c27a | oobabooga | Add AQLM support (transformers loader) (#5466)
2024-03-06 21:08:29 -08:00 | bde7f00cae | oobabooga | Change the exllamav2 version number
2024-03-06 23:02:25 -03:00 | 2ec1d96c91 | oobabooga | Add cache_4bit option for ExLlamaV2 (#5645)
2024-03-06 11:52:46 -03:00 | 2174958362 | oobabooga | Revert gradio to 3.50.2 (#5640)
2024-03-05 02:56:37 -08:00 | 03f03af535 | oobabooga | Revert "Update peft requirement from ==0.8.* to ==0.9.* (#5626)"
    This reverts commit 72a498ddd4.
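Revert entries like the one above (and 3f42e3292a, 9d6f79db74, 2af7e382b1 further down) are what `git revert` produces: a new commit that undoes the named commit while keeping history intact. For example:

```
# Create a new commit that reverses the changes of 72a498ddd4
git revert 72a498ddd4
```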
2024-03-05 02:35:04 -08:00 | ae12d045ea | oobabooga | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2024-03-05 07:34:32 -03:00 | 72a498ddd4 | dependabot[bot] | Update peft requirement from ==0.8.* to ==0.9.* (#5626)
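The dependabot bumps throughout this log are one-line edits to a wildcard pin in requirements.txt; `==0.9.*` accepts any 0.9.x patch release while blocking 0.10. The change behind this entry amounts to:

```
# Before: any 0.8.x release of peft
peft==0.8.*
# After: any 0.9.x release of peft
peft==0.9.*
```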
2024-03-05 02:33:51 -08:00 | 1437f757a1 | oobabooga | Bump HQQ to 0.1.5
2024-03-05 07:32:28 -03:00 | 63a1d4afc8 | oobabooga | Bump gradio to 4.19 (#5522)
2024-03-04 04:46:39 -03:00 | 527ba98105 | oobabooga | Do not install extensions requirements by default (#5621)
2024-03-03 19:40:32 -03:00 | 8bd4960d05 | oobabooga | Update PyTorch to 2.2 (also update flash-attn to 2.5.6) (#5618)
2024-03-03 13:19:27 -08:00 | 70047a5c57 | oobabooga | Bump bitsandbytes to 0.42.0 on Windows
2024-03-03 12:14:48 -08:00 | 24e86bb21b | oobabooga | Bump llama-cpp-python to 0.2.55
2024-03-03 10:49:28 -08:00 | 314e42fd98 | oobabooga | Fix transformers requirement
2024-02-26 20:51:39 -03:00 | dfdf6eb5b4 | dependabot[bot] | Bump hqq from 0.1.3 to 0.1.3.post1 (#5582)
2024-02-26 15:05:53 -08:00 | 332957ffec | oobabooga | Bump llama-cpp-python to 0.2.52
2024-02-25 20:15:13 -03:00 | 21acf504ce | Bartowski | Bump transformers to 4.38 for gemma compatibility (#5575)
2024-02-24 21:34:11 -08:00 | c07dc56736 | oobabooga | Bump llama-cpp-python to 0.2.50
2024-02-24 18:35:42 -08:00 | 98580cad8e | oobabooga | Bump exllamav2 to 0.0.14
2024-02-22 19:48:49 -08:00 | 527f2652af | oobabooga | Bump llama-cpp-python to 0.2.47
2024-02-22 19:48:04 -08:00 | 3f42e3292a | oobabooga | Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)"
    This reverts commit d04fef6a07.
2024-02-19 19:15:21 -03:00 | 5f7dbf454a | dependabot[bot] | Update optimum requirement from ==1.16.* to ==1.17.* (#5548)
2024-02-19 19:14:55 -03:00 | d04fef6a07 | dependabot[bot] | Bump autoawq from 0.1.8 to 0.2.2 (#5547)
2024-02-19 19:14:04 -03:00 | ed6ff49431 | dependabot[bot] | Update accelerate requirement from ==0.25.* to ==0.27.* (#5546)
2024-02-19 13:42:31 -08:00 | 0b2279d031 | oobabooga | Bump llama-cpp-python to 0.2.44
2024-02-16 10:47:57 -08:00 | c375c753d6 | oobabooga | Bump bitsandbytes to 0.42 (Linux only)
2024-02-15 20:40:23 -03:00 | 080f7132c0 | oobabooga | Revert gradio to 3.50.2 (#5513)
2024-02-14 21:58:24 -08:00 | ea0e1feee7 | oobabooga | Bump llama-cpp-python to 0.2.43
2024-02-14 21:57:48 -08:00 | 549f106879 | oobabooga | Bump ExLlamaV2 to v0.0.13.2
2024-02-14 23:28:26 -03:00 | 33c4ce0720 | DominikKowalczyk | Bump gradio to 4.19 (#5419)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2024-02-14 06:31:20 -08:00 | 04d8bdf929 | oobabooga | Fix ExLlamaV2 requirement on Windows
2024-02-13 16:00:06 -08:00 | 193548edce | oobabooga | Minor fix to ExLlamaV2 requirements
2024-02-13 15:49:53 -08:00 | 25b655faeb | oobabooga | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2024-02-13 15:49:20 -08:00 | f99f1fc68e | oobabooga | Bump llama-cpp-python to 0.2.42
2024-02-13 16:27:18 -03:00 | d8081e85ec | dependabot[bot] | Update peft requirement from ==0.7.* to ==0.8.* (#5446)
2024-02-13 16:26:35 -03:00 | 653b195b1e | dependabot[bot] | Update numpy requirement from ==1.24.* to ==1.26.* (#5490)
2024-02-13 16:25:02 -03:00 | 147b4cf3e0 | dependabot[bot] | Bump hqq from 0.1.2.post1 to 0.1.3 (#5489)
2024-02-13 11:22:34 -08:00 | e9fea353c5 | oobabooga | Bump llama-cpp-python to 0.2.40
2024-02-07 08:24:29 -08:00 | acea6a6669 | oobabooga | Add more exllamav2 wheels
2024-02-07 13:17:04 -03:00 | 35537ad3d1 | oobabooga | Bump exllamav2 to 0.0.13.1 (#5463)
2024-02-07 06:50:47 -08:00 | b8e25e8678 | oobabooga | Bump llama-cpp-python to 0.2.39
2024-02-04 18:40:25 -08:00 | a210999255 | oobabooga | Bump safetensors version
2024-02-01 20:09:30 -03:00 | e98d1086f5 | oobabooga | Bump llama-cpp-python to 0.2.38 (#5420)
2024-01-30 13:19:20 -03:00 | 89f6036e98 | oobabooga | Bump llama-cpp-python, remove python 3.8/3.9, cuda 11.7 (#5397)
2024-01-26 11:10:18 -03:00 | bfe2326a24 | dependabot[bot] | Bump hqq from 0.1.2 to 0.1.2.post1 (#5349)
2024-01-22 22:40:12 -03:00 | 87dc421ee8 | oobabooga | Bump exllamav2 to 0.0.12 (#5352)
2024-01-22 04:07:12 -08:00 | b9d1873301 | oobabooga | Bump transformers to 4.37
2024-01-22 08:05:59 -03:00 | b5cabb6e9d | oobabooga | Bump llama-cpp-python to 0.2.31 (#5345)
2024-01-18 14:24:17 -03:00 | 8962bb173e | oobabooga | Bump llama-cpp-python to 0.2.29 (#5307)
2024-01-17 12:37:31 -08:00 | 7916cf863b | oobabooga | Bump transformers (necessary for e055967974)
2024-01-13 21:47:13 -03:00 | d80b191b1c | Rimmy J | Add requirement jinja2==3.1.* to fix error as described in issue #5240 (#5249)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
    Co-authored-by: Rim <anonymous@mail.com>
2024-01-08 22:51:44 -03:00 | 32cdc66cf1 | dependabot[bot] | Bump hqq from 0.1.1.post1 to 0.1.2 (#5204)
2024-01-03 11:06:36 -08:00 | f6a204d7c9 | oobabooga | Bump llama-cpp-python to 0.2.26
2023-12-31 01:57:06 -03:00 | 0e54a09bcb | oobabooga | Remove exllamav1 loaders (#5128)
2023-12-25 12:36:32 -03:00 | 29b0f14d5a | oobabooga | Bump llama-cpp-python to 0.2.25 (#5077)
2023-12-24 14:27:34 -03:00 | 92d5e64a82 | Casper | Bump AutoAWQ to 0.1.8 (#5061)
2023-12-24 09:22:31 -08:00 | d76b00c211 | oobabooga | Pin lm_eval package version
2023-12-20 07:46:09 -08:00 | f0f6d9bdf9 | oobabooga | Add HQQ back & update version
    This reverts commit 2289e9031e.
2023-12-19 21:58:36 -08:00 | 258c695ead | oobabooga | Add rich requirement
2023-12-19 21:33:49 -08:00 | 2289e9031e | oobabooga | Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993)
2023-12-19 17:30:53 -03:00 | de138b8ba6 | oobabooga | Add llama-cpp-python wheels with tensor cores support (#5003)
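Wheel entries like the tensor-cores one above typically add direct-URL requirements gated by environment markers, so each platform/Python combination pulls a matching prebuilt wheel. A hedged sketch of the pattern; the package name, URL, and version below are hypothetical placeholders, not the exact wheels added by #5003:

```
# Hypothetical direct-URL wheel, selected only on Linux + CPython 3.11
llama_cpp_python_cuda_tensorcores @ https://example.com/wheels/llama_cpp_python_cuda_tensorcores-0.2.24+cu121-cp311-cp311-manylinux_2_31_x86_64.whl; platform_system == "Linux" and python_version == "3.11"
```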
2023-12-19 15:22:21 -03:00 | 0a299d5959 | oobabooga | Bump llama-cpp-python to 0.2.24 (#5001)
2023-12-18 21:43:29 -03:00 | 9e48e50428 | dependabot[bot] | Update optimum requirement from ==1.15.* to ==1.16.* (#4986)
2023-12-18 21:23:16 -03:00 | 674be9a09a | Water | Add HQQ quant loader (#4888)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-12-17 02:01:23 -03:00 | 12690d3ffc | oobabooga | Better HF grammar implementation (#4953)
2023-12-16 16:34:15 -08:00 | d2ed0a06bf | oobabooga | Bump ExLlamav2 to 0.0.11 (adds Mixtral support)
2023-12-15 06:18:49 -08:00 | 7de10f4c8e | oobabooga | Bump AutoGPTQ to 0.6.0 (adds Mixtral support)
2023-12-15 01:58:08 -03:00 | 85816898f9 | oobabooga | Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930)
2023-12-14 09:41:36 -08:00 | 8acecf3aee | oobabooga | Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924)
2023-12-12 14:05:58 -08:00 | 21a5bfc67f | oobabooga | Relax optimum requirement
2023-12-12 02:32:19 -03:00 | 7a987417bb | dependabot[bot] | Bump optimum from 1.14.0 to 1.15.0 (#4885)
2023-12-12 02:31:30 -03:00 | a17750db91 | dependabot[bot] | Update peft requirement from ==0.6.* to ==0.7.* (#4886)
2023-12-12 02:30:25 -03:00 | a8a92c6c87 | dependabot[bot] | Update transformers requirement from ==4.35.* to ==4.36.* (#4882)
2023-12-04 21:15:05 -03:00 | ac9f154bcc | 俞航 | Bump exllamav2 from 0.0.8 to 0.0.10 & fix code change (#4782)
2023-12-04 20:36:01 -03:00 | 801ba87c68 | dependabot[bot] | Update accelerate requirement from ==0.24.* to ==0.25.* (#4810)
2023-12-03 22:50:10 -03:00 | 2e83844f35 | dependabot[bot] | Bump safetensors from 0.4.0 to 0.4.1 (#4750)
2023-11-21 20:59:39 -03:00 | 0589ff5b12 | oobabooga | Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701)
2023-11-21 15:07:17 -08:00 | fb124ab6e2 | oobabooga | Bump to flash-attention 2.3.4 + switch to GitHub Actions wheels on Windows (#4700)
2023-11-20 11:46:38 -08:00 | 4b84e45116 | oobabooga | Use +cpuavx2 instead of +cpuavx
2023-11-20 16:36:38 -03:00 | d7f1bc102b | oobabooga | Fix "Illegal instruction" bug in llama.cpp CPU only version (#4677)
2023-11-18 00:31:27 -03:00 | e0ca49ed9c | oobabooga | Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637)
    * Update requirements*.txt
    * Add back seed
2023-11-17 05:14:25 -08:00 | 9d6f79db74 | oobabooga | Revert "Bump llama-cpp-python to 0.2.18 (#4611)"
    This reverts commit 923c8e25fb.
2023-11-16 22:55:14 -03:00 | 923c8e25fb | oobabooga | Bump llama-cpp-python to 0.2.18 (#4611)
2023-11-16 17:08:08 -03:00 | 61f429563e | Casper | Bump AutoAWQ to 0.1.7 (#4620)
2023-11-15 20:23:22 -03:00 | 8a9d5a0cea | Anton Rogozin | Update AutoGPTQ to a higher version to fix a LoRA-applying error (#4604)
2023-11-13 10:34:10 -08:00 | dea90c7b67 | oobabooga | Bump exllamav2 to 0.0.8
2023-11-09 10:02:13 -08:00 | 2af7e382b1 | oobabooga | Revert "Bump llama-cpp-python to 0.2.14"
    This reverts commit 5c3eb22ce6. The new version has issues:
    https://github.com/oobabooga/text-generation-webui/issues/4540
    https://github.com/abetlen/llama-cpp-python/issues/893
2023-11-07 14:20:43 -08:00 | 5c3eb22ce6 | oobabooga | Bump llama-cpp-python to 0.2.14
2023-11-07 00:13:41 -03:00 | fd893baba1 | dependabot[bot] | Bump optimum from 1.13.1 to 1.14.0 (#4492)
2023-11-07 00:12:59 -03:00 | 18739c8b3a | dependabot[bot] | Update peft requirement from ==0.5.* to ==0.6.* (#4494)
2023-11-04 14:00:24 -03:00 | 2081f43ac2 | Orang | Bump transformers to 4.35.* (#4474)
2023-11-04 13:09:41 -03:00 | cfbd108826 | Casper | Bump AWQ to 0.1.6 (#4470)
2023-10-31 19:12:14 -03:00 | 6b7fa45cc3 | Orang | Update exllamav2 version (#4417)
2023-10-31 19:11:22 -03:00 | 41e159e88f | Casper | Bump AutoAWQ to v0.1.5 (#4410)
2023-10-27 01:02:28 -03:00 | f481ce3dd8 | James Braza | Adding platform_system to autoawq (#4390)
2023-10-26 20:27:04 -07:00 | 839a87bac8 | oobabooga | Fix is_ccl_available & is_xpu_available imports
2023-10-22 21:21:26 -07:00 | 6086768309 | oobabooga | Bump gradio to 3.50.*
2023-10-21 03:46:23 -03:00 | 3345da2ea4 | Brian Dashore | Add flash-attention 2 for Windows (#4235)
2023-10-20 21:13:33 -03:00 | 8f6405d2fa | mjbogusz | Python 3.11, 3.9, 3.8 support (#4233)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-15 12:05:06 -03:00 | 2706394bfe | Johan | Relax numpy version requirements (#4291)
2023-10-14 15:27:59 -03:00 | 1f5a2c5597 | jllllll | Use Pytorch 2.1 exllama wheels (#4285)
2023-10-14 11:23:07 -07:00 | cd1cad1b47 | oobabooga | Bump exllamav2
2023-10-10 22:20:49 -03:00 | fae8062d39 | oobabooga | Bump to latest gradio (3.47) (#4258)
2023-10-10 17:41:09 -03:00 | 520cbb2ab1 | dependabot[bot] | Bump safetensors from 0.3.2 to 0.4.0 (#4249)
2023-10-07 00:35:41 -03:00 | 0eda9a0549 | jllllll | Use GPTQ wheels compatible with Pytorch 2.1 (#4210)
2023-10-07 00:23:49 -03:00 | d33facc9fe | oobabooga | Bump to PyTorch with CUDA 11.8 (#4209)
2023-10-06 15:30:01 -03:00 | 0aa853f575 | Casper | Bump AutoAWQ to v0.1.4 (#4203)
2023-10-05 15:14:15 -07:00 | 7d3201923b | oobabooga | Bump AutoAWQ
2023-10-05 19:12:22 -03:00 | 8a98646a21 | turboderp | Bump ExLlamaV2 to 0.0.5 (#4186)
2023-10-05 13:19:18 -03:00 | cc632c3f33 | cal066 | AutoAWQ: initial support (#3999)
2023-10-05 08:55:14 -07:00 | 3f56151f03 | oobabooga | Bump to transformers 4.34
2023-10-05 10:01:36 -03:00 | ae4ba3007f | oobabooga | Add grammar to transformers and _HF loaders (#4091)
2023-10-01 18:08:10 -05:00 | 41a2de96e5 | jllllll | Bump llama-cpp-python to 0.2.11
2023-09-28 15:41:03 -07:00 | 92a39c619b | oobabooga | Add Mistral support
2023-09-28 14:45:52 -07:00 | f46ba12b42 | oobabooga | Add flash-attn wheels for Linux
2023-09-27 23:45:36 -03:00 | 2bd23c29cb | jllllll | Bump llama-cpp-python to 0.2.7 (#4110)
2023-09-26 21:36:14 -03:00 | 13a54729b1 | jllllll | Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095)
2023-09-24 09:58:29 -03:00 | 2e7b6b0014 | oobabooga | Create alternative requirements.txt with AMD and Metal wheels (#4052)
2023-09-21 14:56:01 -07:00 | 05c4a4f83c | oobabooga | Bump exllamav2
2023-09-18 14:08:37 -03:00 | b7c55665c1 | jllllll | Bump llama-cpp-python to 0.2.6 (#3982)
2023-09-17 22:42:12 -03:00 | 661bfaac8e | dependabot[bot] | Update accelerate requirement from ==0.22.* to ==0.23.* (#3981)
2023-09-17 19:24:40 -03:00 | 45335fa8f4 | Thireus ☠ | Bump ExLlamav2 to v0.0.2 (#3970)
2023-09-13 02:13:51 -03:00 | eb9ebabec7 | dependabot[bot] | Bump exllamav2 from 0.0.0 to 0.0.1 (#3896)
2023-09-13 00:37:31 -03:00 | a4e4e887d7 | cal066 | Bump ctransformers to 0.2.27 (#3893)
2023-09-12 19:41:41 -03:00 | 1a5d68015a | jllllll | Bump llama-cpp-python to 0.1.85 (#3887)
2023-09-12 15:12:56 -07:00 | 833bc59f1b | oobabooga | Remove ninja from requirements.txt
    It's installed with exllamav2 automatically.
2023-09-12 15:53:21 -03:00 | 0efbe5ef76 | dependabot[bot] | Bump optimum from 1.12.0 to 1.13.1 (#3872)
2023-09-12 14:33:07 -03:00 | c2a309f56e | oobabooga | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881)
2023-09-11 07:44:00 -07:00 | ed86878f02 | oobabooga | Remove GGML support
2023-09-11 01:13:14 -03:00 | 859b4fd737 | jllllll | Bump exllama to 0.1.17 (#3847)
2023-09-11 01:12:22 -03:00 | 1d6b384828 | dependabot[bot] | Update transformers requirement from ==4.32.* to ==4.33.* (#3865)
2023-09-11 01:11:33 -03:00 | e8f234ca8f | jllllll | Bump llama-cpp-python to 0.1.84 (#3854)
2023-09-10 21:09:04 -07:00 | 66d5caba1b | oobabooga | Pin pydantic version (closes #3850)
2023-08-31 08:45:38 -07:00 | 0576691538 | oobabooga | Add optimum to requirements (for GPTQ LoRA training)
    See https://github.com/oobabooga/text-generation-webui/issues/3655
2023-08-30 13:43:38 -03:00 | 9626f57721 | jllllll | Bump exllama to 0.0.14 (#3758)
2023-08-29 22:35:59 -03:00 | dac5f4b912 | jllllll | Bump llama-cpp-python to 0.1.83 (#3745)
2023-08-29 17:47:37 -03:00 | a9a1784420 | VishwasKukreti | Update accelerate to 0.22 in requirements.txt (#3725)
2023-08-29 17:24:36 -03:00 | fe1f7c6513 | jllllll | Bump ctransformers to 0.2.25 (#3740)
2023-08-28 18:02:24 -03:00 | 22b2a30ec7 | jllllll | Bump llama-cpp-python to 0.1.82 (#3730)
2023-08-27 22:38:41 -03:00 | 7d3a0b5387 | jllllll | Bump llama-cpp-python to 0.1.81 (#3716)
2023-08-26 22:11:07 -07:00 | 7f5370a272 | oobabooga | Minor fixes/cosmetics
2023-08-26 10:40:08 -05:00 | 4a999e3bcd | jllllll | Use separate llama-cpp-python packages for GGML support
2023-08-26 01:07:28 -07:00 | 6e6431e73f | oobabooga | Update requirements.txt