Commit Graph

204 Commits

Author  SHA1  Message  Date
dependabot[bot]  2de586f586  Update accelerate requirement from ==0.27.* to ==0.30.* (#5989)  2024-05-19 20:03:18 -03:00
oobabooga  0d90b3a25c  Bump llama-cpp-python to 0.2.75  2024-05-18 05:26:26 -07:00
oobabooga  9557f49f2f  Bump llama-cpp-python to 0.2.73  2024-05-11 10:53:19 -07:00
oobabooga  e61055253c  Bump llama-cpp-python to 0.2.69, add --flash-attn option  2024-05-03 04:31:22 -07:00
oobabooga  0476f9fe70  Bump ExLlamaV2 to 0.0.20  2024-05-01 16:20:50 -07:00
oobabooga  ae0f28530c  Bump llama-cpp-python to 0.2.68  2024-05-01 08:40:50 -07:00
oobabooga  51fb766bea  Add back my llama-cpp-python wheels, bump to 0.2.65 (#5964)  2024-04-30 09:11:31 -03:00
oobabooga  9b623b8a78  Bump llama-cpp-python to 0.2.64, use official wheels (#5921)  2024-04-23 23:17:05 -03:00
Ashley Kleynhans  0877741b03  Bumped ExLlamaV2 to version 0.0.19 to resolve #5851 (#5880)  2024-04-19 19:04:40 -03:00
oobabooga  b30bce3b2f  Bump transformers to 4.40  2024-04-18 16:19:31 -07:00
Philipp Emanuel Weidmann  a0c69749e6  Revert sse-starlette version bump because it breaks API request cancellation (#5873)  2024-04-18 15:05:00 -03:00
dependabot[bot]  597556cb77  Bump sse-starlette from 1.6.5 to 2.1.0 (#5831)  2024-04-11 18:54:05 -03:00
oobabooga  3e3a7c4250  Bump llama-cpp-python to 0.2.61 & fix the crash  2024-04-11 14:15:34 -07:00
oobabooga  5f5ceaf025  Revert "Bump llama-cpp-python to 0.2.61"  2024-04-11 13:24:57 -07:00
    (This reverts commit 3ae61c0338.)
dependabot[bot]  bd71a504b8  Update gradio requirement from ==4.25.* to ==4.26.* (#5832)  2024-04-11 02:24:53 -03:00
oobabooga  3ae61c0338  Bump llama-cpp-python to 0.2.61  2024-04-10 21:39:46 -07:00
oobabooga  ed4001e324  Bump ExLlamaV2 to 0.0.18  2024-04-08 18:05:16 -07:00
oobabooga  f6828de3f2  Downgrade llama-cpp-python to 0.2.56  2024-04-07 07:00:12 -07:00
Jared Van Bortel  39ff9c9dcf  requirements: add psutil (#5819)  2024-04-06 23:02:20 -03:00
oobabooga  dfb01f9a63  Bump llama-cpp-python to 0.2.60  2024-04-06 18:32:36 -07:00
dependabot[bot]  a4c67e1974  Bump aqlm[cpu,gpu] from 1.1.2 to 1.1.3 (#5790)  2024-04-05 13:26:49 -03:00
oobabooga  14f6194211  Bump Gradio to 4.25  2024-04-05 09:22:44 -07:00
oobabooga  d423021a48  Remove CTransformers support (#5807)  2024-04-04 20:23:58 -03:00
oobabooga  3952560da8  Bump llama-cpp-python to 0.2.59  2024-04-04 11:20:48 -07:00
oobabooga  70c58b5fc2  Bump ExLlamaV2 to 0.0.17  2024-03-30 21:08:26 -07:00
oobabooga  3ce0d9221b  Bump transformers to 4.39  2024-03-28 19:40:31 -07:00
dependabot[bot]  3609ea69e4  Bump aqlm[cpu,gpu] from 1.1.0 to 1.1.2 (#5728)  2024-03-26 16:36:16 -03:00
oobabooga  2a92a842ce  Bump gradio to 4.23 (#5758)  2024-03-26 16:32:20 -03:00
oobabooga  a102c704f5  Add numba to requirements.txt  2024-03-10 16:13:29 -07:00
oobabooga  b3ade5832b  Keep AQLM only for Linux (fails to install on Windows)  2024-03-10 09:41:17 -07:00
oobabooga  67b24b0b88  Bump llama-cpp-python to 0.2.56  2024-03-10 09:07:27 -07:00
oobabooga  763f9beb7e  Bump bitsandbytes to 0.43, add official Windows wheel  2024-03-10 08:30:53 -07:00
oobabooga  9271e80914  Add back AutoAWQ for Windows  2024-03-08 14:54:56 -08:00
    (https://github.com/casper-hansen/AutoAWQ/issues/377#issuecomment-1986440695)
oobabooga  d0663bae31  Bump AutoAWQ to 0.2.3 (Linux only) (#5658)  2024-03-08 17:36:28 -03:00
oobabooga  0e6eb7c27a  Add AQLM support (transformers loader) (#5466)  2024-03-08 17:30:36 -03:00
oobabooga  bde7f00cae  Change the exllamav2 version number  2024-03-06 21:08:29 -08:00
oobabooga  2ec1d96c91  Add cache_4bit option for ExLlamaV2 (#5645)  2024-03-06 23:02:25 -03:00
oobabooga  2174958362  Revert gradio to 3.50.2 (#5640)  2024-03-06 11:52:46 -03:00
oobabooga  03f03af535  Revert "Update peft requirement from ==0.8.* to ==0.9.* (#5626)"  2024-03-05 02:56:37 -08:00
    (This reverts commit 72a498ddd4.)
oobabooga  ae12d045ea  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2024-03-05 02:35:04 -08:00
dependabot[bot]  72a498ddd4  Update peft requirement from ==0.8.* to ==0.9.* (#5626)  2024-03-05 07:34:32 -03:00
oobabooga  1437f757a1  Bump HQQ to 0.1.5  2024-03-05 02:33:51 -08:00
oobabooga  63a1d4afc8  Bump gradio to 4.19 (#5522)  2024-03-05 07:32:28 -03:00
oobabooga  527ba98105  Do not install extensions requirements by default (#5621)  2024-03-04 04:46:39 -03:00
oobabooga  8bd4960d05  Update PyTorch to 2.2 (also update flash-attn to 2.5.6) (#5618)  2024-03-03 19:40:32 -03:00
oobabooga  70047a5c57  Bump bitsandbytes to 0.42.0 on Windows  2024-03-03 13:19:27 -08:00
oobabooga  24e86bb21b  Bump llama-cpp-python to 0.2.55  2024-03-03 12:14:48 -08:00
oobabooga  314e42fd98  Fix transformers requirement  2024-03-03 10:49:28 -08:00
dependabot[bot]  dfdf6eb5b4  Bump hqq from 0.1.3 to 0.1.3.post1 (#5582)  2024-02-26 20:51:39 -03:00
oobabooga  332957ffec  Bump llama-cpp-python to 0.2.52  2024-02-26 15:05:53 -08:00
Bartowski  21acf504ce  Bump transformers to 4.38 for gemma compatibility (#5575)  2024-02-25 20:15:13 -03:00
oobabooga  c07dc56736  Bump llama-cpp-python to 0.2.50  2024-02-24 21:34:11 -08:00
oobabooga  98580cad8e  Bump exllamav2 to 0.0.14  2024-02-24 18:35:42 -08:00
oobabooga  527f2652af  Bump llama-cpp-python to 0.2.47  2024-02-22 19:48:49 -08:00
oobabooga  3f42e3292a  Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)"  2024-02-22 19:48:04 -08:00
    (This reverts commit d04fef6a07.)
dependabot[bot]  5f7dbf454a  Update optimum requirement from ==1.16.* to ==1.17.* (#5548)  2024-02-19 19:15:21 -03:00
dependabot[bot]  d04fef6a07  Bump autoawq from 0.1.8 to 0.2.2 (#5547)  2024-02-19 19:14:55 -03:00
dependabot[bot]  ed6ff49431  Update accelerate requirement from ==0.25.* to ==0.27.* (#5546)  2024-02-19 19:14:04 -03:00
oobabooga  0b2279d031  Bump llama-cpp-python to 0.2.44  2024-02-19 13:42:31 -08:00
oobabooga  c375c753d6  Bump bitsandbytes to 0.42 (Linux only)  2024-02-16 10:47:57 -08:00
oobabooga  080f7132c0  Revert gradio to 3.50.2 (#5513)  2024-02-15 20:40:23 -03:00
oobabooga  ea0e1feee7  Bump llama-cpp-python to 0.2.43  2024-02-14 21:58:24 -08:00
oobabooga  549f106879  Bump ExLlamaV2 to v0.0.13.2  2024-02-14 21:57:48 -08:00
DominikKowalczyk  33c4ce0720  Bump gradio to 4.19 (#5419)  2024-02-14 23:28:26 -03:00
    (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>)
oobabooga  04d8bdf929  Fix ExLlamaV2 requirement on Windows  2024-02-14 06:31:20 -08:00
oobabooga  193548edce  Minor fix to ExLlamaV2 requirements  2024-02-13 16:00:06 -08:00
oobabooga  25b655faeb  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2024-02-13 15:49:53 -08:00
oobabooga  f99f1fc68e  Bump llama-cpp-python to 0.2.42  2024-02-13 15:49:20 -08:00
dependabot[bot]  d8081e85ec  Update peft requirement from ==0.7.* to ==0.8.* (#5446)  2024-02-13 16:27:18 -03:00
dependabot[bot]  653b195b1e  Update numpy requirement from ==1.24.* to ==1.26.* (#5490)  2024-02-13 16:26:35 -03:00
dependabot[bot]  147b4cf3e0  Bump hqq from 0.1.2.post1 to 0.1.3 (#5489)  2024-02-13 16:25:02 -03:00
oobabooga  e9fea353c5  Bump llama-cpp-python to 0.2.40  2024-02-13 11:22:34 -08:00
oobabooga  acea6a6669  Add more exllamav2 wheels  2024-02-07 08:24:29 -08:00
oobabooga  35537ad3d1  Bump exllamav2 to 0.0.13.1 (#5463)  2024-02-07 13:17:04 -03:00
oobabooga  b8e25e8678  Bump llama-cpp-python to 0.2.39  2024-02-07 06:50:47 -08:00
oobabooga  a210999255  Bump safetensors version  2024-02-04 18:40:25 -08:00
oobabooga  e98d1086f5  Bump llama-cpp-python to 0.2.38 (#5420)  2024-02-01 20:09:30 -03:00
oobabooga  89f6036e98  Bump llama-cpp-python, remove python 3.8/3.9, cuda 11.7 (#5397)  2024-01-30 13:19:20 -03:00
dependabot[bot]  bfe2326a24  Bump hqq from 0.1.2 to 0.1.2.post1 (#5349)  2024-01-26 11:10:18 -03:00
oobabooga  87dc421ee8  Bump exllamav2 to 0.0.12 (#5352)  2024-01-22 22:40:12 -03:00
oobabooga  b9d1873301  Bump transformers to 4.37  2024-01-22 04:07:12 -08:00
oobabooga  b5cabb6e9d  Bump llama-cpp-python to 0.2.31 (#5345)  2024-01-22 08:05:59 -03:00
oobabooga  8962bb173e  Bump llama-cpp-python to 0.2.29 (#5307)  2024-01-18 14:24:17 -03:00
oobabooga  7916cf863b  Bump transformers (necessary for e055967974)  2024-01-17 12:37:31 -08:00
Rimmy J  d80b191b1c  Add requirement jinja2==3.1.* to fix error as described in issue #5240 (#5249)  2024-01-13 21:47:13 -03:00
    (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>)
    (Co-authored-by: Rim <anonymous@mail.com>)
dependabot[bot]  32cdc66cf1  Bump hqq from 0.1.1.post1 to 0.1.2 (#5204)  2024-01-08 22:51:44 -03:00
oobabooga  f6a204d7c9  Bump llama-cpp-python to 0.2.26  2024-01-03 11:06:36 -08:00
oobabooga  0e54a09bcb  Remove exllamav1 loaders (#5128)  2023-12-31 01:57:06 -03:00
oobabooga  29b0f14d5a  Bump llama-cpp-python to 0.2.25 (#5077)  2023-12-25 12:36:32 -03:00
Casper  92d5e64a82  Bump AutoAWQ to 0.1.8 (#5061)  2023-12-24 14:27:34 -03:00
oobabooga  d76b00c211  Pin lm_eval package version  2023-12-24 09:22:31 -08:00
oobabooga  f0f6d9bdf9  Add HQQ back & update version  2023-12-20 07:46:09 -08:00
    (This reverts commit 2289e9031e.)
oobabooga  258c695ead  Add rich requirement  2023-12-19 21:58:36 -08:00
oobabooga  2289e9031e  Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993)  2023-12-19 21:33:49 -08:00
oobabooga  de138b8ba6  Add llama-cpp-python wheels with tensor cores support (#5003)  2023-12-19 17:30:53 -03:00
oobabooga  0a299d5959  Bump llama-cpp-python to 0.2.24 (#5001)  2023-12-19 15:22:21 -03:00
dependabot[bot]  9e48e50428  Update optimum requirement from ==1.15.* to ==1.16.* (#4986)  2023-12-18 21:43:29 -03:00
Water  674be9a09a  Add HQQ quant loader (#4888)  2023-12-18 21:23:16 -03:00
    (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>)
oobabooga  12690d3ffc  Better HF grammar implementation (#4953)  2023-12-17 02:01:23 -03:00
oobabooga  d2ed0a06bf  Bump ExLlamav2 to 0.0.11 (adds Mixtral support)  2023-12-16 16:34:15 -08:00
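Nearly all of the commits above edit version pins in the project's requirements files. As a hypothetical illustration (package names and versions taken from the messages above, not from an actual requirements file), the two pin styles these commits switch between look like this in pip requirements syntax: exact pins (`==X.Y.Z`) lock a single release, while wildcard pins (`==X.Y.*`) accept any patch release within a minor version, which is what lets Dependabot propose bumps like `==0.27.*` to `==0.30.*`.

```
# Exact pin: only this release installs
llama-cpp-python==0.2.75
exllamav2==0.0.20

# Wildcard pin: any 0.30.x patch release is acceptable
accelerate==0.30.*
jinja2==3.1.*
```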