Commit Graph

208 Commits

Author SHA1 Message Date
oobabooga
617cd7b705 Revert "Update accelerate requirement from ==0.33.* to ==0.34.* (#6416)"
This reverts commit 6063a66414.
2024-10-01 09:06:25 -07:00
dependabot[bot]
6063a66414 Update accelerate requirement from ==0.33.* to ==0.34.* (#6416) 2024-09-30 18:50:38 -03:00
oobabooga
9ca0cd7749 Bump llama-cpp-python to 0.3.1 2024-09-29 20:47:04 -07:00
oobabooga
01362681f2 Bump exllamav2 to 0.2.4 2024-09-29 07:42:44 -07:00
oobabooga
3b99532e02 Remove HQQ and AQLM from requirements 2024-09-28 20:34:59 -07:00
oobabooga
1a870b3ea7 Remove AutoAWQ and AutoGPTQ from requirements (no wheels available) 2024-09-28 19:38:56 -07:00
oobabooga
85994e3ef0 Bump pytorch to 2.4.1 2024-09-28 09:44:08 -07:00
oobabooga
3492e33fd5 Bump bitsandbytes to 0.44 2024-09-27 16:59:30 -07:00
oobabooga
78b8705400 Bump llama-cpp-python to 0.3.0 (except for AMD) 2024-09-27 15:06:31 -07:00
oobabooga
c5f048e912 Bump ExLlamaV2 to 0.2.2 2024-09-27 15:04:08 -07:00
oobabooga
c497a32372 Bump transformers to 4.45 2024-09-26 11:55:51 -07:00
oobabooga
a50477ec85 Apply the change to all requirements (oops) 2024-09-06 18:47:25 -07:00
oobabooga
2cb8d4c96e Bump llama-cpp-python to 0.2.90 2024-09-03 05:53:18 -07:00
oobabooga
64919e0d69 Bump flash-attention to 2.6.3 2024-09-03 05:51:46 -07:00
oobabooga
d1168afa76 Bump ExLlamaV2 to 0.2.0 2024-09-02 21:15:51 -07:00
oobabooga
1f288b4072 Bump ExLlamaV2 to 0.1.9 2024-08-22 12:40:15 -07:00
dependabot[bot]
64e16e9a46 Update accelerate requirement from ==0.32.* to ==0.33.* (#6291) 2024-08-19 23:34:10 -03:00
dependabot[bot]
68f928b5e0 Update peft requirement from ==0.8.* to ==0.12.* (#6292) 2024-08-19 23:33:56 -03:00
oobabooga
4d8c1801c2 Bump llama-cpp-python to 0.2.89 2024-08-19 17:45:01 -07:00
oobabooga
bf8187124d Bump llama-cpp-python to 0.2.88 2024-08-13 12:40:18 -07:00
oobabooga
089d5a9415 Bump llama-cpp-python to 0.2.87 2024-08-07 20:36:28 -07:00
oobabooga
81773f7f36 Bump transformers to 4.44 2024-08-06 20:07:05 -07:00
oobabooga
608545d282 Bump llama-cpp-python to 0.2.85 2024-07-31 18:44:46 -07:00
oobabooga
92ab3a9a6a Bump llama-cpp-python to 0.2.84 2024-07-28 15:13:06 -07:00
oobabooga
e4624fbc68 Merge branch 'main' into dev 2024-07-25 12:03:45 -03:00
oobabooga
3b2c23dfb5 Add AutoAWQ 0.2.6 wheels for PyTorch 2.2.2 2024-07-24 11:15:00 -07:00
oobabooga
8a5f110c14 Bump ExLlamaV2 to 0.1.8 2024-07-24 09:22:48 -07:00
oobabooga
af839d20ac Remove the AutoAWQ requirement 2024-07-23 19:38:39 -07:00
oobabooga
9d5513fda0 Remove the AutoAWQ requirement 2024-07-23 19:38:04 -07:00
oobabooga
f66ab63d64 Bump transformers to 4.43 2024-07-23 14:06:34 -07:00
oobabooga
3ee682208c Revert "Bump hqq from 0.1.7.post3 to 0.1.8 (#6238)"
This reverts commit 1c3671699c.
2024-07-22 19:53:56 -07:00
oobabooga
aa809e420e Bump llama-cpp-python to 0.2.83, add back tensorcore wheels
Also add back the progress bar patch
2024-07-22 18:05:11 -07:00
oobabooga
11bbf71aa5 Bump back llama-cpp-python (#6257) 2024-07-22 16:19:41 -03:00
oobabooga
0f53a736c1 Revert the llama-cpp-python update 2024-07-22 12:02:25 -07:00
oobabooga
a687f950ba Remove the tensorcores llama.cpp wheels
They are not faster than the default wheels anymore and they use a lot of space.
2024-07-22 11:54:35 -07:00
oobabooga
7d2449f8b0 Bump llama-cpp-python to 0.2.82.3 (unofficial build) 2024-07-22 11:49:20 -07:00
dependabot[bot]
1c3671699c Bump hqq from 0.1.7.post3 to 0.1.8 (#6238) 2024-07-20 18:20:26 -03:00
oobabooga
b19d239a60 Bump flash-attention to 2.6.1 2024-07-12 20:16:11 -07:00
dependabot[bot]
063d2047dd Update accelerate requirement from ==0.31.* to ==0.32.* (#6217) 2024-07-11 19:56:42 -03:00
oobabooga
01e4721da7 Bump ExLlamaV2 to 0.1.7 2024-07-11 12:33:46 -07:00
oobabooga
fa075e41f4 Bump llama-cpp-python to 0.2.82 2024-07-10 06:03:24 -07:00
oobabooga
7e22eaa36c Bump llama-cpp-python to 0.2.81 2024-07-02 20:29:35 -07:00
dependabot[bot]
9660f6f10e Bump aqlm[cpu,gpu] from 1.1.5 to 1.1.6 (#6157) 2024-06-27 21:13:02 -03:00
dependabot[bot]
a5df8f4e3c Bump jinja2 from 3.1.2 to 3.1.4 (#6172) 2024-06-27 21:12:39 -03:00
dependabot[bot]
c6cec0588c Update accelerate requirement from ==0.30.* to ==0.31.* (#6156) 2024-06-27 21:12:02 -03:00
oobabooga
66090758df Bump transformers to 4.42 (for gemma support) 2024-06-27 11:26:02 -07:00
oobabooga
602b455507 Bump llama-cpp-python to 0.2.79 2024-06-24 20:26:38 -07:00
oobabooga
7db8b3b532 Bump ExLlamaV2 to 0.1.6 2024-06-24 05:38:11 -07:00
oobabooga
125bb7b03b Revert "Bump llama-cpp-python to 0.2.78"
This reverts commit b6eaf7923e.
2024-06-23 19:54:28 -07:00
oobabooga
b6eaf7923e Bump llama-cpp-python to 0.2.78 2024-06-14 21:22:09 -07:00
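The bumps and reverts above are edits to pip version pins. As an illustrative sketch only (not the repository's actual requirements.txt), the state implied by the most recent commits in this log would look roughly like:

```
# Illustrative requirements fragment reconstructed from the commit messages above.
# Exact specifier styles (==X.Y.* vs ==X.Y.Z) are assumptions.
accelerate==0.33.*        # 0.34.* bump reverted in 617cd7b705
llama-cpp-python==0.3.1   # 9ca0cd7749
exllamav2==0.2.4          # 01362681f2
torch==2.4.1              # 85994e3ef0
bitsandbytes==0.44.*      # 3492e33fd5
transformers==4.45.*      # c497a32372
```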