f66ab63d64 | oobabooga | 2024-07-23 14:06:34 -07:00 | Bump transformers to 4.43
3ee682208c | oobabooga | 2024-07-22 19:53:56 -07:00 | Revert "Bump hqq from 0.1.7.post3 to 0.1.8 (#6238)"
    This reverts commit 1c3671699c.
aa809e420e | oobabooga | 2024-07-22 18:05:11 -07:00 | Bump llama-cpp-python to 0.2.83, add back tensorcore wheels
    Also add back the progress bar patch
11bbf71aa5 | oobabooga | 2024-07-22 16:19:41 -03:00 | Bump back llama-cpp-python (#6257)
0f53a736c1 | oobabooga | 2024-07-22 12:02:25 -07:00 | Revert the llama-cpp-python update
a687f950ba | oobabooga | 2024-07-22 11:54:35 -07:00 | Remove the tensorcores llama.cpp wheels
    They are not faster than the default wheels anymore and they use a lot of space.
7d2449f8b0 | oobabooga | 2024-07-22 11:49:20 -07:00 | Bump llama-cpp-python to 0.2.82.3 (unofficial build)
1c3671699c | dependabot[bot] | 2024-07-20 18:20:26 -03:00 | Bump hqq from 0.1.7.post3 to 0.1.8 (#6238)
b19d239a60 | oobabooga | 2024-07-12 20:16:11 -07:00 | Bump flash-attention to 2.6.1
063d2047dd | dependabot[bot] | 2024-07-11 19:56:42 -03:00 | Update accelerate requirement from ==0.31.* to ==0.32.* (#6217)
01e4721da7 | oobabooga | 2024-07-11 12:33:46 -07:00 | Bump ExLlamaV2 to 0.1.7
fa075e41f4 | oobabooga | 2024-07-10 06:03:24 -07:00 | Bump llama-cpp-python to 0.2.82
7e22eaa36c | oobabooga | 2024-07-02 20:29:35 -07:00 | Bump llama-cpp-python to 0.2.81
9660f6f10e | dependabot[bot] | 2024-06-27 21:13:02 -03:00 | Bump aqlm[cpu,gpu] from 1.1.5 to 1.1.6 (#6157)
a5df8f4e3c | dependabot[bot] | 2024-06-27 21:12:39 -03:00 | Bump jinja2 from 3.1.2 to 3.1.4 (#6172)
c6cec0588c | dependabot[bot] | 2024-06-27 21:12:02 -03:00 | Update accelerate requirement from ==0.30.* to ==0.31.* (#6156)
66090758df | oobabooga | 2024-06-27 11:26:02 -07:00 | Bump transformers to 4.42 (for gemma support)
602b455507 | oobabooga | 2024-06-24 20:26:38 -07:00 | Bump llama-cpp-python to 0.2.79
7db8b3b532 | oobabooga | 2024-06-24 05:38:11 -07:00 | Bump ExLlamaV2 to 0.1.6
125bb7b03b | oobabooga | 2024-06-23 19:54:28 -07:00 | Revert "Bump llama-cpp-python to 0.2.78"
    This reverts commit b6eaf7923e.
b6eaf7923e | oobabooga | 2024-06-14 21:22:09 -07:00 | Bump llama-cpp-python to 0.2.78
9420973b62 | oobabooga | 2024-06-14 16:42:03 -03:00 | Downgrade PyTorch to 2.2.2 (#6124)
fdd8fab9cf | dependabot[bot] | 2024-06-14 13:46:35 -03:00 | Bump hqq from 0.1.7.post2 to 0.1.7.post3 (#6090)
8930bfc5f4 | oobabooga | 2024-06-13 20:38:31 -03:00 | Bump PyTorch, ExLlamaV2, flash-attention (#6122)
bd7cc4234d | oobabooga | 2024-05-21 13:32:02 -03:00 | Backend cleanup (#6025)
2de586f586 | dependabot[bot] | 2024-05-19 20:03:18 -03:00 | Update accelerate requirement from ==0.27.* to ==0.30.* (#5989)
0d90b3a25c | oobabooga | 2024-05-18 05:26:26 -07:00 | Bump llama-cpp-python to 0.2.75
9557f49f2f | oobabooga | 2024-05-11 10:53:19 -07:00 | Bump llama-cpp-python to 0.2.73
e61055253c | oobabooga | 2024-05-03 04:31:22 -07:00 | Bump llama-cpp-python to 0.2.69, add --flash-attn option
0476f9fe70 | oobabooga | 2024-05-01 16:20:50 -07:00 | Bump ExLlamaV2 to 0.0.20
ae0f28530c | oobabooga | 2024-05-01 08:40:50 -07:00 | Bump llama-cpp-python to 0.2.68
51fb766bea | oobabooga | 2024-04-30 09:11:31 -03:00 | Add back my llama-cpp-python wheels, bump to 0.2.65 (#5964)
9b623b8a78 | oobabooga | 2024-04-23 23:17:05 -03:00 | Bump llama-cpp-python to 0.2.64, use official wheels (#5921)
0877741b03 | Ashley Kleynhans | 2024-04-19 19:04:40 -03:00 | Bumped ExLlamaV2 to version 0.0.19 to resolve #5851 (#5880)
b30bce3b2f | oobabooga | 2024-04-18 16:19:31 -07:00 | Bump transformers to 4.40
a0c69749e6 | Philipp Emanuel Weidmann | 2024-04-18 15:05:00 -03:00 | Revert sse-starlette version bump because it breaks API request cancellation (#5873)
597556cb77 | dependabot[bot] | 2024-04-11 18:54:05 -03:00 | Bump sse-starlette from 1.6.5 to 2.1.0 (#5831)
3e3a7c4250 | oobabooga | 2024-04-11 14:15:34 -07:00 | Bump llama-cpp-python to 0.2.61 & fix the crash
5f5ceaf025 | oobabooga | 2024-04-11 13:24:57 -07:00 | Revert "Bump llama-cpp-python to 0.2.61"
    This reverts commit 3ae61c0338.
bd71a504b8 | dependabot[bot] | 2024-04-11 02:24:53 -03:00 | Update gradio requirement from ==4.25.* to ==4.26.* (#5832)
3ae61c0338 | oobabooga | 2024-04-10 21:39:46 -07:00 | Bump llama-cpp-python to 0.2.61
ed4001e324 | oobabooga | 2024-04-08 18:05:16 -07:00 | Bump ExLlamaV2 to 0.0.18
f6828de3f2 | oobabooga | 2024-04-07 07:00:12 -07:00 | Downgrade llama-cpp-python to 0.2.56
39ff9c9dcf | Jared Van Bortel | 2024-04-06 23:02:20 -03:00 | requirements: add psutil (#5819)
dfb01f9a63 | oobabooga | 2024-04-06 18:32:36 -07:00 | Bump llama-cpp-python to 0.2.60
a4c67e1974 | dependabot[bot] | 2024-04-05 13:26:49 -03:00 | Bump aqlm[cpu,gpu] from 1.1.2 to 1.1.3 (#5790)
14f6194211 | oobabooga | 2024-04-05 09:22:44 -07:00 | Bump Gradio to 4.25
d423021a48 | oobabooga | 2024-04-04 20:23:58 -03:00 | Remove CTransformers support (#5807)
3952560da8 | oobabooga | 2024-04-04 11:20:48 -07:00 | Bump llama-cpp-python to 0.2.59
70c58b5fc2 | oobabooga | 2024-03-30 21:08:26 -07:00 | Bump ExLlamaV2 to 0.0.17