Author | Commit | Message | Date
oobabooga | c07dc56736 | Bump llama-cpp-python to 0.2.50 | 2024-02-24 21:34:11 -08:00
oobabooga | 527f2652af | Bump llama-cpp-python to 0.2.47 | 2024-02-22 19:48:49 -08:00
dependabot[bot] | 5f7dbf454a | Update optimum requirement from ==1.16.* to ==1.17.* (#5548) | 2024-02-19 19:15:21 -03:00
dependabot[bot] | ed6ff49431 | Update accelerate requirement from ==0.25.* to ==0.27.* (#5546) | 2024-02-19 19:14:04 -03:00
oobabooga | 0b2279d031 | Bump llama-cpp-python to 0.2.44 | 2024-02-19 13:42:31 -08:00
oobabooga | c375c753d6 | Bump bitsandbytes to 0.42 (Linux only) | 2024-02-16 10:47:57 -08:00
oobabooga | 080f7132c0 | Revert gradio to 3.50.2 (#5513) | 2024-02-15 20:40:23 -03:00
oobabooga | ea0e1feee7 | Bump llama-cpp-python to 0.2.43 | 2024-02-14 21:58:24 -08:00
DominikKowalczyk | 33c4ce0720 | Bump gradio to 4.19 (#5419) (Co-authored-by: oobabooga) | 2024-02-14 23:28:26 -03:00
oobabooga | 25b655faeb | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-02-13 15:49:53 -08:00
oobabooga | f99f1fc68e | Bump llama-cpp-python to 0.2.42 | 2024-02-13 15:49:20 -08:00
dependabot[bot] | d8081e85ec | Update peft requirement from ==0.7.* to ==0.8.* (#5446) | 2024-02-13 16:27:18 -03:00
dependabot[bot] | 653b195b1e | Update numpy requirement from ==1.24.* to ==1.26.* (#5490) | 2024-02-13 16:26:35 -03:00
dependabot[bot] | 147b4cf3e0 | Bump hqq from 0.1.2.post1 to 0.1.3 (#5489) | 2024-02-13 16:25:02 -03:00
oobabooga | e9fea353c5 | Bump llama-cpp-python to 0.2.40 | 2024-02-13 11:22:34 -08:00
oobabooga | 35537ad3d1 | Bump exllamav2 to 0.0.13.1 (#5463) | 2024-02-07 13:17:04 -03:00
oobabooga | b8e25e8678 | Bump llama-cpp-python to 0.2.39 | 2024-02-07 06:50:47 -08:00
oobabooga | a210999255 | Bump safetensors version | 2024-02-04 18:40:25 -08:00
oobabooga | e98d1086f5 | Bump llama-cpp-python to 0.2.38 (#5420) | 2024-02-01 20:09:30 -03:00
oobabooga | 89f6036e98 | Bump llama-cpp-python, remove python 3.8/3.9, cuda 11.7 (#5397) | 2024-01-30 13:19:20 -03:00
dependabot[bot] | bfe2326a24 | Bump hqq from 0.1.2 to 0.1.2.post1 (#5349) | 2024-01-26 11:10:18 -03:00
oobabooga | 87dc421ee8 | Bump exllamav2 to 0.0.12 (#5352) | 2024-01-22 22:40:12 -03:00
oobabooga | b9d1873301 | Bump transformers to 4.37 | 2024-01-22 04:07:12 -08:00
oobabooga | b5cabb6e9d | Bump llama-cpp-python to 0.2.31 (#5345) | 2024-01-22 08:05:59 -03:00
oobabooga | 8962bb173e | Bump llama-cpp-python to 0.2.29 (#5307) | 2024-01-18 14:24:17 -03:00
oobabooga | 7916cf863b | Bump transformers (necessary for e055967974) | 2024-01-17 12:37:31 -08:00
Rimmy J | d80b191b1c | Add requirement jinja2==3.1.* to fix error as described in issue #5240 (#5249) (Co-authored-by: oobabooga, Rim) | 2024-01-13 21:47:13 -03:00
dependabot[bot] | 32cdc66cf1 | Bump hqq from 0.1.1.post1 to 0.1.2 (#5204) | 2024-01-08 22:51:44 -03:00
oobabooga | f6a204d7c9 | Bump llama-cpp-python to 0.2.26 | 2024-01-03 11:06:36 -08:00
oobabooga | 29b0f14d5a | Bump llama-cpp-python to 0.2.25 (#5077) | 2023-12-25 12:36:32 -03:00
oobabooga | d76b00c211 | Pin lm_eval package version | 2023-12-24 09:22:31 -08:00
oobabooga | f0f6d9bdf9 | Add HQQ back & update version (reverts commit 2289e9031e) | 2023-12-20 07:46:09 -08:00
oobabooga | 258c695ead | Add rich requirement | 2023-12-19 21:58:36 -08:00
oobabooga | 2289e9031e | Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993) | 2023-12-19 21:33:49 -08:00
oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00
dependabot[bot] | 9e48e50428 | Update optimum requirement from ==1.15.* to ==1.16.* (#4986) | 2023-12-18 21:43:29 -03:00
Water | 674be9a09a | Add HQQ quant loader (#4888) (Co-authored-by: oobabooga) | 2023-12-18 21:23:16 -03:00
oobabooga | 12690d3ffc | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00
oobabooga | d2ed0a06bf | Bump ExLlamav2 to 0.0.11 (adds Mixtral support) | 2023-12-16 16:34:15 -08:00
oobabooga | 85816898f9 | Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930) | 2023-12-15 01:58:08 -03:00
oobabooga | 8acecf3aee | Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) | 2023-12-14 09:41:36 -08:00
oobabooga | 21a5bfc67f | Relax optimum requirement | 2023-12-12 14:05:58 -08:00
dependabot[bot] | 7a987417bb | Bump optimum from 1.14.0 to 1.15.0 (#4885) | 2023-12-12 02:32:19 -03:00
dependabot[bot] | a17750db91 | Update peft requirement from ==0.6.* to ==0.7.* (#4886) | 2023-12-12 02:31:30 -03:00
dependabot[bot] | a8a92c6c87 | Update transformers requirement from ==4.35.* to ==4.36.* (#4882) | 2023-12-12 02:30:25 -03:00
俞航 | ac9f154bcc | Bump exllamav2 from 0.0.8 to 0.0.10 & Fix code change (#4782) | 2023-12-04 21:15:05 -03:00
dependabot[bot] | 801ba87c68 | Update accelerate requirement from ==0.24.* to ==0.25.* (#4810) | 2023-12-04 20:36:01 -03:00
dependabot[bot] | 2e83844f35 | Bump safetensors from 0.4.0 to 0.4.1 (#4750) | 2023-12-03 22:50:10 -03:00
oobabooga | 0589ff5b12 | Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701) | 2023-11-21 20:59:39 -03:00
oobabooga | 4b84e45116 | Use +cpuavx2 instead of +cpuavx | 2023-11-20 11:46:38 -08:00
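Nearly every commit in this log is dependency maintenance: each "Bump ..." or "Update ... requirement" commit edits a pinned version specifier in the project's requirements files, and the "+cpuavx2" commit switches which prebuilt wheel variant (a PEP 440 local version tag) is pinned. As a minimal sketch of the pattern, assuming a conventional pip requirements.txt layout (the actual file contents are not shown in this log):

    # requirements.txt -- illustrative excerpt, not the actual file
    llama-cpp-python==0.2.50   # exact pin; the value a "Bump llama-cpp-python" commit edits
    optimum==1.17.*            # wildcard pin, as updated by dependabot in #5548
    jinja2==3.1.*              # pin added in #5249 to fix issue #5240

Under this scheme a version bump is a one-line diff, which is why these commits are small and frequent.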