Commit Graph

413 Commits

Author SHA1 Message Date
oobabooga
b9d1873301 Bump transformers to 4.37 2024-01-22 04:07:12 -08:00
oobabooga
b5cabb6e9d
Bump llama-cpp-python to 0.2.31 (#5345) 2024-01-22 08:05:59 -03:00
oobabooga
8962bb173e
Bump llama-cpp-python to 0.2.29 (#5307) 2024-01-18 14:24:17 -03:00
oobabooga
7916cf863b Bump transformers (necessary for e055967974) 2024-01-17 12:37:31 -08:00
Rimmy J
d80b191b1c
Add requirement jinja2==3.1.* to fix error as described in issue #5240 (#5249)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: Rim <anonymous@mail.com>
2024-01-13 21:47:13 -03:00
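
A pin like the jinja2 one above is a single line of pip requirement syntax; a minimal sketch of applying it, assuming the standard requirements.txt at the repo root:

    # append the pin; '==3.1.*' is pip's wildcard for any 3.1.x release
    echo 'jinja2==3.1.*' >> requirements.txt
    pip install -r requirements.txt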
dependabot[bot]
32cdc66cf1
Bump hqq from 0.1.1.post1 to 0.1.2 (#5204) 2024-01-08 22:51:44 -03:00
oobabooga
f6a204d7c9 Bump llama-cpp-python to 0.2.26 2024-01-03 11:06:36 -08:00
oobabooga
0e54a09bcb
Remove exllamav1 loaders (#5128) 2023-12-31 01:57:06 -03:00
oobabooga
29b0f14d5a
Bump llama-cpp-python to 0.2.25 (#5077) 2023-12-25 12:36:32 -03:00
Casper
92d5e64a82
Bump AutoAWQ to 0.1.8 (#5061) 2023-12-24 14:27:34 -03:00
oobabooga
d76b00c211 Pin lm_eval package version 2023-12-24 09:22:31 -08:00
oobabooga
f0f6d9bdf9 Add HQQ back & update version
This reverts commit 2289e9031e.
2023-12-20 07:46:09 -08:00
oobabooga
258c695ead Add rich requirement 2023-12-19 21:58:36 -08:00
oobabooga
2289e9031e Remove HQQ from requirements (after https://github.com/oobabooga/text-generation-webui/issues/4993) 2023-12-19 21:33:49 -08:00
oobabooga
de138b8ba6
Add llama-cpp-python wheels with tensor cores support (#5003) 2023-12-19 17:30:53 -03:00
oobabooga
0a299d5959
Bump llama-cpp-python to 0.2.24 (#5001) 2023-12-19 15:22:21 -03:00
dependabot[bot]
9e48e50428
Update optimum requirement from ==1.15.* to ==1.16.* (#4986) 2023-12-18 21:43:29 -03:00
Water
674be9a09a
Add HQQ quant loader (#4888)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-12-18 21:23:16 -03:00
oobabooga
12690d3ffc
Better HF grammar implementation (#4953) 2023-12-17 02:01:23 -03:00
oobabooga
d2ed0a06bf Bump ExLlamav2 to 0.0.11 (adds Mixtral support) 2023-12-16 16:34:15 -08:00
oobabooga
7de10f4c8e Bump AutoGPTQ to 0.6.0 (adds Mixtral support) 2023-12-15 06:18:49 -08:00
oobabooga
85816898f9
Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930) 2023-12-15 01:58:08 -03:00
oobabooga
8acecf3aee Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) 2023-12-14 09:41:36 -08:00
oobabooga
21a5bfc67f Relax optimum requirement 2023-12-12 14:05:58 -08:00
dependabot[bot]
7a987417bb
Bump optimum from 1.14.0 to 1.15.0 (#4885) 2023-12-12 02:32:19 -03:00
dependabot[bot]
a17750db91
Update peft requirement from ==0.6.* to ==0.7.* (#4886) 2023-12-12 02:31:30 -03:00
dependabot[bot]
a8a92c6c87
Update transformers requirement from ==4.35.* to ==4.36.* (#4882) 2023-12-12 02:30:25 -03:00
俞航
ac9f154bcc
Bump exllamav2 from 0.0.8 to 0.0.10 & Fix code change (#4782) 2023-12-04 21:15:05 -03:00
dependabot[bot]
801ba87c68
Update accelerate requirement from ==0.24.* to ==0.25.* (#4810) 2023-12-04 20:36:01 -03:00
dependabot[bot]
2e83844f35
Bump safetensors from 0.4.0 to 0.4.1 (#4750) 2023-12-03 22:50:10 -03:00
oobabooga
0589ff5b12
Bump llama-cpp-python to 0.2.19 & add min_p and typical_p parameters to llama.cpp loader (#4701) 2023-11-21 20:59:39 -03:00
oobabooga
fb124ab6e2 Bump to flash-attention 2.3.4 + switch to GitHub Actions wheels on Windows (#4700) 2023-11-21 15:07:17 -08:00
oobabooga
4b84e45116 Use +cpuavx2 instead of +cpuavx 2023-11-20 11:46:38 -08:00
oobabooga
d7f1bc102b
Fix "Illegal instruction" bug in llama.cpp CPU only version (#4677) 2023-11-20 16:36:38 -03:00
oobabooga
e0ca49ed9c
Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637)
* Update requirements*.txt

* Add back seed
2023-11-18 00:31:27 -03:00
oobabooga
9d6f79db74 Revert "Bump llama-cpp-python to 0.2.18 (#4611)"
This reverts commit 923c8e25fb.
2023-11-17 05:14:25 -08:00
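
Revert entries like the one above match the message git generates by default; a minimal sketch, assuming the short SHA resolves in a local clone:

    # create a new commit undoing 923c8e25fb, keeping git's default
    # 'Revert "..."' / "This reverts commit ..." message
    git revert 923c8e25fb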
oobabooga
923c8e25fb
Bump llama-cpp-python to 0.2.18 (#4611) 2023-11-16 22:55:14 -03:00
Casper
61f429563e
Bump AutoAWQ to 0.1.7 (#4620) 2023-11-16 17:08:08 -03:00
Anton Rogozin
8a9d5a0cea
Update AutoGPTQ to a newer version to fix a LoRA-applying error (#4604) 2023-11-15 20:23:22 -03:00
oobabooga
dea90c7b67 Bump exllamav2 to 0.0.8 2023-11-13 10:34:10 -08:00
oobabooga
2af7e382b1 Revert "Bump llama-cpp-python to 0.2.14"
This reverts commit 5c3eb22ce6.

The new version has issues:

https://github.com/oobabooga/text-generation-webui/issues/4540
https://github.com/abetlen/llama-cpp-python/issues/893
2023-11-09 10:02:13 -08:00
oobabooga
5c3eb22ce6 Bump llama-cpp-python to 0.2.14 2023-11-07 14:20:43 -08:00
dependabot[bot]
fd893baba1
Bump optimum from 1.13.1 to 1.14.0 (#4492) 2023-11-07 00:13:41 -03:00
dependabot[bot]
18739c8b3a
Update peft requirement from ==0.5.* to ==0.6.* (#4494) 2023-11-07 00:12:59 -03:00
Orang
2081f43ac2
Bump transformers to 4.35.* (#4474) 2023-11-04 14:00:24 -03:00
Casper
cfbd108826
Bump AWQ to 0.1.6 (#4470) 2023-11-04 13:09:41 -03:00
Orang
6b7fa45cc3
Update exllamav2 version (#4417) 2023-10-31 19:12:14 -03:00
Casper
41e159e88f
Bump AutoAWQ to v0.1.5 (#4410) 2023-10-31 19:11:22 -03:00
James Braza
f481ce3dd8
Adding platform_system to autoawq (#4390) 2023-10-27 01:02:28 -03:00
oobabooga
839a87bac8 Fix is_ccl_available & is_xpu_available imports 2023-10-26 20:27:04 -07:00
oobabooga
6086768309 Bump gradio to 3.50.* 2023-10-22 21:21:26 -07:00
Brian Dashore
3345da2ea4
Add flash-attention 2 for Windows (#4235) 2023-10-21 03:46:23 -03:00
mjbogusz
8f6405d2fa
Python 3.11, 3.9, 3.8 support (#4233)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-20 21:13:33 -03:00
Johan
2706394bfe
Relax numpy version requirements (#4291) 2023-10-15 12:05:06 -03:00
jllllll
1f5a2c5597
Use Pytorch 2.1 exllama wheels (#4285) 2023-10-14 15:27:59 -03:00
oobabooga
cd1cad1b47 Bump exllamav2 2023-10-14 11:23:07 -07:00
oobabooga
fae8062d39
Bump to latest gradio (3.47) (#4258) 2023-10-10 22:20:49 -03:00
dependabot[bot]
520cbb2ab1
Bump safetensors from 0.3.2 to 0.4.0 (#4249) 2023-10-10 17:41:09 -03:00
jllllll
0eda9a0549
Use GPTQ wheels compatible with Pytorch 2.1 (#4210) 2023-10-07 00:35:41 -03:00
oobabooga
d33facc9fe
Bump to PyTorch with CUDA 11.8 (#4209) 2023-10-07 00:23:49 -03:00
Casper
0aa853f575
Bump AutoAWQ to v0.1.4 (#4203) 2023-10-06 15:30:01 -03:00
oobabooga
7d3201923b Bump AutoAWQ 2023-10-05 15:14:15 -07:00
turboderp
8a98646a21
Bump ExLlamaV2 to 0.0.5 (#4186) 2023-10-05 19:12:22 -03:00
cal066
cc632c3f33
AutoAWQ: initial support (#3999) 2023-10-05 13:19:18 -03:00
oobabooga
3f56151f03 Bump to transformers 4.34 2023-10-05 08:55:14 -07:00
oobabooga
ae4ba3007f
Add grammar to transformers and _HF loaders (#4091) 2023-10-05 10:01:36 -03:00
jllllll
41a2de96e5
Bump llama-cpp-python to 0.2.11 2023-10-01 18:08:10 -05:00
oobabooga
92a39c619b Add Mistral support 2023-09-28 15:41:03 -07:00
oobabooga
f46ba12b42 Add flash-attn wheels for Linux 2023-09-28 14:45:52 -07:00
jllllll
2bd23c29cb
Bump llama-cpp-python to 0.2.7 (#4110) 2023-09-27 23:45:36 -03:00
jllllll
13a54729b1
Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) 2023-09-26 21:36:14 -03:00
oobabooga
2e7b6b0014
Create alternative requirements.txt with AMD and Metal wheels (#4052) 2023-09-24 09:58:29 -03:00
oobabooga
05c4a4f83c Bump exllamav2 2023-09-21 14:56:01 -07:00
jllllll
b7c55665c1
Bump llama-cpp-python to 0.2.6 (#3982) 2023-09-18 14:08:37 -03:00
dependabot[bot]
661bfaac8e
Update accelerate from ==0.22.* to ==0.23.* (#3981) 2023-09-17 22:42:12 -03:00
Thireus ☠
45335fa8f4
Bump ExLlamav2 to v0.0.2 (#3970) 2023-09-17 19:24:40 -03:00
dependabot[bot]
eb9ebabec7
Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) 2023-09-13 02:13:51 -03:00
cal066
a4e4e887d7
Bump ctransformers to 0.2.27 (#3893) 2023-09-13 00:37:31 -03:00
jllllll
1a5d68015a
Bump llama-cpp-python to 0.1.85 (#3887) 2023-09-12 19:41:41 -03:00
oobabooga
833bc59f1b Remove ninja from requirements.txt
It's installed with exllamav2 automatically
2023-09-12 15:12:56 -07:00
dependabot[bot]
0efbe5ef76
Bump optimum from 1.12.0 to 1.13.1 (#3872) 2023-09-12 15:53:21 -03:00
oobabooga
c2a309f56e
Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) 2023-09-12 14:33:07 -03:00
oobabooga
ed86878f02 Remove GGML support 2023-09-11 07:44:00 -07:00
jllllll
859b4fd737
Bump exllama to 0.1.17 (#3847) 2023-09-11 01:13:14 -03:00
dependabot[bot]
1d6b384828
Update transformers requirement from ==4.32.* to ==4.33.* (#3865) 2023-09-11 01:12:22 -03:00
jllllll
e8f234ca8f
Bump llama-cpp-python to 0.1.84 (#3854) 2023-09-11 01:11:33 -03:00
oobabooga
66d5caba1b Pin pydantic version (closes #3850) 2023-09-10 21:09:04 -07:00
oobabooga
0576691538 Add optimum to requirements (for GPTQ LoRA training)
See https://github.com/oobabooga/text-generation-webui/issues/3655
2023-08-31 08:45:38 -07:00
jllllll
9626f57721
Bump exllama to 0.0.14 (#3758) 2023-08-30 13:43:38 -03:00
jllllll
dac5f4b912
Bump llama-cpp-python to 0.1.83 (#3745) 2023-08-29 22:35:59 -03:00
VishwasKukreti
a9a1784420
Update accelerate to 0.22 in requirements.txt (#3725) 2023-08-29 17:47:37 -03:00
jllllll
fe1f7c6513
Bump ctransformers to 0.2.25 (#3740) 2023-08-29 17:24:36 -03:00
jllllll
22b2a30ec7
Bump llama-cpp-python to 0.1.82 (#3730) 2023-08-28 18:02:24 -03:00
jllllll
7d3a0b5387
Bump llama-cpp-python to 0.1.81 (#3716) 2023-08-27 22:38:41 -03:00
oobabooga
7f5370a272 Minor fixes/cosmetics 2023-08-26 22:11:07 -07:00
jllllll
4a999e3bcd
Use separate llama-cpp-python packages for GGML support 2023-08-26 10:40:08 -05:00
oobabooga
6e6431e73f Update requirements.txt 2023-08-26 01:07:28 -07:00
cal066
960980247f
ctransformers: gguf support (#3685) 2023-08-25 11:33:04 -03:00
oobabooga
26c5e5e878 Bump autogptq 2023-08-24 19:23:08 -07:00
oobabooga
2b675533f7 Un-bump safetensors
The newest one doesn't work on Windows yet
2023-08-23 14:36:03 -07:00
oobabooga
335c49cc7e Bump peft and transformers 2023-08-22 13:14:59 -07:00
tkbit
df165fe6c4
Use numpy==1.24 in requirements.txt (#3651)
The whisper extension needs numpy 1.24 to work properly
2023-08-22 16:55:17 -03:00
cal066
e042bf8624
ctransformers: add mlock and no-mmap options (#3649) 2023-08-22 16:51:34 -03:00
oobabooga
b96fd22a81
Refactor the training tab (#3619) 2023-08-18 16:58:38 -03:00
jllllll
1a71ab58a9
Bump llama_cpp_python_cuda to 0.1.78 (#3614) 2023-08-18 12:04:01 -03:00
oobabooga
6170b5ba31 Bump llama-cpp-python 2023-08-17 21:41:02 -07:00
oobabooga
ccfc02a28d
Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) 2023-08-14 15:15:55 -03:00
oobabooga
8294eadd38 Bump AutoGPTQ wheel 2023-08-14 11:13:46 -07:00
jllllll
73421b1fed
Bump ctransformers wheel version (#3558) 2023-08-12 23:02:47 -03:00
cal066
7a4fcee069
Add ctransformers support (#3313)
---------

Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
2023-08-11 14:41:33 -03:00
jllllll
bee73cedbd
Streamline GPTQ-for-LLaMa support 2023-08-09 23:42:34 -05:00
oobabooga
a4e48cbdb6 Bump AutoGPTQ 2023-08-09 08:31:17 -07:00
oobabooga
7c1300fab5 Pin aiofiles version to fix statvfs issue 2023-08-09 08:07:55 -07:00
oobabooga
2d0634cd07 Bump transformers commit for positive prompts 2023-08-07 08:57:19 -07:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) 2023-08-06 17:22:48 -03:00
jllllll
5ee95d126c
Bump exllama wheels to 0.0.10 (#3467) 2023-08-05 13:46:14 -03:00
jllllll
6e30f76ba5
Bump bitsandbytes to 0.41.1 (#3457) 2023-08-04 19:28:59 -03:00
jllllll
c4e14a757c
Bump exllama module to 0.0.9 (#3338) 2023-07-29 22:16:23 -03:00
oobabooga
77d2e9f060 Remove flexgen 2 2023-07-25 15:18:25 -07:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support (#3285) 2023-07-24 16:37:03 -03:00
oobabooga
6f4830b4d3 Bump peft commit 2023-07-24 09:49:57 -07:00
jllllll
eb105b0495
Bump llama-cpp-python to 0.1.74 (#3257) 2023-07-24 11:15:42 -03:00
jllllll
152cf1e8ef
Bump bitsandbytes to 0.41.0 (#3258)
e229fbce66...a06a0f6a08
2023-07-24 11:06:18 -03:00
jllllll
8d31d20c9a
Bump exllama module to 0.0.8 (#3256)
39b3541cdd...3f83ebb378
2023-07-24 11:05:54 -03:00
oobabooga
63ece46213 Merge branch 'main' into dev 2023-07-20 07:06:41 -07:00
oobabooga
4b19b74e6c Add CUDA wheels for llama-cpp-python by jllllll 2023-07-19 19:33:43 -07:00
jllllll
87926d033d
Bump exllama module to 0.0.7 (#3211) 2023-07-19 22:24:47 -03:00
oobabooga
08c23b62c7 Bump llama-cpp-python and transformers 2023-07-19 07:19:12 -07:00
jllllll
c535f14e5f
Bump bitsandbytes Windows wheel to 0.40.2 (#3186) 2023-07-18 11:39:43 -03:00
dependabot[bot]
234c58ccd1
Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) 2023-07-17 21:24:51 -03:00
oobabooga
49a5389bd3
Bump accelerate from 0.20.3 to 0.21.0 2023-07-17 21:23:59 -03:00
dependabot[bot]
02a5fe6aa2
Bump accelerate from 0.20.3 to 0.21.0
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.20.3 to 0.21.0.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0)

---
updated-dependencies:
- dependency-name: accelerate
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 20:18:31 +00:00
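
The dependabot[bot] commits throughout this log come from GitHub's automated dependency updater; a minimal .github/dependabot.yml that would open pip bump PRs like these, offered as an illustrative sketch rather than the repository's actual configuration:

    # hypothetical config; the repo's real dependabot.yml is not shown here
    cat > .github/dependabot.yml <<'EOF'
    version: 2
    updates:
      - package-ecosystem: "pip"   # watch requirements*.txt
        directory: "/"             # repo root
        schedule:
          interval: "weekly"
    EOF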
oobabooga
4ce766414b Bump AutoGPTQ version 2023-07-17 10:02:12 -07:00
ofirkris
780a2f2e16
Bump llama-cpp-python version (#3160)
Bump llama-cpp-python version to support better 8K RoPE scaling

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-16 01:54:56 -03:00
jllllll
ed3ffd212d
Bump bitsandbytes to 0.40.1.post1 (#3156)
817bdf6325...6ec4f0c374
2023-07-16 01:53:32 -03:00
jllllll
32f12b8bbf
Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135) 2023-07-13 17:32:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training (#2624) 2023-07-12 11:53:31 -03:00
oobabooga
a12dae51b9 Bump bitsandbytes 2023-07-11 18:29:08 -07:00
jllllll
fdd596f98f
Bump bitsandbytes Windows wheel (#3097) 2023-07-11 18:41:24 -03:00
ofirkris
a81cdd1367
Bump llama-cpp-python version (#3081)
Bump llama-cpp-python version to 0.1.70
2023-07-10 19:36:15 -03:00
jllllll
f8dbd7519b
Bump exllama module version (#3087)
d769533b6f...e61d4d31d4
2023-07-10 19:35:59 -03:00
ofirkris
161d984e80
Bump llama-cpp-python version (#3072)
Bump llama-cpp-python version to 0.1.69
2023-07-09 17:22:24 -03:00
oobabooga
79679b3cfd Pin fastapi version (for #3042) 2023-07-07 16:40:57 -07:00
ofirkris
b67c362735
Bump llama-cpp-python (#3011)
Bump llama-cpp-python to v0.1.68
2023-07-05 11:33:28 -03:00
jllllll
1610d5ffb2
Bump exllama module to 0.0.5 (#2993) 2023-07-04 00:15:55 -03:00
oobabooga
c6cae106e7 Bump llama-cpp-python 2023-06-28 18:14:45 -03:00
jllllll
7b048dcf67
Bump exllama module version to 0.0.4 (#2915) 2023-06-28 18:09:58 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama (#2770) 2023-06-24 20:24:17 -03:00
jllllll
a06acd6d09
Update bitsandbytes to 0.39.1 (#2799) 2023-06-21 15:04:45 -03:00
oobabooga
c623e142ac Bump llama-cpp-python 2023-06-20 00:49:38 -03:00
oobabooga
490a1795f0 Bump peft commit 2023-06-18 16:42:11 -03:00
dependabot[bot]
909d8c6ae3
Bump transformers from 4.30.0 to 4.30.2 (#2695) 2023-06-14 19:56:28 -03:00
oobabooga
ea0eabd266 Bump llama-cpp-python version 2023-06-10 21:59:29 -03:00
oobabooga
0f8140e99d Bump transformers/accelerate/peft/autogptq 2023-06-09 00:25:13 -03:00
oobabooga
5d515eeb8c Bump llama-cpp-python wheel 2023-06-06 13:01:15 -03:00
dependabot[bot]
97f3fa843f
Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) 2023-06-05 23:45:58 -03:00
oobabooga
4e9937aa99 Bump gradio 2023-06-05 17:29:21 -03:00
jllllll
5216117a63
Fix MacOS incompatibility in requirements.txt (#2485) 2023-06-02 01:46:16 -03:00
oobabooga
b4ad060c1f Use cuda 11.7 instead of 11.8 2023-06-02 01:04:44 -03:00
oobabooga
d0aca83b53 Add AutoGPTQ wheels to requirements.txt 2023-06-02 00:47:11 -03:00
oobabooga
2cdf525d3b Bump llama-cpp-python version 2023-05-31 23:29:02 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
jllllll
78dbec4c4e
Add 'scipy' to requirements.txt #2335 (#2343)
Unlisted dependency of bitsandbytes
2023-05-25 23:26:25 -03:00
oobabooga
548f05e106 Add Windows bitsandbytes wheel by jllllll 2023-05-25 10:48:22 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter (#2320) 2023-05-25 01:14:13 -03:00
eiery
9967e08b1f
Update llama-cpp-python to v0.1.53 for GGML v3, fixes #2245 (#2264) 2023-05-24 10:25:28 -03:00
oobabooga
1490c0af68 Remove RWKV from requirements.txt 2023-05-23 20:49:20 -03:00
dependabot[bot]
baf75356d4
Bump transformers from 4.29.1 to 4.29.2 (#2268) 2023-05-22 02:50:18 -03:00
jllllll
2aa01e2303
Fix broken version of peft (#2229) 2023-05-20 17:54:51 -03:00
oobabooga
511470a89b Bump llama-cpp-python version 2023-05-19 12:13:25 -03:00
oobabooga
259020a0be Bump gradio to 3.31.0
This fixes Google Colab lagging.
2023-05-16 22:21:15 -03:00
dependabot[bot]
ae54d83455
Bump transformers from 4.28.1 to 4.29.1 (#2089) 2023-05-15 19:25:24 -03:00
feeelX
eee986348c
Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) 2023-05-14 22:41:14 -03:00
dependabot[bot]
a5bb278631
Bump accelerate from 0.18.0 to 0.19.0 (#1925) 2023-05-09 02:17:27 -03:00
oobabooga
b040b4110d Bump llama-cpp-python version 2023-05-08 00:21:17 -03:00
oobabooga
81be7c2dd4 Specify gradio_client version 2023-05-06 21:50:04 -03:00
oobabooga
60be76f0fc Revert gradio bump (gallery is broken) 2023-05-03 11:53:30 -03:00
oobabooga
d016c38640 Bump gradio version 2023-05-02 19:19:33 -03:00
dependabot[bot]
280c2f285f
Bump safetensors from 0.3.0 to 0.3.1 (#1720) 2023-05-02 00:42:39 -03:00
oobabooga
56b13d5d48 Bump llama-cpp-python version 2023-05-02 00:41:54 -03:00
oobabooga
2f6e2ddeac Bump llama-cpp-python version 2023-04-24 03:42:03 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322) 2023-04-21 00:20:33 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
dependabot[bot]
4cd2a9d824
Bump transformers from 4.28.0 to 4.28.1 (#1288) 2023-04-16 21:12:57 -03:00
oobabooga
d2ea925fa5 Bump llama-cpp-python to use LlamaCache 2023-04-16 00:53:40 -03:00
catalpaaa
94700cc7a5
Bump gradio to 3.25 (#1089) 2023-04-14 23:45:25 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support (#1103)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
dependabot[bot]
852a5aa13d
Bump bitsandbytes from 0.37.2 to 0.38.1 (#1158) 2023-04-13 21:23:14 -03:00
dependabot[bot]
84576a80d2
Bump llama-cpp-python from 0.1.30 to 0.1.33 (#1157) 2023-04-13 21:17:59 -03:00
oobabooga
2908a51587 Settle for transformers 4.28.0 2023-04-13 21:07:00 -03:00
oobabooga
32d078487e Add llama-cpp-python to requirements.txt 2023-04-10 10:45:51 -03:00
oobabooga
d272ac46dd Add Pillow as a requirement 2023-04-08 18:48:46 -03:00
oobabooga
58ed87e5d9
Update requirements.txt 2023-04-06 18:42:54 -03:00
dependabot[bot]
21be80242e
Bump rwkv from 0.7.2 to 0.7.3 (#842) 2023-04-06 17:52:27 -03:00
oobabooga
113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2023-04-06 16:04:03 -03:00
oobabooga
59058576b5 Remove unused requirement 2023-04-06 13:28:21 -03:00
oobabooga
03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
oobabooga
b2ce7282a1
Use past transformers version #773 2023-04-04 16:11:42 -03:00
dependabot[bot]
ad37f396fc
Bump rwkv from 0.7.1 to 0.7.2 (#747) 2023-04-03 14:29:57 -03:00
dependabot[bot]
18f756ada6
Bump gradio from 3.24.0 to 3.24.1 (#746) 2023-04-03 14:29:37 -03:00