dependabot[bot] | 234c58ccd1 | Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) | 2023-07-17 21:24:51 -03:00
oobabooga | 49a5389bd3 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 21:23:59 -03:00
dependabot[bot] | 02a5fe6aa2 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 20:18:31 +00:00
    Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.20.3 to 0.21.0.
    - [Release notes](https://github.com/huggingface/accelerate/releases)
    - [Commits](https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0)
    updated-dependencies:
    - dependency-name: accelerate
      dependency-type: direct:production
      update-type: version-update:semver-minor
    Signed-off-by: dependabot[bot] <support@github.com>
oobabooga | 4ce766414b | Bump AutoGPTQ version | 2023-07-17 10:02:12 -07:00
ofirkris | 780a2f2e16 | Bump llama cpp version (#3160) | 2023-07-16 01:54:56 -03:00
    Bump llama cpp version to support better 8K RoPE scaling
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
jllllll | ed3ffd212d | Bump bitsandbytes to 0.40.1.post1 (#3156) | 2023-07-16 01:53:32 -03:00
    817bdf6325...6ec4f0c374
jllllll | 32f12b8bbf | Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135) | 2023-07-13 17:32:37 -03:00
kabachuha | 3f19e94c93 | Add Tensorboard/Weights and biases integration for training (#2624) | 2023-07-12 11:53:31 -03:00
oobabooga | a12dae51b9 | Bump bitsandbytes | 2023-07-11 18:29:08 -07:00
jllllll | fdd596f98f | Bump bitsandbytes Windows wheel (#3097) | 2023-07-11 18:41:24 -03:00
ofirkris | a81cdd1367 | Bump cpp llama version (#3081) | 2023-07-10 19:36:15 -03:00
    Bump cpp llama version to 0.1.70
jllllll | f8dbd7519b | Bump exllama module version (#3087) | 2023-07-10 19:35:59 -03:00
    d769533b6f...e61d4d31d4
ofirkris | 161d984e80 | Bump llama-cpp-python version (#3072) | 2023-07-09 17:22:24 -03:00
    Bump llama-cpp-python version to 0.1.69
oobabooga | 79679b3cfd | Pin fastapi version (for #3042) | 2023-07-07 16:40:57 -07:00
ofirkris | b67c362735 | Bump llama-cpp-python (#3011) | 2023-07-05 11:33:28 -03:00
    Bump llama-cpp-python to V0.1.68
jllllll | 1610d5ffb2 | Bump exllama module to 0.0.5 (#2993) | 2023-07-04 00:15:55 -03:00
oobabooga | c6cae106e7 | Bump llama-cpp-python | 2023-06-28 18:14:45 -03:00
jllllll | 7b048dcf67 | Bump exllama module version to 0.0.4 (#2915) | 2023-06-28 18:09:58 -03:00
jllllll | bef67af23c | Use pre-compiled python module for ExLlama (#2770) | 2023-06-24 20:24:17 -03:00
jllllll | a06acd6d09 | Update bitsandbytes to 0.39.1 (#2799) | 2023-06-21 15:04:45 -03:00
oobabooga | c623e142ac | Bump llama-cpp-python | 2023-06-20 00:49:38 -03:00
oobabooga | 490a1795f0 | Bump peft commit | 2023-06-18 16:42:11 -03:00
dependabot[bot] | 909d8c6ae3 | Bump transformers from 4.30.0 to 4.30.2 (#2695) | 2023-06-14 19:56:28 -03:00
oobabooga | ea0eabd266 | Bump llama-cpp-python version | 2023-06-10 21:59:29 -03:00
oobabooga | 0f8140e99d | Bump transformers/accelerate/peft/autogptq | 2023-06-09 00:25:13 -03:00
oobabooga | 5d515eeb8c | Bump llama-cpp-python wheel | 2023-06-06 13:01:15 -03:00
dependabot[bot] | 97f3fa843f | Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) | 2023-06-05 23:45:58 -03:00
oobabooga | 4e9937aa99 | Bump gradio | 2023-06-05 17:29:21 -03:00
jllllll | 5216117a63 | Fix MacOS incompatibility in requirements.txt (#2485) | 2023-06-02 01:46:16 -03:00
oobabooga | b4ad060c1f | Use cuda 11.7 instead of 11.8 | 2023-06-02 01:04:44 -03:00
oobabooga | d0aca83b53 | Add AutoGPTQ wheels to requirements.txt | 2023-06-02 00:47:11 -03:00
oobabooga | 2cdf525d3b | Bump llama-cpp-python version | 2023-05-31 23:29:02 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367) | 2023-05-29 10:20:18 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
jllllll | 78dbec4c4e | Add 'scipy' to requirements.txt #2335 (#2343) | 2023-05-25 23:26:25 -03:00
    Unlisted dependency of bitsandbytes
oobabooga | 548f05e106 | Add windows bitsandbytes wheel by jllllll | 2023-05-25 10:48:22 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
eiery | 9967e08b1f | update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) | 2023-05-24 10:25:28 -03:00
oobabooga | 1490c0af68 | Remove RWKV from requirements.txt | 2023-05-23 20:49:20 -03:00
dependabot[bot] | baf75356d4 | Bump transformers from 4.29.1 to 4.29.2 (#2268) | 2023-05-22 02:50:18 -03:00
jllllll | 2aa01e2303 | Fix broken version of peft (#2229) | 2023-05-20 17:54:51 -03:00
oobabooga | 511470a89b | Bump llama-cpp-python version | 2023-05-19 12:13:25 -03:00
oobabooga | 259020a0be | Bump gradio to 3.31.0 | 2023-05-16 22:21:15 -03:00
    This fixes Google Colab lagging.
dependabot[bot] | ae54d83455 | Bump transformers from 4.28.1 to 4.29.1 (#2089) | 2023-05-15 19:25:24 -03:00
feeelX | eee986348c | Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) | 2023-05-14 22:41:14 -03:00
dependabot[bot] | a5bb278631 | Bump accelerate from 0.18.0 to 0.19.0 (#1925) | 2023-05-09 02:17:27 -03:00
oobabooga | b040b4110d | Bump llama-cpp-python version | 2023-05-08 00:21:17 -03:00
oobabooga | 81be7c2dd4 | Specify gradio_client version | 2023-05-06 21:50:04 -03:00
oobabooga | 60be76f0fc | Revert gradio bump (gallery is broken) | 2023-05-03 11:53:30 -03:00
oobabooga | d016c38640 | Bump gradio version | 2023-05-02 19:19:33 -03:00
dependabot[bot] | 280c2f285f | Bump safetensors from 0.3.0 to 0.3.1 (#1720) | 2023-05-02 00:42:39 -03:00