Author | Commit | Message | Date
-------|--------|---------|-----
oobabooga | 92a39c619b | Add Mistral support | 2023-09-28 15:41:03 -07:00
oobabooga | f46ba12b42 | Add flash-attn wheels for Linux | 2023-09-28 14:45:52 -07:00
jllllll | 2bd23c29cb | Bump llama-cpp-python to 0.2.7 (#4110) | 2023-09-27 23:45:36 -03:00
jllllll | 13a54729b1 | Bump exllamav2 to 0.0.4 and use pre-built wheels (#4095) | 2023-09-26 21:36:14 -03:00
oobabooga | 2e7b6b0014 | Create alternative requirements.txt with AMD and Metal wheels (#4052) | 2023-09-24 09:58:29 -03:00
oobabooga | 05c4a4f83c | Bump exllamav2 | 2023-09-21 14:56:01 -07:00
jllllll | b7c55665c1 | Bump llama-cpp-python to 0.2.6 (#3982) | 2023-09-18 14:08:37 -03:00
dependabot[bot] | 661bfaac8e | Update accelerate from ==0.22.* to ==0.23.* (#3981) | 2023-09-17 22:42:12 -03:00
Thireus ☠ | 45335fa8f4 | Bump ExLlamav2 to v0.0.2 (#3970) | 2023-09-17 19:24:40 -03:00
dependabot[bot] | eb9ebabec7 | Bump exllamav2 from 0.0.0 to 0.0.1 (#3896) | 2023-09-13 02:13:51 -03:00
cal066 | a4e4e887d7 | Bump ctransformers to 0.2.27 (#3893) | 2023-09-13 00:37:31 -03:00
jllllll | 1a5d68015a | Bump llama-cpp-python to 0.1.85 (#3887) | 2023-09-12 19:41:41 -03:00
oobabooga | 833bc59f1b | Remove ninja from requirements.txt (it is installed with exllamav2 automatically) | 2023-09-12 15:12:56 -07:00
dependabot[bot] | 0efbe5ef76 | Bump optimum from 1.12.0 to 1.13.1 (#3872) | 2023-09-12 15:53:21 -03:00
oobabooga | c2a309f56e | Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) | 2023-09-12 14:33:07 -03:00
oobabooga | ed86878f02 | Remove GGML support | 2023-09-11 07:44:00 -07:00
jllllll | 859b4fd737 | Bump exllama to 0.1.17 (#3847) | 2023-09-11 01:13:14 -03:00
dependabot[bot] | 1d6b384828 | Update transformers requirement from ==4.32.* to ==4.33.* (#3865) | 2023-09-11 01:12:22 -03:00
jllllll | e8f234ca8f | Bump llama-cpp-python to 0.1.84 (#3854) | 2023-09-11 01:11:33 -03:00
oobabooga | 66d5caba1b | Pin pydantic version (closes #3850) | 2023-09-10 21:09:04 -07:00
oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training; see https://github.com/oobabooga/text-generation-webui/issues/3655) | 2023-08-31 08:45:38 -07:00
jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00
jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00
VishwasKukreti | a9a1784420 | Update accelerate to 0.22 in requirements.txt (#3725) | 2023-08-29 17:47:37 -03:00
jllllll | fe1f7c6513 | Bump ctransformers to 0.2.25 (#3740) | 2023-08-29 17:24:36 -03:00
jllllll | 22b2a30ec7 | Bump llama-cpp-python to 0.1.82 (#3730) | 2023-08-28 18:02:24 -03:00
jllllll | 7d3a0b5387 | Bump llama-cpp-python to 0.1.81 (#3716) | 2023-08-27 22:38:41 -03:00
oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00
oobabooga | 6e6431e73f | Update requirements.txt | 2023-08-26 01:07:28 -07:00
cal066 | 960980247f | ctransformers: gguf support (#3685) | 2023-08-25 11:33:04 -03:00
oobabooga | 26c5e5e878 | Bump autogptq | 2023-08-24 19:23:08 -07:00
oobabooga | 2b675533f7 | Un-bump safetensors (the newest one doesn't work on Windows yet) | 2023-08-23 14:36:03 -07:00
oobabooga | 335c49cc7e | Bump peft and transformers | 2023-08-22 13:14:59 -07:00
tkbit | df165fe6c4 | Use numpy==1.24 in requirements.txt (#3651) (the whisper extension needs numpy 1.24 to work properly) | 2023-08-22 16:55:17 -03:00
cal066 | e042bf8624 | ctransformers: add mlock and no-mmap options (#3649) | 2023-08-22 16:51:34 -03:00
oobabooga | b96fd22a81 | Refactor the training tab (#3619) | 2023-08-18 16:58:38 -03:00
jllllll | 1a71ab58a9 | Bump llama_cpp_python_cuda to 0.1.78 (#3614) | 2023-08-18 12:04:01 -03:00
oobabooga | 6170b5ba31 | Bump llama-cpp-python | 2023-08-17 21:41:02 -07:00
oobabooga | ccfc02a28d | Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) | 2023-08-14 15:15:55 -03:00
oobabooga | 8294eadd38 | Bump AutoGPTQ wheel | 2023-08-14 11:13:46 -07:00
jllllll | 73421b1fed | Bump ctransformers wheel version (#3558) | 2023-08-12 23:02:47 -03:00
cal066 | 7a4fcee069 | Add ctransformers support (#3313) (co-authored by cal066, oobabooga, and randoentity) | 2023-08-11 14:41:33 -03:00
jllllll | bee73cedbd | Streamline GPTQ-for-LLaMa support | 2023-08-09 23:42:34 -05:00
oobabooga | a4e48cbdb6 | Bump AutoGPTQ | 2023-08-09 08:31:17 -07:00
oobabooga | 7c1300fab5 | Pin aiofiles version to fix statvfs issue | 2023-08-09 08:07:55 -07:00
oobabooga | 2d0634cd07 | Bump transformers commit for positive prompts | 2023-08-07 08:57:19 -07:00
oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00
jllllll | 5ee95d126c | Bump exllama wheels to 0.0.10 (#3467) | 2023-08-05 13:46:14 -03:00
jllllll | 6e30f76ba5 | Bump bitsandbytes to 0.41.1 (#3457) | 2023-08-04 19:28:59 -03:00
jllllll | c4e14a757c | Bump exllama module to 0.0.9 (#3338) | 2023-07-29 22:16:23 -03:00
oobabooga | 77d2e9f060 | Remove flexgen 2 | 2023-07-25 15:18:25 -07:00
oobabooga | a07d070b6c | Add llama-2-70b GGML support (#3285) | 2023-07-24 16:37:03 -03:00
oobabooga | 6f4830b4d3 | Bump peft commit | 2023-07-24 09:49:57 -07:00
jllllll | eb105b0495 | Bump llama-cpp-python to 0.1.74 (#3257) | 2023-07-24 11:15:42 -03:00
jllllll | 152cf1e8ef | Bump bitsandbytes to 0.41.0 (#3258) (e229fbce66...a06a0f6a08) | 2023-07-24 11:06:18 -03:00
jllllll | 8d31d20c9a | Bump exllama module to 0.0.8 (#3256) (39b3541cdd...3f83ebb378) | 2023-07-24 11:05:54 -03:00
oobabooga | 63ece46213 | Merge branch 'main' into dev | 2023-07-20 07:06:41 -07:00
oobabooga | 4b19b74e6c | Add CUDA wheels for llama-cpp-python by jllllll | 2023-07-19 19:33:43 -07:00
jllllll | 87926d033d | Bump exllama module to 0.0.7 (#3211) | 2023-07-19 22:24:47 -03:00
oobabooga | 08c23b62c7 | Bump llama-cpp-python and transformers | 2023-07-19 07:19:12 -07:00
jllllll | c535f14e5f | Bump bitsandbytes Windows wheel to 0.40.2 (#3186) | 2023-07-18 11:39:43 -03:00
dependabot[bot] | 234c58ccd1 | Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) | 2023-07-17 21:24:51 -03:00
oobabooga | 49a5389bd3 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 21:23:59 -03:00
dependabot[bot] | 02a5fe6aa2 | Bump accelerate from 0.20.3 to 0.21.0 (https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0) | 2023-07-17 20:18:31 +00:00
oobabooga | 4ce766414b | Bump AutoGPTQ version | 2023-07-17 10:02:12 -07:00
ofirkris | 780a2f2e16 | Bump llama cpp version (#3160) (to support better 8K RoPE scaling) | 2023-07-16 01:54:56 -03:00
jllllll | ed3ffd212d | Bump bitsandbytes to 0.40.1.post1 (#3156) (817bdf6325...6ec4f0c374) | 2023-07-16 01:53:32 -03:00
jllllll | 32f12b8bbf | Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135) | 2023-07-13 17:32:37 -03:00
kabachuha | 3f19e94c93 | Add Tensorboard/Weights and biases integration for training (#2624) | 2023-07-12 11:53:31 -03:00
oobabooga | a12dae51b9 | Bump bitsandbytes | 2023-07-11 18:29:08 -07:00
jllllll | fdd596f98f | Bump bitsandbytes Windows wheel (#3097) | 2023-07-11 18:41:24 -03:00
ofirkris | a81cdd1367 | Bump cpp llama version (#3081) (to 0.1.70) | 2023-07-10 19:36:15 -03:00
jllllll | f8dbd7519b | Bump exllama module version (#3087) (d769533b6f...e61d4d31d4) | 2023-07-10 19:35:59 -03:00
ofirkris | 161d984e80 | Bump llama-cpp-python version (#3072) (to 0.1.69) | 2023-07-09 17:22:24 -03:00
oobabooga | 79679b3cfd | Pin fastapi version (for #3042) | 2023-07-07 16:40:57 -07:00
ofirkris | b67c362735 | Bump llama-cpp-python (#3011) (to v0.1.68) | 2023-07-05 11:33:28 -03:00
jllllll | 1610d5ffb2 | Bump exllama module to 0.0.5 (#2993) | 2023-07-04 00:15:55 -03:00
oobabooga | c6cae106e7 | Bump llama-cpp-python | 2023-06-28 18:14:45 -03:00
jllllll | 7b048dcf67 | Bump exllama module version to 0.0.4 (#2915) | 2023-06-28 18:09:58 -03:00
jllllll | bef67af23c | Use pre-compiled python module for ExLlama (#2770) | 2023-06-24 20:24:17 -03:00
jllllll | a06acd6d09 | Update bitsandbytes to 0.39.1 (#2799) | 2023-06-21 15:04:45 -03:00
oobabooga | c623e142ac | Bump llama-cpp-python | 2023-06-20 00:49:38 -03:00
oobabooga | 490a1795f0 | Bump peft commit | 2023-06-18 16:42:11 -03:00
dependabot[bot] | 909d8c6ae3 | Bump transformers from 4.30.0 to 4.30.2 (#2695) | 2023-06-14 19:56:28 -03:00
oobabooga | ea0eabd266 | Bump llama-cpp-python version | 2023-06-10 21:59:29 -03:00
oobabooga | 0f8140e99d | Bump transformers/accelerate/peft/autogptq | 2023-06-09 00:25:13 -03:00
oobabooga | 5d515eeb8c | Bump llama-cpp-python wheel | 2023-06-06 13:01:15 -03:00
dependabot[bot] | 97f3fa843f | Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537) | 2023-06-05 23:45:58 -03:00
oobabooga | 4e9937aa99 | Bump gradio | 2023-06-05 17:29:21 -03:00
jllllll | 5216117a63 | Fix MacOS incompatibility in requirements.txt (#2485) | 2023-06-02 01:46:16 -03:00
oobabooga | b4ad060c1f | Use cuda 11.7 instead of 11.8 | 2023-06-02 01:04:44 -03:00
oobabooga | d0aca83b53 | Add AutoGPTQ wheels to requirements.txt | 2023-06-02 00:47:11 -03:00
oobabooga | 2cdf525d3b | Bump llama-cpp-python version | 2023-05-31 23:29:02 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367) (co-authored by oobabooga) | 2023-05-29 10:20:18 -03:00
jllllll | 78dbec4c4e | Add 'scipy' to requirements.txt #2335 (#2343) (unlisted dependency of bitsandbytes) | 2023-05-25 23:26:25 -03:00
oobabooga | 548f05e106 | Add windows bitsandbytes wheel by jllllll | 2023-05-25 10:48:22 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
eiery | 9967e08b1f | Update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) | 2023-05-24 10:25:28 -03:00
oobabooga | 1490c0af68 | Remove RWKV from requirements.txt | 2023-05-23 20:49:20 -03:00
dependabot[bot] | baf75356d4 | Bump transformers from 4.29.1 to 4.29.2 (#2268) | 2023-05-22 02:50:18 -03:00
jllllll | 2aa01e2303 | Fix broken version of peft (#2229) | 2023-05-20 17:54:51 -03:00
oobabooga | 511470a89b | Bump llama-cpp-python version | 2023-05-19 12:13:25 -03:00
oobabooga | 259020a0be | Bump gradio to 3.31.0 (fixes Google Colab lagging) | 2023-05-16 22:21:15 -03:00
dependabot[bot] | ae54d83455 | Bump transformers from 4.28.1 to 4.29.1 (#2089) | 2023-05-15 19:25:24 -03:00
feeelX | eee986348c | Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) | 2023-05-14 22:41:14 -03:00
dependabot[bot] | a5bb278631 | Bump accelerate from 0.18.0 to 0.19.0 (#1925) | 2023-05-09 02:17:27 -03:00
oobabooga | b040b4110d | Bump llama-cpp-python version | 2023-05-08 00:21:17 -03:00
oobabooga | 81be7c2dd4 | Specify gradio_client version | 2023-05-06 21:50:04 -03:00
oobabooga | 60be76f0fc | Revert gradio bump (gallery is broken) | 2023-05-03 11:53:30 -03:00
oobabooga | d016c38640 | Bump gradio version | 2023-05-02 19:19:33 -03:00
dependabot[bot] | 280c2f285f | Bump safetensors from 0.3.0 to 0.3.1 (#1720) | 2023-05-02 00:42:39 -03:00
oobabooga | 56b13d5d48 | Bump llama-cpp-python version | 2023-05-02 00:41:54 -03:00
oobabooga | 2f6e2ddeac | Bump llama-cpp-python version | 2023-04-24 03:42:03 -03:00
oobabooga | c4f4f41389 | Add an "Evaluate" tab to calculate the perplexities of models (#1322) | 2023-04-21 00:20:33 -03:00
oobabooga | 39099663a0 | Add 4-bit LoRA support (#1200) | 2023-04-16 23:26:52 -03:00
dependabot[bot] | 4cd2a9d824 | Bump transformers from 4.28.0 to 4.28.1 (#1288) | 2023-04-16 21:12:57 -03:00
oobabooga | d2ea925fa5 | Bump llama-cpp-python to use LlamaCache | 2023-04-16 00:53:40 -03:00
catalpaaa | 94700cc7a5 | Bump gradio to 3.25 (#1089) | 2023-04-14 23:45:25 -03:00
Alex "mcmonkey" Goodwin | 64e3b44e0f | initial multi-lora support (#1103) (co-authored by oobabooga) | 2023-04-14 14:52:06 -03:00
dependabot[bot] | 852a5aa13d | Bump bitsandbytes from 0.37.2 to 0.38.1 (#1158) | 2023-04-13 21:23:14 -03:00
dependabot[bot] | 84576a80d2 | Bump llama-cpp-python from 0.1.30 to 0.1.33 (#1157) | 2023-04-13 21:17:59 -03:00
oobabooga | 2908a51587 | Settle for transformers 4.28.0 | 2023-04-13 21:07:00 -03:00
oobabooga | 32d078487e | Add llama-cpp-python to requirements.txt | 2023-04-10 10:45:51 -03:00
oobabooga | d272ac46dd | Add Pillow as a requirement | 2023-04-08 18:48:46 -03:00
oobabooga | 58ed87e5d9 | Update requirements.txt | 2023-04-06 18:42:54 -03:00
dependabot[bot] | 21be80242e | Bump rwkv from 0.7.2 to 0.7.3 (#842) | 2023-04-06 17:52:27 -03:00
oobabooga | 113f94b61e | Bump transformers (16-bit llama must be reconverted/redownloaded) | 2023-04-06 16:04:03 -03:00
oobabooga | 59058576b5 | Remove unused requirement | 2023-04-06 13:28:21 -03:00
oobabooga | 03cb44fc8c | Add new llama.cpp library (2048 context, temperature, etc. now work) | 2023-04-06 13:12:14 -03:00
oobabooga | b2ce7282a1 | Use past transformers version #773 | 2023-04-04 16:11:42 -03:00
dependabot[bot] | ad37f396fc | Bump rwkv from 0.7.1 to 0.7.2 (#747) | 2023-04-03 14:29:57 -03:00
dependabot[bot] | 18f756ada6 | Bump gradio from 3.24.0 to 3.24.1 (#746) | 2023-04-03 14:29:37 -03:00
TheTerrasque | 2157bb4319 | New yaml character format (#337 from TheTerrasque/feature/yaml-characters) (doesn't break backward compatibility with JSON characters) | 2023-04-02 20:34:25 -03:00
oobabooga | a5c9b7d977 | Bump llamacpp version | 2023-03-31 15:08:01 -03:00
oobabooga | 4d98623041 | Merge branch 'main' into feature/llamacpp | 2023-03-31 14:37:04 -03:00
oobabooga | 9d1dcf880a | General improvements | 2023-03-31 14:27:01 -03:00
oobabooga | f27a66b014 | Bump gradio version (make sure to update; fixes the textbox shrinking vertically once it reaches a certain number of lines) | 2023-03-31 00:42:26 -03:00
Thomas Antony | 8953a262cb | Add llamacpp to requirements.txt | 2023-03-30 11:22:38 +01:00
Alex "mcmonkey" Goodwin | b0f05046b3 | remove duplicate import | 2023-03-27 22:50:37 -07:00
Alex "mcmonkey" Goodwin | 31f04dc615 | Merge branch 'main' into add-train-lora-tab | 2023-03-27 20:03:30 -07:00
dependabot[bot] | 1e02f75f2b | Bump accelerate from 0.17.1 to 0.18.0 (https://github.com/huggingface/accelerate/compare/v0.17.1...v0.18.0) | 2023-03-28 01:19:34 +00:00
oobabooga | 37f11803e3 | Merge pull request #603 from oobabooga/dependabot/pip/rwkv-0.7.1 (Bump rwkv from 0.7.0 to 0.7.1) | 2023-03-27 22:19:08 -03:00
dependabot[bot] | e9c0226b09 | Bump rwkv from 0.7.0 to 0.7.1 (https://github.com/BlinkDL/ChatRWKV) | 2023-03-27 21:05:35 +00:00
dependabot[bot] | 9c96919121 | Bump bitsandbytes from 0.37.1 to 0.37.2 (https://github.com/TimDettmers/bitsandbytes) | 2023-03-27 21:05:19 +00:00
Alex "mcmonkey" Goodwin | e439228ed8 | Merge branch 'main' into add-train-lora-tab | 2023-03-27 08:21:19 -07:00
oobabooga | 9ff6a538b6 | Bump gradio version (make sure to upgrade with `pip install -r requirements.txt --upgrade`) | 2023-03-26 22:11:19 -03:00
Alex "mcmonkey" Goodwin | 566898a79a | initial lora training tab | 2023-03-25 12:08:26 -07:00
oobabooga | 7073e96093 | Add back RWKV dependency #98 | 2023-03-19 12:05:28 -03:00
oobabooga | 86b99006d9 | Remove rwkv dependency | 2023-03-18 10:27:52 -03:00