5216117a63 | 2023-06-02 01:46:16 -03:00 | jllllll | Fix MacOS incompatibility in requirements.txt (#2485)
b4ad060c1f | 2023-06-02 01:04:44 -03:00 | oobabooga | Use cuda 11.7 instead of 11.8
d0aca83b53 | 2023-06-02 00:47:11 -03:00 | oobabooga | Add AutoGPTQ wheels to requirements.txt
2cdf525d3b | 2023-05-31 23:29:02 -03:00 | oobabooga | Bump llama-cpp-python version
204731952a | 2023-05-29 10:20:18 -03:00 | Honkware | Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
78dbec4c4e | 2023-05-25 23:26:25 -03:00 | jllllll | Add 'scipy' to requirements.txt #2335 (#2343)
    Unlisted dependency of bitsandbytes
548f05e106 | 2023-05-25 10:48:22 -03:00 | oobabooga | Add windows bitsandbytes wheel by jllllll
361451ba60 | 2023-05-25 01:14:13 -03:00 | oobabooga | Add --load-in-4bit parameter (#2320)
9967e08b1f | 2023-05-24 10:25:28 -03:00 | eiery | update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264)
1490c0af68 | 2023-05-23 20:49:20 -03:00 | oobabooga | Remove RWKV from requirements.txt
baf75356d4 | 2023-05-22 02:50:18 -03:00 | dependabot[bot] | Bump transformers from 4.29.1 to 4.29.2 (#2268)
2aa01e2303 | 2023-05-20 17:54:51 -03:00 | jllllll | Fix broken version of peft (#2229)
511470a89b | 2023-05-19 12:13:25 -03:00 | oobabooga | Bump llama-cpp-python version
259020a0be | 2023-05-16 22:21:15 -03:00 | oobabooga | Bump gradio to 3.31.0
    This fixes Google Colab lagging.
ae54d83455 | 2023-05-15 19:25:24 -03:00 | dependabot[bot] | Bump transformers from 4.28.1 to 4.29.1 (#2089)
eee986348c | 2023-05-14 22:41:14 -03:00 | feeelX | Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058)
a5bb278631 | 2023-05-09 02:17:27 -03:00 | dependabot[bot] | Bump accelerate from 0.18.0 to 0.19.0 (#1925)
b040b4110d | 2023-05-08 00:21:17 -03:00 | oobabooga | Bump llama-cpp-python version
81be7c2dd4 | 2023-05-06 21:50:04 -03:00 | oobabooga | Specify gradio_client version
60be76f0fc | 2023-05-03 11:53:30 -03:00 | oobabooga | Revert gradio bump (gallery is broken)
d016c38640 | 2023-05-02 19:19:33 -03:00 | oobabooga | Bump gradio version
280c2f285f | 2023-05-02 00:42:39 -03:00 | dependabot[bot] | Bump safetensors from 0.3.0 to 0.3.1 (#1720)
56b13d5d48 | 2023-05-02 00:41:54 -03:00 | oobabooga | Bump llama-cpp-python version
2f6e2ddeac | 2023-04-24 03:42:03 -03:00 | oobabooga | Bump llama-cpp-python version
c4f4f41389 | 2023-04-21 00:20:33 -03:00 | oobabooga | Add an "Evaluate" tab to calculate the perplexities of models (#1322)
39099663a0 | 2023-04-16 23:26:52 -03:00 | oobabooga | Add 4-bit LoRA support (#1200)
4cd2a9d824 | 2023-04-16 21:12:57 -03:00 | dependabot[bot] | Bump transformers from 4.28.0 to 4.28.1 (#1288)
d2ea925fa5 | 2023-04-16 00:53:40 -03:00 | oobabooga | Bump llama-cpp-python to use LlamaCache
94700cc7a5 | 2023-04-14 23:45:25 -03:00 | catalpaaa | Bump gradio to 3.25 (#1089)
64e3b44e0f | 2023-04-14 14:52:06 -03:00 | Alex "mcmonkey" Goodwin | initial multi-lora support (#1103)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
852a5aa13d | 2023-04-13 21:23:14 -03:00 | dependabot[bot] | Bump bitsandbytes from 0.37.2 to 0.38.1 (#1158)
84576a80d2 | 2023-04-13 21:17:59 -03:00 | dependabot[bot] | Bump llama-cpp-python from 0.1.30 to 0.1.33 (#1157)
2908a51587 | 2023-04-13 21:07:00 -03:00 | oobabooga | Settle for transformers 4.28.0
32d078487e | 2023-04-10 10:45:51 -03:00 | oobabooga | Add llama-cpp-python to requirements.txt
d272ac46dd | 2023-04-08 18:48:46 -03:00 | oobabooga | Add Pillow as a requirement
58ed87e5d9 | 2023-04-06 18:42:54 -03:00 | oobabooga | Update requirements.txt
21be80242e | 2023-04-06 17:52:27 -03:00 | dependabot[bot] | Bump rwkv from 0.7.2 to 0.7.3 (#842)
113f94b61e | 2023-04-06 16:04:03 -03:00 | oobabooga | Bump transformers (16-bit llama must be reconverted/redownloaded)
59058576b5 | 2023-04-06 13:28:21 -03:00 | oobabooga | Remove unused requirement
03cb44fc8c | 2023-04-06 13:12:14 -03:00 | oobabooga | Add new llama.cpp library (2048 context, temperature, etc now work)
b2ce7282a1 | 2023-04-04 16:11:42 -03:00 | oobabooga | Use past transformers version #773
ad37f396fc | 2023-04-03 14:29:57 -03:00 | dependabot[bot] | Bump rwkv from 0.7.1 to 0.7.2 (#747)
18f756ada6 | 2023-04-03 14:29:37 -03:00 | dependabot[bot] | Bump gradio from 3.24.0 to 3.24.1 (#746)
2157bb4319 | 2023-04-02 20:34:25 -03:00 | TheTerrasque | New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
    This doesn't break backward compatibility with JSON characters.
a5c9b7d977 | 2023-03-31 15:08:01 -03:00 | oobabooga | Bump llamacpp version
4d98623041 | 2023-03-31 14:37:04 -03:00 | oobabooga | Merge branch 'main' into feature/llamacpp
9d1dcf880a | 2023-03-31 14:27:01 -03:00 | oobabooga | General improvements
f27a66b014 | 2023-03-31 00:42:26 -03:00 | oobabooga | Bump gradio version (make sure to update)
    This fixes the textbox shrinking vertically once it reaches a certain number of lines.
8953a262cb | 2023-03-30 11:22:38 +01:00 | Thomas Antony | Add llamacpp to requirements.txt
b0f05046b3 | 2023-03-27 22:50:37 -07:00 | Alex "mcmonkey" Goodwin | remove duplicate import