jllllll
bef67af23c
Use pre-compiled python module for ExLlama (#2770)
2023-06-24 20:24:17 -03:00
jllllll
a06acd6d09
Update bitsandbytes to 0.39.1 (#2799)
2023-06-21 15:04:45 -03:00
oobabooga
c623e142ac
Bump llama-cpp-python
2023-06-20 00:49:38 -03:00
oobabooga
490a1795f0
Bump peft commit
2023-06-18 16:42:11 -03:00
dependabot[bot]
909d8c6ae3
Bump transformers from 4.30.0 to 4.30.2 (#2695)
2023-06-14 19:56:28 -03:00
oobabooga
ea0eabd266
Bump llama-cpp-python version
2023-06-10 21:59:29 -03:00
oobabooga
0f8140e99d
Bump transformers/accelerate/peft/autogptq
2023-06-09 00:25:13 -03:00
oobabooga
5d515eeb8c
Bump llama-cpp-python wheel
2023-06-06 13:01:15 -03:00
dependabot[bot]
97f3fa843f
Bump llama-cpp-python from 0.1.56 to 0.1.57 (#2537)
2023-06-05 23:45:58 -03:00
oobabooga
4e9937aa99
Bump gradio
2023-06-05 17:29:21 -03:00
jllllll
5216117a63
Fix MacOS incompatibility in requirements.txt (#2485)
2023-06-02 01:46:16 -03:00
oobabooga
b4ad060c1f
Use cuda 11.7 instead of 11.8
2023-06-02 01:04:44 -03:00
oobabooga
d0aca83b53
Add AutoGPTQ wheels to requirements.txt
2023-06-02 00:47:11 -03:00
oobabooga
2cdf525d3b
Bump llama-cpp-python version
2023-05-31 23:29:02 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
jllllll
78dbec4c4e
Add 'scipy' to requirements.txt #2335 (#2343)
Unlisted dependency of bitsandbytes
2023-05-25 23:26:25 -03:00
oobabooga
548f05e106
Add windows bitsandbytes wheel by jllllll
2023-05-25 10:48:22 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter (#2320)
2023-05-25 01:14:13 -03:00
eiery
9967e08b1f
update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264)
2023-05-24 10:25:28 -03:00
oobabooga
1490c0af68
Remove RWKV from requirements.txt
2023-05-23 20:49:20 -03:00
dependabot[bot]
baf75356d4
Bump transformers from 4.29.1 to 4.29.2 (#2268)
2023-05-22 02:50:18 -03:00
jllllll
2aa01e2303
Fix broken version of peft (#2229)
2023-05-20 17:54:51 -03:00
oobabooga
511470a89b
Bump llama-cpp-python version
2023-05-19 12:13:25 -03:00
oobabooga
259020a0be
Bump gradio to 3.31.0
This fixes Google Colab lagging.
2023-05-16 22:21:15 -03:00
dependabot[bot]
ae54d83455
Bump transformers from 4.28.1 to 4.29.1 (#2089)
2023-05-15 19:25:24 -03:00
feeelX
eee986348c
Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058)
2023-05-14 22:41:14 -03:00
dependabot[bot]
a5bb278631
Bump accelerate from 0.18.0 to 0.19.0 (#1925)
2023-05-09 02:17:27 -03:00
oobabooga
b040b4110d
Bump llama-cpp-python version
2023-05-08 00:21:17 -03:00
oobabooga
81be7c2dd4
Specify gradio_client version
2023-05-06 21:50:04 -03:00
oobabooga
60be76f0fc
Revert gradio bump (gallery is broken)
2023-05-03 11:53:30 -03:00
oobabooga
d016c38640
Bump gradio version
2023-05-02 19:19:33 -03:00
dependabot[bot]
280c2f285f
Bump safetensors from 0.3.0 to 0.3.1 (#1720)
2023-05-02 00:42:39 -03:00
oobabooga
56b13d5d48
Bump llama-cpp-python version
2023-05-02 00:41:54 -03:00
oobabooga
2f6e2ddeac
Bump llama-cpp-python version
2023-04-24 03:42:03 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322)
2023-04-21 00:20:33 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200)
2023-04-16 23:26:52 -03:00
dependabot[bot]
4cd2a9d824
Bump transformers from 4.28.0 to 4.28.1 (#1288)
2023-04-16 21:12:57 -03:00
oobabooga
d2ea925fa5
Bump llama-cpp-python to use LlamaCache
2023-04-16 00:53:40 -03:00
catalpaaa
94700cc7a5
Bump gradio to 3.25 (#1089)
2023-04-14 23:45:25 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support (#1103)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
dependabot[bot]
852a5aa13d
Bump bitsandbytes from 0.37.2 to 0.38.1 (#1158)
2023-04-13 21:23:14 -03:00
dependabot[bot]
84576a80d2
Bump llama-cpp-python from 0.1.30 to 0.1.33 (#1157)
2023-04-13 21:17:59 -03:00
oobabooga
2908a51587
Settle for transformers 4.28.0
2023-04-13 21:07:00 -03:00
oobabooga
32d078487e
Add llama-cpp-python to requirements.txt
2023-04-10 10:45:51 -03:00
oobabooga
d272ac46dd
Add Pillow as a requirement
2023-04-08 18:48:46 -03:00
oobabooga
58ed87e5d9
Update requirements.txt
2023-04-06 18:42:54 -03:00
dependabot[bot]
21be80242e
Bump rwkv from 0.7.2 to 0.7.3 (#842)
2023-04-06 17:52:27 -03:00
oobabooga
113f94b61e
Bump transformers (16-bit llama must be reconverted/redownloaded)
2023-04-06 16:04:03 -03:00
oobabooga
59058576b5
Remove unused requirement
2023-04-06 13:28:21 -03:00
oobabooga
03cb44fc8c
Add new llama.cpp library (2048 context, temperature, etc now work)
2023-04-06 13:12:14 -03:00
oobabooga
b2ce7282a1
Use past transformers version #773
2023-04-04 16:11:42 -03:00
dependabot[bot]
ad37f396fc
Bump rwkv from 0.7.1 to 0.7.2 (#747)
2023-04-03 14:29:57 -03:00
dependabot[bot]
18f756ada6
Bump gradio from 3.24.0 to 3.24.1 (#746)
2023-04-03 14:29:37 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
a5c9b7d977
Bump llamacpp version
2023-03-31 15:08:01 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp
2023-03-31 14:37:04 -03:00
oobabooga
9d1dcf880a
General improvements
2023-03-31 14:27:01 -03:00
oobabooga
f27a66b014
Bump gradio version (make sure to update)
This fixes the textbox shrinking vertically once it reaches
a certain number of lines.
2023-03-31 00:42:26 -03:00
Thomas Antony
8953a262cb
Add llamacpp to requirements.txt
2023-03-30 11:22:38 +01:00
Alex "mcmonkey" Goodwin
b0f05046b3
remove duplicate import
2023-03-27 22:50:37 -07:00
Alex "mcmonkey" Goodwin
31f04dc615
Merge branch 'main' into add-train-lora-tab
2023-03-27 20:03:30 -07:00
dependabot[bot]
1e02f75f2b
Bump accelerate from 0.17.1 to 0.18.0
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.1 to 0.18.0.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.17.1...v0.18.0)
---
updated-dependencies:
- dependency-name: accelerate
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-28 01:19:34 +00:00
oobabooga
37f11803e3
Merge pull request #603 from oobabooga/dependabot/pip/rwkv-0.7.1
Bump rwkv from 0.7.0 to 0.7.1
2023-03-27 22:19:08 -03:00
dependabot[bot]
e9c0226b09
Bump rwkv from 0.7.0 to 0.7.1
Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.7.0 to 0.7.1.
- [Release notes](https://github.com/BlinkDL/ChatRWKV/releases)
- [Commits](https://github.com/BlinkDL/ChatRWKV/commits)
---
updated-dependencies:
- dependency-name: rwkv
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 21:05:35 +00:00
dependabot[bot]
9c96919121
Bump bitsandbytes from 0.37.1 to 0.37.2
Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.1 to 0.37.2.
- [Release notes](https://github.com/TimDettmers/bitsandbytes/releases)
- [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md)
- [Commits](https://github.com/TimDettmers/bitsandbytes/commits)
---
updated-dependencies:
- dependency-name: bitsandbytes
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 21:05:19 +00:00
Alex "mcmonkey" Goodwin
e439228ed8
Merge branch 'main' into add-train-lora-tab
2023-03-27 08:21:19 -07:00
oobabooga
9ff6a538b6
Bump gradio version
Make sure to upgrade with
`pip install -r requirements.txt --upgrade`
2023-03-26 22:11:19 -03:00
Alex "mcmonkey" Goodwin
566898a79a
initial lora training tab
2023-03-25 12:08:26 -07:00
oobabooga
7073e96093
Add back RWKV dependency #98
2023-03-19 12:05:28 -03:00
oobabooga
86b99006d9
Remove rwkv dependency
2023-03-18 10:27:52 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
23a5e886e1
The LLaMA PR has been merged into transformers
https://github.com/huggingface/transformers/pull/21955
The tokenizer class has been changed from "LLaMATokenizer" to "LlamaTokenizer".
This change must be applied in every tokenizer_config.json that you had for LLaMA so far.
2023-03-16 11:18:32 -03:00
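The tokenizer rename described in the commit above can be applied with a small script. A minimal sketch, assuming the standard tokenizer_config.json layout; the helper name and path handling here are illustrative, not part of the project:

```python
import json
from pathlib import Path

def fix_tokenizer_class(config_path):
    """Replace the legacy LLaMA tokenizer class name in place.

    Returns True if the file was modified, False if it was already
    up to date. Point this at each tokenizer_config.json you have
    for a LLaMA model.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        path.write_text(json.dumps(config, indent=2))
        return True
    return False
```

Running it over each model directory is idempotent: the file is rewritten only when the old class name is present.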
oobabooga
29b7c5ac0c
Sort the requirements
2023-03-15 12:40:03 -03:00
oobabooga
693b53d957
Merge branch 'main' into HideLord-main
2023-03-15 12:08:56 -03:00
dependabot[bot]
02d407542c
Bump accelerate from 0.17.0 to 0.17.1
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.17.0 to 0.17.1.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.17.0...v0.17.1)
---
updated-dependencies:
- dependency-name: accelerate
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:40:42 +00:00
oobabooga
d685332c10
Merge pull request #307 from oobabooga/dependabot/pip/bitsandbytes-0.37.1
Bump bitsandbytes from 0.37.0 to 0.37.1
2023-03-13 22:39:59 -03:00
dependabot[bot]
df83088593
Bump bitsandbytes from 0.37.0 to 0.37.1
Bumps [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) from 0.37.0 to 0.37.1.
- [Release notes](https://github.com/TimDettmers/bitsandbytes/releases)
- [Changelog](https://github.com/TimDettmers/bitsandbytes/blob/main/CHANGELOG.md)
- [Commits](https://github.com/TimDettmers/bitsandbytes/commits)
---
updated-dependencies:
- dependency-name: bitsandbytes
dependency-type: direct:production
update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:36:18 +00:00
dependabot[bot]
715c3ecba6
Bump rwkv from 0.3.1 to 0.4.2
Bumps [rwkv](https://github.com/BlinkDL/ChatRWKV) from 0.3.1 to 0.4.2.
- [Release notes](https://github.com/BlinkDL/ChatRWKV/releases)
- [Commits](https://github.com/BlinkDL/ChatRWKV/commits)
---
updated-dependencies:
- dependency-name: rwkv
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-03-14 01:36:02 +00:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main
2023-03-13 19:50:08 +02:00
Luis Cosio
435a69e357
Fix for issue #282
RuntimeError: Tensors must have same number of dimensions: got 3 and 4
2023-03-13 11:41:35 -06:00
HideLord
683556f411
Adding markdown support and slight refactoring.
2023-03-12 21:34:09 +02:00
oobabooga
441e993c51
Bump accelerate, RWKV and safetensors
2023-03-12 14:25:14 -03:00
oobabooga
3c25557ef0
Add tqdm to requirements.txt
2023-03-12 08:48:16 -03:00
oobabooga
501afbc234
Add requests to requirements.txt
2023-03-11 14:47:30 -03:00
oobabooga
fd540b8930
Use new LLaMA implementation (this will break stuff. I am sorry)
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
2023-03-09 17:59:15 -03:00
oobabooga
8660227e1b
Add top_k to RWKV
2023-03-07 17:24:28 -03:00
oobabooga
153dfeb4dd
Add --rwkv-cuda-on parameter, bump rwkv version
2023-03-06 20:12:54 -03:00
oobabooga
145c725c39
Bump RWKV version
2023-03-05 16:28:21 -03:00
oobabooga
5492e2e9f8
Add sentencepiece
2023-03-05 10:02:24 -03:00
oobabooga
c33715ad5b
Move towards HF LLaMA implementation
2023-03-05 01:20:31 -03:00
oobabooga
bcea196c9d
Bump flexgen version
2023-03-02 12:03:57 -03:00
oobabooga
7a9b4407b0
Settle for 0.0.6 for now
2023-03-01 17:37:14 -03:00
oobabooga
f351dce032
Keep rwkv up to date
2023-03-01 17:36:16 -03:00
oobabooga
9c86a1cd4a
Add RWKV pip package
2023-03-01 11:42:49 -03:00
oobabooga
b16f097466
Add FlexGen to requirements.txt
2023-02-27 08:58:07 -03:00
oobabooga
4548227fb5
Downgrade gradio version (file uploads are broken in 3.19.1)
2023-02-25 22:59:02 -03:00
oobabooga
32f40f3b42
Bump gradio version to 3.19.1
2023-02-25 17:20:03 -03:00
oobabooga
fe1771157f
Properly scrape huggingface for download links (for #122)
2023-02-24 14:06:42 -03:00
oobabooga
55bb5e5ef0
Bump accelerate version
2023-02-18 22:15:47 -03:00
oobabooga
a55e8836f6
Bump gradio version
It looks uglier, but the old one was bugged and unstable.
2023-02-15 20:20:56 -03:00
oobabooga
3277b751f5
Add softprompt support (for real this time)
Is this too much voodoo for our purposes?
2023-02-13 15:25:16 -03:00
oobabooga
b4fc8dfa8f
Add safetensors version
2023-02-04 18:58:17 -03:00
oobabooga
3dbebe30b1
Remove deepspeed requirement (only works on Linux for now)
2023-02-03 20:07:13 -03:00
oobabooga
03f084f311
Add safetensors support
2023-02-03 18:36:32 -03:00
oobabooga
03ebfba0fb
Bump bitsandbytes version
2023-02-03 09:29:31 -03:00
oobabooga
224be31a74
Use main bs4 package
2023-02-02 12:20:58 -03:00
oobabooga
cecaebc291
Add bs4 requirement (fixes #47)
2023-02-02 12:18:32 -03:00
oobabooga
d6b2d68527
Remove redundant requirements
2023-02-02 10:40:09 -03:00
81300
a97afa6965
Add DeepSpeed ZeRO-3 integration
2023-02-01 18:48:13 +02:00
oobabooga
414fa9d161
Revert transformers version (gpt-j and opt are broken)
2023-01-26 23:01:13 -03:00
Silver267
ad191e295b
bump transformers
idk but there seems to be a lot of fixes in the new version, and it's working according to my local tests.
2023-01-25 23:22:20 -05:00
oobabooga
fc73188ec7
Allow specifying your own profile picture in chat mode
2023-01-25 19:37:44 -03:00
oobabooga
3fa14befc5
Bump the gradio version, add back the queue
2023-01-25 16:10:35 -03:00
oobabooga
4067cecf67
Bump bitsandbytes version
2023-01-20 12:51:49 -03:00
oobabooga
fed7233ff4
Add script to download models
2023-01-06 19:57:31 -03:00
oobabooga
874cd6ff3f
Add the requirements.txt file (duh)
2023-01-06 00:25:33 -03:00