Commit Graph

1687 Commits

Author | SHA1 | Message | Date
oobabooga | de7dd8b6aa | Add comments | 2023-03-10 10:54:08 -03:00
oobabooga | 113b791aa5 | Merge pull request #219 from deepdiffuser/4bit-multigpu | 2023-03-10 10:52:45 -03:00
    add multi-gpu support for 4bit gptq LLaMA
oobabooga | e461c0b7a0 | Move the import to the top | 2023-03-10 10:51:12 -03:00
deepdiffuser | 9fbd60bf22 | add no_split_module_classes to prevent tensor split error | 2023-03-10 05:30:47 -08:00
deepdiffuser | ab47044459 | add multi-gpu support for 4bit gptq LLaMA | 2023-03-10 04:52:45 -08:00
EliasVincent | 1c0bda33fb | added installation instructions | 2023-03-10 11:47:16 +01:00
rohvani | 2ac2913747 | fix reference issue | 2023-03-09 20:13:23 -08:00
oobabooga | 1d7e893fa1 | Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads | 2023-03-10 00:46:21 -03:00
    download tokenizer when present
oobabooga | 875847bf88 | Consider tokenizer a type of text | 2023-03-10 00:45:28 -03:00
oobabooga | 8ed214001d | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-03-10 00:42:09 -03:00
oobabooga | 249c268176 | Fix the download script for long lists of files on HF | 2023-03-10 00:41:10 -03:00
Ber Zoidberg | ec3de0495c | download tokenizer when present | 2023-03-09 19:08:09 -08:00
rohvani | 5ee376c580 | add LLaMA preset | 2023-03-09 18:31:41 -08:00
rohvani | 826e297b0e | add llama-65b-4bit support & multiple pt paths | 2023-03-09 18:31:32 -08:00
oobabooga | 7c3d1b43c1 | Merge pull request #204 from MichealC0/patch-1 | 2023-03-09 23:04:09 -03:00
    Update README.md
oobabooga | 9849aac0f1 | Don't show .pt models in the list | 2023-03-09 21:54:50 -03:00
oobabooga | 1a3d25f75d | Merge pull request #206 from oobabooga/llama-4bit | 2023-03-09 21:07:32 -03:00
    Add LLaMA 4-bit support
oobabooga | eb0cb9b6df | Update README | 2023-03-09 20:53:52 -03:00
oobabooga | 74102d5ee4 | Insert to the path instead of appending | 2023-03-09 20:51:22 -03:00
oobabooga | 2965aa1625 | Check if the .pt file exists | 2023-03-09 20:48:51 -03:00
oobabooga | d41e3c233b | Update README.md | 2023-03-09 18:02:44 -03:00
oobabooga | fd540b8930 | Use new LLaMA implementation (this will break stuff. I am sorry) | 2023-03-09 17:59:15 -03:00
    https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
EliasVincent | a24fa781f1 | tweaked Whisper parameters | 2023-03-09 21:18:46 +01:00
Elias Vincent Simon | d5efc0659b | Merge branch 'oobabooga:main' into stt-extension | 2023-03-09 21:05:34 +01:00
EliasVincent | 00359ba054 | interactive preview window | 2023-03-09 21:03:49 +01:00
EliasVincent | 7a03d0bda3 | cleanup | 2023-03-09 20:33:00 +01:00
oobabooga | 828a524f9a | Add LLaMA 4-bit support | 2023-03-09 15:50:26 -03:00
oobabooga | 33414478bf | Update README | 2023-03-09 11:13:03 -03:00
oobabooga | e7adf5fe4e | Add Contrastive Search preset #197 | 2023-03-09 10:27:11 -03:00
oobabooga | 557c773df7 | Merge pull request #201 from jtang613/Name_It | 2023-03-09 09:45:47 -03:00
    Lets propose a name besides "Gradio"
oobabooga | 038e90765b | Rename to "Text generation web UI" | 2023-03-09 09:44:08 -03:00
EliasVincent | 4c72e43bcf | first implementation | 2023-03-09 12:46:50 +01:00
Chimdumebi Nebolisa | 4dd14dcab4 | Update README.md | 2023-03-09 10:22:09 +01:00
jtang613 | 807a41cf87 | Lets propose a name besides "Gradio" | 2023-03-08 21:02:25 -05:00
Xan | a2b5383398 | Merge in audio generation only on text stream finish., postpone audioblock autoplay | 2023-03-09 10:48:44 +11:00
    - Keeping simpleaudio until audio block "autoplay" doesn't play previous messages
    - Only generate audio for finished messages
    - Better name for autoplay, clean up comments
    - set default to unlimited wav files. Still a few bugs when wav id resets
    Co-Authored-By: Christoph Hess <9931495+ChristophHess@users.noreply.github.com>
oobabooga | 59b5f7a4b7 | Improve usage of stopping_criteria | 2023-03-08 12:13:40 -03:00
oobabooga | add9330e5e | Bug fixes | 2023-03-08 11:26:29 -03:00
Xan | 738be6dd59 | Fix merge errors and unlimited wav bug | 2023-03-08 22:25:55 +11:00
Xan | 5648a41a27 | Merge branch 'main' of https://github.com/xanthousm/text-generation-webui | 2023-03-08 22:08:54 +11:00
Xan | ad6b699503 | Better TTS with autoplay | 2023-03-08 22:02:17 +11:00
    - Adds "still_streaming" to shared module for extensions to know if generation is complete
    - Changed TTS extension with new options:
        - Show text under the audio widget
        - Automatically play the audio once text generation finishes
        - manage the generated wav files (only keep files for finished generations, optional max file limit)
        - [wip] ability to change voice pitch and speed
    - added 'tensorboard' to requirements, since python sent "tensorboard not found" errors after a fresh installation.
oobabooga | 33fb6aed74 | Minor bug fix | 2023-03-08 03:08:16 -03:00
oobabooga | ad2970374a | Readability improvements | 2023-03-08 03:00:06 -03:00
oobabooga | 72d539dbff | Better separate the FlexGen case | 2023-03-08 02:54:47 -03:00
oobabooga | 0e16c0bacb | Remove redeclaration of a function | 2023-03-08 02:50:49 -03:00
oobabooga | ab50f80542 | New text streaming method (much faster) | 2023-03-08 02:46:35 -03:00
oobabooga | c09f416adb | Change the Naive preset (again) | 2023-03-07 23:17:13 -03:00
oobabooga | 8e89bc596b | Fix encode() for RWKV | 2023-03-07 23:15:46 -03:00
oobabooga | 44e6d82185 | Remove unused imports | 2023-03-07 22:56:15 -03:00
oobabooga | 19a34941ed | Add proper streaming to RWKV | 2023-03-07 18:17:56 -03:00
oobabooga | 8660227e1b | Add top_k to RWKV | 2023-03-07 17:24:28 -03:00