oobabooga
501afbc234
Add requests to requirements.txt
2023-03-11 14:47:30 -03:00
oobabooga
8f8da6707d
Minor style changes to silero_tts
2023-03-11 11:17:13 -03:00
oobabooga
2743dd736a
Add *Is typing...* to impersonate as well
2023-03-11 10:50:18 -03:00
Xan
96c51973f9
--auto-launch and "Is typing..."
...
- Added `--auto-launch` arg to open the web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
Xan
33df4bd91f
Merge remote-tracking branch 'upstream/main'
2023-03-11 22:40:47 +11:00
Xan
b8f7d34c1d
Undo changes to requirements
...
Needing to manually install tensorboard might be a Windows-only problem; it can easily be solved manually.
2023-03-11 17:05:09 +11:00
Xan
0dfac4b777
Working html autoplay, clean up, improve wav naming
...
- New autoplay using the HTML audio tag; autoplay is removed from the old message when new input is provided
- Add voice pitch and speed controls
- Group settings together
- Use name + conversation history to match wavs to messages, minimizing problems when changing characters
Current minor bugs:
- Gradio seems to cache the audio files, so using "clear history" and generating new messages will play the old audio (the new messages are saved correctly). Gradio will clear the cache and use the correct audio after a few messages or after a page refresh.
- Switching characters does not immediately update the message ID used for the audio. The ID is updated after the first new message, but that message will use the wrong ID.
2023-03-11 16:34:59 +11:00
draff
28fd4fc970
Change wording to be consistent with other args
2023-03-10 23:34:13 +00:00
draff
001e638b47
Make it actually work
2023-03-10 23:28:19 +00:00
draff
804486214b
Re-implement --load-in-4bit and update --llama-bits arg description
2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70
remove unnecessary Path()
2023-03-10 22:33:58 +00:00
draff
e6c631aea4
Replace --load-in-4bit with --llama-bits
...
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2- and 3-bit models as well. This commit also fixes a loading issue with .pt files that are not in the root of the models folder.
2023-03-10 21:36:45 +00:00
oobabooga
026d60bd34
Remove default preset that didn't do anything
2023-03-10 14:01:02 -03:00
oobabooga
e01da4097c
Merge pull request #210 from rohvani/pt-path-changes
...
Add llama-65b-4bit.pt support
2023-03-10 11:04:56 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes
2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb
Minor changes
2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa
Add comments
2023-03-10 10:54:08 -03:00
oobabooga
113b791aa5
Merge pull request #219 from deepdiffuser/4bit-multigpu
...
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 10:52:45 -03:00
oobabooga
e461c0b7a0
Move the import to the top
2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22
add no_split_module_classes to prevent tensor split error
2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 04:52:45 -08:00
EliasVincent
1c0bda33fb
added installation instructions
2023-03-10 11:47:16 +01:00
rohvani
2ac2913747
fix reference issue
2023-03-09 20:13:23 -08:00
oobabooga
1d7e893fa1
Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads
...
download tokenizer when present
2023-03-10 00:46:21 -03:00
oobabooga
875847bf88
Consider tokenizer a type of text
2023-03-10 00:45:28 -03:00
oobabooga
8ed214001d
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-10 00:42:09 -03:00
oobabooga
249c268176
Fix the download script for long lists of files on HF
2023-03-10 00:41:10 -03:00
Ber Zoidberg
ec3de0495c
download tokenizer when present
2023-03-09 19:08:09 -08:00
rohvani
5ee376c580
add LLaMA preset
2023-03-09 18:31:41 -08:00
rohvani
826e297b0e
add llama-65b-4bit support & multiple pt paths
2023-03-09 18:31:32 -08:00
oobabooga
7c3d1b43c1
Merge pull request #204 from MichealC0/patch-1
...
Update README.md
2023-03-09 23:04:09 -03:00
oobabooga
9849aac0f1
Don't show .pt models in the list
2023-03-09 21:54:50 -03:00
oobabooga
1a3d25f75d
Merge pull request #206 from oobabooga/llama-4bit
...
Add LLaMA 4-bit support
2023-03-09 21:07:32 -03:00
oobabooga
eb0cb9b6df
Update README
2023-03-09 20:53:52 -03:00
oobabooga
74102d5ee4
Insert to the path instead of appending
2023-03-09 20:51:22 -03:00
oobabooga
2965aa1625
Check if the .pt file exists
2023-03-09 20:48:51 -03:00
oobabooga
d41e3c233b
Update README.md
2023-03-09 18:02:44 -03:00
oobabooga
fd540b8930
Use new LLaMA implementation (this will break stuff. I am sorry)
...
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
2023-03-09 17:59:15 -03:00
EliasVincent
a24fa781f1
tweaked Whisper parameters
2023-03-09 21:18:46 +01:00
Elias Vincent Simon
d5efc0659b
Merge branch 'oobabooga:main' into stt-extension
2023-03-09 21:05:34 +01:00
EliasVincent
00359ba054
interactive preview window
2023-03-09 21:03:49 +01:00
EliasVincent
7a03d0bda3
cleanup
2023-03-09 20:33:00 +01:00
oobabooga
828a524f9a
Add LLaMA 4-bit support
2023-03-09 15:50:26 -03:00
oobabooga
33414478bf
Update README
2023-03-09 11:13:03 -03:00
oobabooga
e7adf5fe4e
Add Contrastive Search preset #197
2023-03-09 10:27:11 -03:00
oobabooga
557c773df7
Merge pull request #201 from jtang613/Name_It
...
Let's propose a name besides "Gradio"
2023-03-09 09:45:47 -03:00
oobabooga
038e90765b
Rename to "Text generation web UI"
2023-03-09 09:44:08 -03:00
EliasVincent
4c72e43bcf
first implementation
2023-03-09 12:46:50 +01:00
Chimdumebi Nebolisa
4dd14dcab4
Update README.md
2023-03-09 10:22:09 +01:00
jtang613
807a41cf87
Let's propose a name besides "Gradio"
2023-03-08 21:02:25 -05:00