Commit Graph

1914 Commits

Author SHA1 Message Date
Aleksey Smolenchuk
3f7c3d6559 No need to set encoding on binary read 2023-03-11 22:10:57 -08:00
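The commit above reflects a Python rule: files opened in binary mode return raw bytes, and `open()` rejects an `encoding=` argument outright in that mode. A minimal illustration of the behavior (not the repo's actual code):

```python
import os
import tempfile

# Write some raw bytes to a temporary file.
path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    f.write(b"\x00\x01binary payload")

# Binary mode yields bytes; no text decoding is involved.
with open(path, "rb") as f:
    data = f.read()
assert isinstance(data, bytes)

# Passing encoding= in binary mode raises ValueError in Python 3.
try:
    open(path, "rb", encoding="utf-8")
except ValueError as e:
    print("rejected:", e)

os.remove(path)
```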
oobabooga
3437de686c Merge pull request #189 from oobabooga/new-streaming
New streaming method (much faster)
2023-03-12 03:01:26 -03:00
oobabooga
341e135036 Various fixes in chat mode 2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700 Load and save chat history in utf-8 2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88 Various fixes in chat mode 2023-03-12 02:31:45 -03:00
unknown
433f6350bc Load and save character files in UTF-8 2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988 Use 'with' statement to better handle streaming memory 2023-03-12 02:04:28 -03:00
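The streaming-memory commits above use a `with` statement so that the stream's resources are released even when the consumer exits early. A generic sketch of that pattern using `contextlib` (purely illustrative, not the repo's implementation):

```python
from contextlib import contextmanager

released = []

@contextmanager
def token_stream():
    # Stand-in for a generation stream holding large buffers.
    buffer = ["Hello", " world"]
    try:
        yield iter(buffer)
    finally:
        # Runs even if the consumer breaks out early or raises,
        # so the buffers are always released.
        released.append(True)

with token_stream() as stream:
    for token in stream:
        if token == "Hello":
            break  # consumer bails out early

assert released == [True]
```

Without the `finally` (or an equivalent `with`), an early `break` would leave the stream's cleanup code unexecuted, which is the kind of leak the fixes above target.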
oobabooga
37f0166b2d Fix memory leak in new streaming (second attempt) 2023-03-11 23:14:49 -03:00
HideLord
def97f658c Small patch to fix loading of character jsons. Now it correctly reads non-ascii characters on Windows. 2023-03-12 02:54:22 +02:00
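Several commits above address the same Windows pitfall: `open()` defaults to the locale encoding (often cp1252), so non-ASCII characters in character or history files break unless `encoding="utf-8"` is passed explicitly. A minimal sketch of the pattern, with hypothetical data:

```python
import json
import os
import tempfile

# Hypothetical character data containing non-ASCII text.
character = {"name": "Example", "greeting": "よろしくね!"}
path = os.path.join(tempfile.mkdtemp(), "character.json")

# Explicit utf-8 keeps non-ASCII intact regardless of the OS locale.
with open(path, "w", encoding="utf-8") as f:
    json.dump(character, f, ensure_ascii=False)

with open(path, "r", encoding="utf-8") as f:
    loaded = json.load(f)

assert loaded == character
```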
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
195e99d0b6 Add llama_prompts extension 2023-03-11 16:11:15 -03:00
oobabooga
501afbc234 Add requests to requirements.txt 2023-03-11 14:47:30 -03:00
oobabooga
8f8da6707d Minor style changes to silero_tts 2023-03-11 11:17:13 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
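The `--auto-launch` flag described above is an ordinary boolean argparse switch; in a Gradio app it would typically be forwarded to `launch(inbrowser=...)`. A sketch assuming that setup (only the argument name comes from the commit):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--auto-launch",
    action="store_true",
    default=False,
    help="Open the web UI in the default browser when it is ready.",
)

args = parser.parse_args(["--auto-launch"])
print(args.auto_launch)  # True

# In a Gradio app this would typically become:
#   demo.launch(inbrowser=args.auto_launch)
```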
Xan
33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
Xan
b8f7d34c1d Undo changes to requirements
Needing to manually install tensorboard might be a Windows-only problem, and it can easily be solved manually.
2023-03-11 17:05:09 +11:00
Xan
0dfac4b777 Working html autoplay, clean up, improve wav naming
- New autoplay using html tag, removed from old message when new input provided
- Add voice pitch and speed control
- Group settings together
- Use name + conversation history to match wavs to messages, minimize problems when changing characters

Current minor bugs:
- Gradio seems to cache the audio files, so using "clear history" and generating new messages will play the old audio (the new messages are saving correctly). Gradio will clear cache and use correct audio after a few messages or after a page refresh.
- Switching characters does not immediately update the message ID used for the audio. ID is updated after the first new message, but that message will use the wrong ID
2023-03-11 16:34:59 +11:00
draff
28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff
001e638b47 Make it actually work 2023-03-10 23:28:19 +00:00
draff
804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70 remove unnecessary Path() 2023-03-10 22:33:58 +00:00
draff
e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2 and 3 bit models as well. This commit also fixes a loading issue with .pt files which are not in the root of the models folder
2023-03-10 21:36:45 +00:00
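The commit body above explains the design choice: an integer `--llama-bits` argument is more flexible than a boolean `--load-in-4bit`, since it also admits 2- and 3-bit models. A sketch of how such an argument might be declared (the `choices` list and default are assumptions, not the repo's exact code):

```python
import argparse

parser = argparse.ArgumentParser()
# An integer bit width generalizes a boolean 4-bit switch:
parser.add_argument(
    "--llama-bits",
    type=int,
    default=0,
    choices=[0, 2, 3, 4],
    help="Load a pre-quantized LLaMA model with the given bit width (0 = disabled).",
)

args = parser.parse_args(["--llama-bits", "4"])
print(args.llama_bits)  # 4
```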
oobabooga
026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00
oobabooga
e01da4097c Merge pull request #210 from rohvani/pt-path-changes
Add llama-65b-4bit.pt support
2023-03-10 11:04:56 -03:00
oobabooga
e9dbdafb14 Merge branch 'main' into pt-path-changes 2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb Minor changes 2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa Add comments 2023-03-10 10:54:08 -03:00
oobabooga
113b791aa5 Merge pull request #219 from deepdiffuser/4bit-multigpu
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 10:52:45 -03:00
oobabooga
e461c0b7a0 Move the import to the top 2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22 add no_split_module_classes to prevent tensor split error 2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459 add multi-gpu support for 4bit gptq LLaMA 2023-03-10 04:52:45 -08:00
EliasVincent
1c0bda33fb added installation instructions 2023-03-10 11:47:16 +01:00
rohvani
2ac2913747 fix reference issue 2023-03-09 20:13:23 -08:00
oobabooga
1d7e893fa1 Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads
download tokenizer when present
2023-03-10 00:46:21 -03:00
oobabooga
875847bf88 Consider tokenizer a type of text 2023-03-10 00:45:28 -03:00
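The two download-script commits above classify repository files so that tokenizer files are fetched along with the other text files. A hypothetical classifier in that spirit (the file-name patterns are assumptions, not the script's actual rules):

```python
def is_text_file(fname: str) -> bool:
    """Treat config/tokenizer files as text; model weights as binary."""
    if fname.startswith("tokenizer"):
        return True  # e.g. tokenizer.model, tokenizer_config.json
    return fname.endswith((".json", ".txt", ".py", ".md"))

files = [
    "config.json",
    "tokenizer.model",
    "tokenizer_config.json",
    "pytorch_model-00001-of-00002.bin",
]
text_files = [f for f in files if is_text_file(f)]
print(text_files)  # ['config.json', 'tokenizer.model', 'tokenizer_config.json']
```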
oobabooga
8ed214001d Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-10 00:42:09 -03:00
oobabooga
249c268176 Fix the download script for long lists of files on HF 2023-03-10 00:41:10 -03:00
Ber Zoidberg
ec3de0495c download tokenizer when present 2023-03-09 19:08:09 -08:00
rohvani
5ee376c580 add LLaMA preset 2023-03-09 18:31:41 -08:00
rohvani
826e297b0e add llama-65b-4bit support & multiple pt paths 2023-03-09 18:31:32 -08:00
oobabooga
7c3d1b43c1 Merge pull request #204 from MichealC0/patch-1
Update README.md
2023-03-09 23:04:09 -03:00
oobabooga
9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga
1a3d25f75d Merge pull request #206 from oobabooga/llama-4bit
Add LLaMA 4-bit support
2023-03-09 21:07:32 -03:00
oobabooga
eb0cb9b6df Update README 2023-03-09 20:53:52 -03:00
oobabooga
74102d5ee4 Insert to the path instead of appending 2023-03-09 20:51:22 -03:00
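"Insert to the path instead of appending" above refers to `sys.path` priority: entries inserted at index 0 are searched first, so a local checkout shadows an installed package of the same name, whereas an appended entry is searched last. A small illustration (the directory name is hypothetical):

```python
import sys

repo_dir = "/opt/local-checkout"  # hypothetical local repository path

# sys.path.append(repo_dir) would be searched last, letting an
# installed package with the same module name win.
# insert(0, ...) makes the local checkout take priority instead.
sys.path.insert(0, repo_dir)

assert sys.path[0] == repo_dir
```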
oobabooga
2965aa1625 Check if the .pt file exists 2023-03-09 20:48:51 -03:00
oobabooga
d41e3c233b Update README.md 2023-03-09 18:02:44 -03:00
oobabooga
fd540b8930 Use new LLaMA implementation (this will break stuff. I am sorry)
https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model
2023-03-09 17:59:15 -03:00
EliasVincent
a24fa781f1 tweaked Whisper parameters 2023-03-09 21:18:46 +01:00