oobabooga
3fb8196e16
Implement "*Is recording a voice message...*" for TTS #303
2023-03-13 22:28:00 -03:00
oobabooga
66b6971b61
Update README
2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f
Document --auto-launch
2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d
Fix GPTQ load_quant call on Windows
2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4
Fix GALACTICA (#285)
2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9
Change a comment
2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9
Remove redundant comments
2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd
Use str(Path) instead of os.path.abspath(Path)
2023-03-13 00:08:01 -03:00
oobabooga
b9e0712b92
Fix Open Assistant
2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba
Clean up silero_tts
...
This should only be used with --no-stream.
The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
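The commit body above describes a shared streaming flag that extension hooks consult. A minimal sketch of that pattern (invented names modeled on the log, not the repository's actual code): the flag is flipped to False *before* the final yield, so a hook like a TTS `output_modifier` acts exactly once, on the complete reply.

```python
# Sketch only: a module-level flag telling extensions whether text is
# still being streamed. It is set to False before the final yield so a
# consumer (e.g. TTS auto-play) fires once, on the finished reply.

class Shared:
    still_streaming = False

shared = Shared()

def generate_stream(tokens):
    shared.still_streaming = True
    output = ""
    for i, tok in enumerate(tokens):
        output += tok
        if i == len(tokens) - 1:
            shared.still_streaming = False  # flip *before* the final yield
        yield output

def output_modifier(text):
    # A hook such as silero_tts would skip partial chunks entirely:
    if shared.still_streaming:
        return text
    return text + " [spoken]"

final = ""
for chunk in generate_stream(["Hel", "lo", "!"]):
    final = output_modifier(chunk)
```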
oobabooga
cebe8b390d
Remove useless "substring_found" variable
2023-03-12 15:50:38 -03:00
oobabooga
4bcd675ccd
Add *Is typing...* to regenerate as well
2023-03-12 15:23:33 -03:00
oobabooga
c7aa51faa6
Use a list of eos_tokens instead of just a number
...
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
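A hypothetical illustration of the change in this commit: generation stops when the model emits *any* id from a list of EOS tokens, rather than comparing against a single number. The token ids below are invented for the example.

```python
# Before: `if tok == eos_token_id` let alternate terminator ids slip
# through, so generation could "ramble" past the intended stop point.
# After: membership in a set of eos ids catches all of them.

def generate(next_tokens, eos_token_ids):
    eos = set(eos_token_ids)
    out = []
    for tok in next_tokens:
        if tok in eos:          # previously: tok == eos_token_id
            break
        out.append(tok)
    return out
```

With a single id `[2]`, the terminator `0` passes through unnoticed; with the list `[0, 1, 2]`, generation stops at the first one seen.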
oobabooga
d8bea766d7
Merge pull request #192 from xanthousm/main
...
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga
fda376d9c3
Use os.path.abspath() instead of str()
2023-03-12 12:41:04 -03:00
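Two nearby commits toggle between `str(Path)` and `os.path.abspath(Path)`. A small sketch of the difference the choice makes: `str()` keeps a relative path relative, while `os.path.abspath()` resolves it against the current working directory.

```python
# str(Path) vs os.path.abspath(Path): same path object, different results.
import os
from pathlib import Path

p = Path("models") / "llama-7b"

relative = str(p)             # stays relative, e.g. 'models/llama-7b'
absolute = os.path.abspath(p) # resolved against the current working dir
```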
HideLord
8403152257
Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on …
2023-03-12 17:28:15 +02:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
...
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
65dda28c9d
Rename --llama-bits to --gptq-bits
2023-03-12 11:19:07 -03:00
oobabooga
fed3617f07
Move LLaMA 4-bit into a separate file
2023-03-12 11:12:34 -03:00
oobabooga
0ac562bdba
Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253
2023-03-12 10:46:16 -03:00
oobabooga
78901d522b
Remove unused imports
2023-03-12 08:59:05 -03:00
Xan
b3e10e47c0
Fix merge conflict in text_generation
...
- `shared.still_streaming = False` needs to be set before the final `yield formatted_outputs`, so the positions of some yields were shifted.
2023-03-12 18:56:35 +11:00
oobabooga
ad14f0e499
Fix regenerate (provisory way)
2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
...
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685
Fix You You You appearing in chat mode
2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
...
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read
2023-03-11 22:10:57 -08:00
oobabooga
341e135036
Various fixes in chat mode
2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8
2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88
Various fixes in chat mode
2023-03-12 02:31:45 -03:00
unknown
433f6350bc
Load and save character files in UTF-8
2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988
Use 'with' statement to better handle streaming memory
2023-03-12 02:04:28 -03:00
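A sketch of the pattern this commit refers to, under the assumption that the streaming source is a generator: wrapping it in a `with` block (here via `contextlib.closing`) guarantees cleanup runs even when the consumer stops iterating early. The names below are illustrative, not the repository's code.

```python
# 'with' + closing() closes the generator on exit, triggering its
# finally block so buffers are released even after an early break.
from contextlib import closing

cleaned_up = []

def token_stream():
    try:
        for tok in ["a", "b", "c", "d"]:
            yield tok
    finally:
        cleaned_up.append(True)  # e.g. free streaming buffers here

received = []
with closing(token_stream()) as stream:
    for tok in stream:
        received.append(tok)
        if tok == "b":
            break  # stop early; the 'with' block still closes the stream
```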
oobabooga
37f0166b2d
Fix memory leak in new streaming (second attempt)
2023-03-11 23:14:49 -03:00
oobabooga
92fe947721
Merge branch 'main' into new-streaming
2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a
Add *Is typing...* to impersonate as well
2023-03-11 10:50:18 -03:00
Xan
96c51973f9
--auto-launch and "Is typing..."
...
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
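A hedged sketch of how a flag like `--auto-launch` is typically registered with `argparse`; the real project may wire it up differently, and the forwarding target mentioned in the comment is an assumption.

```python
# Boolean command-line flag: absent -> False, present -> True.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--auto-launch", action="store_true", default=False,
    help="Open the web UI in the default browser when ready.",
)

args = parser.parse_args(["--auto-launch"])
# The value would then be forwarded to the UI layer, e.g. something
# like Gradio's launch(inbrowser=args.auto_launch).
```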
Xan
33df4bd91f
Merge remote-tracking branch 'upstream/main'
2023-03-11 22:40:47 +11:00
draff
28fd4fc970
Change wording to be consistent with other args
2023-03-10 23:34:13 +00:00
draff
001e638b47
Make it actually work
2023-03-10 23:28:19 +00:00
draff
804486214b
Re-implement --load-in-4bit and update --llama-bits arg description
2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70
remove unnecessary Path()
2023-03-10 22:33:58 +00:00
draff
e6c631aea4
Replace --load-in-4bit with --llama-bits
...
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2 and 3 bit models as well. This commit also fixes a loading issue with .pt files which are not in the root of the models folder
2023-03-10 21:36:45 +00:00
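The commit body above mentions two ideas: a bit-width argument instead of a 4-bit-only boolean, and finding `.pt` checkpoints that are not in the root of the models folder. A hypothetical sketch of the second part (the helper name and filename pattern are invented for illustration):

```python
# Recursive search so quantized .pt files inside model subfolders are
# found, not only files sitting at the top of the models directory.
import tempfile
from pathlib import Path

def find_quantized_checkpoint(models_dir, model_name, bits):
    """Return the first matching quantized .pt file, searching subfolders."""
    matches = sorted(Path(models_dir).rglob(f"*{model_name}*{bits}bit*.pt"))
    return matches[0] if matches else None

# Demo with a throwaway directory layout.
root = Path(tempfile.mkdtemp())
sub = root / "llama-7b"
sub.mkdir()
(sub / "llama-7b-4bit.pt").touch()

found = find_quantized_checkpoint(root, "llama-7b", 4)
```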
oobabooga
026d60bd34
Remove default preset that didn't do anything
2023-03-10 14:01:02 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes
2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb
Minor changes
2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa
Add comments
2023-03-10 10:54:08 -03:00
oobabooga
e461c0b7a0
Move the import to the top
2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22
add no_split_module_classes to prevent tensor split error
2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459
add multi-gpu support for 4bit gptq LLaMA
2023-03-10 04:52:45 -08:00
rohvani
2ac2913747
fix reference issue
2023-03-09 20:13:23 -08:00