a5d5bb9390 | oobabooga | 2023-05-21 12:11:59 -03:00 | Fix silero tts autoplay
78b2478d9c | matatonic | 2023-05-20 23:32:34 -03:00 | assistant: space fix, system: prompt fix (#2219)
05593a7834 | oobabooga | 2023-05-20 23:22:36 -03:00 | Minor bug fix
9c53517d2c | Luis Lopez | 2023-05-20 22:27:22 -03:00 | Fix superbooga error when querying empty DB (Issue #2160) (#2212)
ab6acddcc5 | Matthew McAllister | 2023-05-20 21:48:45 -03:00 | Add Save/Delete character buttons (#1870)
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
c5af549d4b | oobabooga | 2023-05-20 18:42:17 -03:00 | Add chat API (#2233)
2aa01e2303 | jllllll | 2023-05-20 17:54:51 -03:00 | Fix broken version of peft (#2229)
159eccac7e | oobabooga | 2023-05-19 23:20:42 -03:00 | Update Audio-Notification.md
a3e9769e31 | HappyWorldGames | 2023-05-19 23:16:06 -03:00 | Added an audible notification after text generation in web. (#1277)
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
1b52bddfcc | Konstantin Gukov | 2023-05-19 14:46:18 -03:00 | Mitigate UnboundLocalError (#2136)
50c70e28f0 | Alex "mcmonkey" Goodwin | 2023-05-19 12:58:54 -03:00 | Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108)
511470a89b | oobabooga | 2023-05-19 12:13:25 -03:00 | Bump llama-cpp-python version
a9733d4a99 | Carl Kenner | 2023-05-19 11:46:13 -03:00 | Metharme context fix (#2153)
c86231377b | Carl Kenner | 2023-05-19 11:42:41 -03:00 | Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159)
c98d6ad27f | Mykeehu | 2023-05-19 11:31:06 -03:00 | Create chat_style-messenger.css (#2187)
  Add Messenger-like style for chat mode
499c2e009e | oobabooga | 2023-05-19 11:20:35 -03:00 | Remove problematic regex from models/config.yaml
9d5025f531 | oobabooga | 2023-05-19 11:20:08 -03:00 | Improve error handling while loading GPTQ models
39dab18307 | oobabooga | 2023-05-19 11:19:34 -03:00 | Add a timeout to download-model.py requests
f052ab9c8f | oobabooga | 2023-05-17 23:17:44 -03:00 | Fix setting pre_layer from within the ui
b667ffa51d | oobabooga | 2023-05-17 16:22:56 -03:00 | Simplify GPTQ_loader.py
ef10ffc6b4 | oobabooga | 2023-05-17 16:14:54 -03:00 | Add various checks to model loading functions
abd361b3a0 | oobabooga | 2023-05-17 11:33:43 -03:00 | Minor change
21ecc3701e | oobabooga | 2023-05-17 11:23:13 -03:00 | Avoid a name conflict
fb91c07191 | oobabooga | 2023-05-17 11:16:37 -03:00 | Minor bug fix
1a8151a2b6 | oobabooga | 2023-05-17 11:12:12 -03:00 | Add AutoGPTQ support (basic) (#2132)
10cf7831f7 | oobabooga | 2023-05-17 10:45:29 -03:00 | Update Extensions.md
1f50dbe352 | Alex "mcmonkey" Goodwin | 2023-05-17 10:41:09 -03:00 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100)
fd743a0207 | oobabooga | 2023-05-17 02:34:29 -03:00 | Small change
aeb1b7a9c5 | LoopLooter | 2023-05-17 02:30:45 -03:00 | feature to save prompts with custom names (#1583)
  Co-authored-by: LoopLooter <looplooter>
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
c9c6aa2b6e | oobabooga | 2023-05-17 02:04:37 -03:00 | Update docs/Extensions.md
85f74961f9 | oobabooga | 2023-05-17 01:57:51 -03:00 | Update "Interface mode" tab
9e558cba9b | oobabooga | 2023-05-17 01:43:32 -03:00 | Update docs/Extensions.md
687f21f965 | oobabooga | 2023-05-17 01:41:01 -03:00 | Update docs/Extensions.md
8f85d84e08 | oobabooga | 2023-05-17 01:32:42 -03:00 | Merge remote-tracking branch 'refs/remotes/origin/main'
ce21804ec7 | oobabooga | 2023-05-17 01:31:56 -03:00 | Allow extensions to define a new tab
acf3dbbcc5 | ye7iaserag | 2023-05-17 01:08:22 -03:00 | Allow extensions to have custom display_name (#1242)
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
ad0b71af11 | oobabooga | 2023-05-17 00:37:34 -03:00 | Add missing file
a84f499718 | oobabooga | 2023-05-17 00:30:54 -03:00 | Allow extensions to define custom CSS and JS
824fa8fc0e | oobabooga | 2023-05-16 22:27:43 -03:00 | Attempt at making interface restart more robust
259020a0be | oobabooga | 2023-05-16 22:21:15 -03:00 | Bump gradio to 3.31.0
  This fixes Google Colab lagging.
458a627ab9 | pixel | 2023-05-16 20:21:36 -03:00 | fix: elevenlabs cloned voices do not show up in webui after entering API key (#2107)
7584d46c29 | oobabooga | 2023-05-16 19:52:22 -03:00 | Refactor models.py (#2113)
5cd6dd4287 | oobabooga | 2023-05-16 17:35:49 -03:00 | Fix no-mmap bug
89e37626ab | oobabooga | 2023-05-16 17:22:59 -03:00 | Reorganize chat settings tab
d205ec9706 | Forkoz | 2023-05-16 13:40:19 -03:00 | Fix Training fails when evaluation dataset is selected (#2099)
  Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
428261eede | Orbitoid | 2023-05-16 13:34:49 -03:00 | fix: elevenlabs removed the need for the api key for refreshing voices (#2097)
cd9be4c2ba | oobabooga | 2023-05-16 00:49:32 -03:00 | Update llama.cpp-models.md
26cf8c2545 | atriantafy | 2023-05-15 20:44:16 -03:00 | add api port options (#1990)
e657dd342d | Andrei | 2023-05-15 20:19:55 -03:00 | Add in-memory cache support for llama.cpp (#1936)
0227e738ed | Jakub Strnad | 2023-05-15 19:51:23 -03:00 | Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087)