Commit Graph

3228 Commits

Author SHA1 Message Date
Mykeehu
c98d6ad27f Create chat_style-messenger.css (#2187)
Add Messenger-like style for chat mode
2023-05-19 11:31:06 -03:00
oobabooga
499c2e009e Remove problematic regex from models/config.yaml 2023-05-19 11:20:35 -03:00
oobabooga
9d5025f531 Improve error handling while loading GPTQ models 2023-05-19 11:20:08 -03:00
oobabooga
39dab18307 Add a timeout to download-model.py requests 2023-05-19 11:19:34 -03:00
jllllll
4ef2de3486 Fix dependencies downgrading from gptq install (#61) 2023-05-18 12:46:04 -03:00
oobabooga
07510a2414 Change a message 2023-05-18 10:58:37 -03:00
oobabooga
0bcd5b6894 Soothe anxious users 2023-05-18 10:56:49 -03:00
oobabooga
f052ab9c8f Fix setting pre_layer from within the ui 2023-05-17 23:17:44 -03:00
oobabooga
b667ffa51d Simplify GPTQ_loader.py 2023-05-17 16:22:56 -03:00
oobabooga
ef10ffc6b4 Add various checks to model loading functions 2023-05-17 16:14:54 -03:00
oobabooga
abd361b3a0 Minor change 2023-05-17 11:33:43 -03:00
oobabooga
21ecc3701e Avoid a name conflict 2023-05-17 11:23:13 -03:00
oobabooga
fb91c07191 Minor bug fix 2023-05-17 11:16:37 -03:00
oobabooga
1a8151a2b6 Add AutoGPTQ support (basic) (#2132) 2023-05-17 11:12:12 -03:00
oobabooga
10cf7831f7 Update Extensions.md 2023-05-17 10:45:29 -03:00
Alex "mcmonkey" Goodwin
1f50dbe352 Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) 2023-05-17 10:41:09 -03:00
oobabooga
fd743a0207 Small change 2023-05-17 02:34:29 -03:00
LoopLooter
aeb1b7a9c5 feature to save prompts with custom names (#1583)
Co-authored-by: LoopLooter <looplooter>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-17 02:30:45 -03:00
oobabooga
c9c6aa2b6e Update docs/Extensions.md 2023-05-17 02:04:37 -03:00
oobabooga
85f74961f9 Update "Interface mode" tab 2023-05-17 01:57:51 -03:00
oobabooga
9e558cba9b Update docs/Extensions.md 2023-05-17 01:43:32 -03:00
oobabooga
687f21f965 Update docs/Extensions.md 2023-05-17 01:41:01 -03:00
oobabooga
8f85d84e08 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-05-17 01:32:42 -03:00
oobabooga
ce21804ec7 Allow extensions to define a new tab 2023-05-17 01:31:56 -03:00
ye7iaserag
acf3dbbcc5 Allow extensions to have custom display_name (#1242)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-17 01:08:22 -03:00
oobabooga
ad0b71af11 Add missing file 2023-05-17 00:37:34 -03:00
oobabooga
a84f499718 Allow extensions to define custom CSS and JS 2023-05-17 00:30:54 -03:00
oobabooga
824fa8fc0e Attempt at making interface restart more robust 2023-05-16 22:27:43 -03:00
oobabooga
259020a0be Bump gradio to 3.31.0
This fixes Google Colab lagging.
2023-05-16 22:21:15 -03:00
pixel
458a627ab9 fix: elevenlabs cloned voices do not show up in webui after entering API key (#2107) 2023-05-16 20:21:36 -03:00
oobabooga
7584d46c29 Refactor models.py (#2113) 2023-05-16 19:52:22 -03:00
oobabooga
5cd6dd4287 Fix no-mmap bug 2023-05-16 17:35:49 -03:00
oobabooga
89e37626ab Reorganize chat settings tab 2023-05-16 17:22:59 -03:00
Forkoz
d205ec9706 Fix "Training fails when evaluation dataset is selected" (#2099)
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
Orbitoid
428261eede fix: elevenlabs removed the need for the api key for refreshing voices (#2097) 2023-05-16 13:34:49 -03:00
oobabooga
cd9be4c2ba Update llama.cpp-models.md 2023-05-16 00:49:32 -03:00
atriantafy
26cf8c2545 add api port options (#1990) 2023-05-15 20:44:16 -03:00
Andrei
e657dd342d Add in-memory cache support for llama.cpp (#1936) 2023-05-15 20:19:55 -03:00
Jakub Strnad
0227e738ed Add settings UI for llama.cpp and fix reloading of llama.cpp models (#2087) 2023-05-15 19:51:23 -03:00
oobabooga
10869de0f4 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-05-15 19:39:48 -03:00
oobabooga
c07215cc08 Improve the default Assistant character 2023-05-15 19:39:08 -03:00
oobabooga
4e66f68115 Create get_max_memory_dict() function 2023-05-15 19:38:27 -03:00
dependabot[bot]
ae54d83455 Bump transformers from 4.28.1 to 4.29.1 (#2089) 2023-05-15 19:25:24 -03:00
AlphaAtlas
071f0776ad Add llama.cpp GPU offload option (#2060) 2023-05-14 22:58:11 -03:00
feeelX
eee986348c Update llama-cpp-python from 0.1.45 to 0.1.50 (#2058) 2023-05-14 22:41:14 -03:00
oobabooga
897fa60069 Sort selected superbooga chunks by insertion order
For better coherence
2023-05-14 22:19:29 -03:00
Luis Lopez
b07f849e41 Add superbooga chunk separator option (#2051) 2023-05-14 21:44:52 -03:00
matatonic
ab08cf6465 [extensions/openai] clip extra leading space (#2042) 2023-05-14 12:57:52 -03:00
oobabooga
3b886f9c9f Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga
5f6cf39f36 Change the injection context string 2023-05-13 14:23:02 -03:00