Commit Graph

582 commits (50 most recent shown)

Author | SHA1 | Message | Date
ye7iaserag | acf3dbbcc5 | Allow extensions to have custom display_name (#1242); Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com> | 2023-05-17 01:08:22 -03:00
oobabooga | a84f499718 | Allow extensions to define custom CSS and JS | 2023-05-17 00:30:54 -03:00
oobabooga | 7584d46c29 | Refactor models.py (#2113) | 2023-05-16 19:52:22 -03:00
oobabooga | 5cd6dd4287 | Fix no-mmap bug | 2023-05-16 17:35:49 -03:00
Forkoz | d205ec9706 | Fix training failing when an evaluation dataset is selected (#2099); fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator | 2023-05-16 13:40:19 -03:00
atriantafy | 26cf8c2545 | Add API port options (#1990) | 2023-05-15 20:44:16 -03:00
Andrei | e657dd342d | Add in-memory cache support for llama.cpp (#1936) | 2023-05-15 20:19:55 -03:00
Jakub Strnad | 0227e738ed | Add settings UI for llama.cpp and fix reloading of llama.cpp models (#2087) | 2023-05-15 19:51:23 -03:00
oobabooga | c07215cc08 | Improve the default Assistant character | 2023-05-15 19:39:08 -03:00
oobabooga | 4e66f68115 | Create get_max_memory_dict() function | 2023-05-15 19:38:27 -03:00
AlphaAtlas | 071f0776ad | Add llama.cpp GPU offload option (#2060) | 2023-05-14 22:58:11 -03:00
oobabooga | 3b886f9c9f | Add chat-instruct mode (#2049) | 2023-05-14 10:43:55 -03:00
oobabooga | df37ba5256 | Update impersonate_wrapper | 2023-05-12 12:59:48 -03:00
oobabooga | e283ddc559 | Change how spaces are handled in continue/generation attempts | 2023-05-12 12:50:29 -03:00
oobabooga | 2eeb27659d | Fix bug in --cpu-memory | 2023-05-12 06:17:07 -03:00
oobabooga | 5eaa914e1b | Fix settings.json being ignored because of config.yaml | 2023-05-12 06:09:45 -03:00
oobabooga | 71693161eb | Better handle spaces in LlamaTokenizer | 2023-05-11 17:55:50 -03:00
oobabooga | 7221d1389a | Fix a bug | 2023-05-11 17:11:10 -03:00
oobabooga | 0d36c18f5d | Always return only the new tokens in generation functions | 2023-05-11 17:07:20 -03:00
oobabooga | 394bb253db | Syntax improvement | 2023-05-11 16:27:50 -03:00
oobabooga | f7dbddfff5 | Add a variable for TTS extensions to use | 2023-05-11 16:12:46 -03:00
oobabooga | 638c6a65a2 | Refactor chat functions (#2003) | 2023-05-11 15:37:04 -03:00
oobabooga | b7a589afc8 | Improve the Metharme prompt | 2023-05-10 16:09:32 -03:00
oobabooga | b01c4884cb | Better stopping strings for instruct mode | 2023-05-10 14:22:38 -03:00
oobabooga | 6a4783afc7 | Add markdown table rendering | 2023-05-10 13:41:23 -03:00
oobabooga | 3316e33d14 | Remove unused code | 2023-05-10 11:59:59 -03:00
Alexander Dibrov | ec14d9b725 | Fix custom_generate_chat_prompt (#1965) | 2023-05-10 11:29:59 -03:00
oobabooga | 32481ec4d6 | Fix prompt order in the dropdown | 2023-05-10 02:24:09 -03:00
oobabooga | dfd9ba3e90 | Remove duplicate code | 2023-05-10 02:07:22 -03:00
oobabooga | bdf1274b5d | Remove duplicate code | 2023-05-10 01:34:04 -03:00
oobabooga | 3913155c1f | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00
minipasila | 334486f527 | Add instruction-following template for Metharme (#1679) | 2023-05-09 22:29:22 -03:00
Carl Kenner | 814f754451 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00
Wojtab | e9e75a9ec7 | Generalize multimodality (LLaVA/MiniGPT-4 7B and 13B now supported) (#1741) | 2023-05-09 20:18:02 -03:00
Wesley Pyburn | a2b25322f0 | Fix trust_remote_code in wrong location (#1953) | 2023-05-09 19:22:10 -03:00
LaaZa | 218bd64bd1 | Add the option to not automatically load the selected model (#1762); Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com> | 2023-05-09 15:52:35 -03:00
Maks | cf6caf1830 | Make the RWKV model cache the RNN state between messages (#1354) | 2023-05-09 11:12:53 -03:00
Kamil Szurant | 641500dcb9 | Use current input for Impersonate (continue impersonate feature) (#1147) | 2023-05-09 02:37:42 -03:00
IJumpAround | 020fe7b50b | Remove mutable defaults from function signature (#1663) | 2023-05-08 22:55:41 -03:00
Matthew McAllister | d78b04f0b4 | Add error message when GPTQ-for-LLaMa import fails (#1871); Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com> | 2023-05-08 22:29:09 -03:00
oobabooga | 68dcbc7ebd | Fix chat history handling in instruct mode | 2023-05-08 16:41:21 -03:00
Clay Shoaf | 79ac94cc2f | Fix LoRA loading issue (#1865) | 2023-05-08 16:21:55 -03:00
oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00
EgrorBs | d3ea70f453 | More trust_remote_code=trust_remote_code (#1899) | 2023-05-07 23:48:20 -03:00
oobabooga | 56a5969658 | Improve the separation between instruct/chat modes (#1896) | 2023-05-07 23:47:02 -03:00
oobabooga | 9754d6a811 | Fix an error message | 2023-05-07 17:44:05 -03:00
camenduru | ba65a48ec8 | trust_remote_code=shared.args.trust_remote_code (#1891) | 2023-05-07 17:42:44 -03:00
oobabooga | 6b67cb6611 | Generalize superbooga to chat mode | 2023-05-07 15:05:26 -03:00
oobabooga | 56f6b7052a | Sort dropdowns numerically | 2023-05-05 23:14:56 -03:00
oobabooga | 8aafb1f796 | Refactor text_generation.py, add support for custom generation functions (#1817) | 2023-05-05 18:53:03 -03:00