Commit Graph

978 Commits

Author SHA1 Message Date
Forkoz
d205ec9706
Fix "Training fails when evaluation dataset is selected" (#2099)
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078, reported by Googulator
2023-05-16 13:40:19 -03:00
atriantafy
26cf8c2545
add api port options (#1990) 2023-05-15 20:44:16 -03:00
Andrei
e657dd342d
Add in-memory cache support for llama.cpp (#1936) 2023-05-15 20:19:55 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fix reloading of llama.cpp models (#2087) 2023-05-15 19:51:23 -03:00
oobabooga
c07215cc08 Improve the default Assistant character 2023-05-15 19:39:08 -03:00
oobabooga
4e66f68115 Create get_max_memory_dict() function 2023-05-15 19:38:27 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option (#2060) 2023-05-14 22:58:11 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga
df37ba5256 Update impersonate_wrapper 2023-05-12 12:59:48 -03:00
oobabooga
e283ddc559 Change how spaces are handled in continue/generation attempts 2023-05-12 12:50:29 -03:00
oobabooga
2eeb27659d Fix bug in --cpu-memory 2023-05-12 06:17:07 -03:00
oobabooga
5eaa914e1b Fix settings.json being ignored because of config.yaml 2023-05-12 06:09:45 -03:00
oobabooga
71693161eb Better handle spaces in LlamaTokenizer 2023-05-11 17:55:50 -03:00
oobabooga
7221d1389a Fix a bug 2023-05-11 17:11:10 -03:00
oobabooga
0d36c18f5d Always return only the new tokens in generation functions 2023-05-11 17:07:20 -03:00
oobabooga
394bb253db Syntax improvement 2023-05-11 16:27:50 -03:00
oobabooga
f7dbddfff5 Add a variable for tts extensions to use 2023-05-11 16:12:46 -03:00
oobabooga
638c6a65a2
Refactor chat functions (#2003) 2023-05-11 15:37:04 -03:00
oobabooga
b7a589afc8 Improve the Metharme prompt 2023-05-10 16:09:32 -03:00
oobabooga
b01c4884cb Better stopping strings for instruct mode 2023-05-10 14:22:38 -03:00
oobabooga
6a4783afc7 Add markdown table rendering 2023-05-10 13:41:23 -03:00
oobabooga
3316e33d14 Remove unused code 2023-05-10 11:59:59 -03:00
Alexander Dibrov
ec14d9b725
Fix custom_generate_chat_prompt (#1965) 2023-05-10 11:29:59 -03:00
oobabooga
32481ec4d6 Fix prompt order in the dropdown 2023-05-10 02:24:09 -03:00
oobabooga
dfd9ba3e90 Remove duplicate code 2023-05-10 02:07:22 -03:00
oobabooga
bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
oobabooga
3913155c1f
Style improvements (#1957) 2023-05-09 22:49:39 -03:00
minipasila
334486f527
Added instruction-following template for Metharme (#1679) 2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
Wesley Pyburn
a2b25322f0
Fix trust_remote_code in wrong location (#1953) 2023-05-09 19:22:10 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model (#1762)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Maks
cf6caf1830
Make the RWKV model cache the RNN state between messages (#1354) 2023-05-09 11:12:53 -03:00
Kamil Szurant
641500dcb9
Use current input for Impersonate (continue impersonate feature) (#1147) 2023-05-09 02:37:42 -03:00
IJumpAround
020fe7b50b
Remove mutable defaults from function signature. (#1663) 2023-05-08 22:55:41 -03:00
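The pitfall behind this commit is a classic Python one: a mutable default argument is created once, at function definition time, and then shared across calls. A minimal sketch of the bug and the usual fix (function names are illustrative, not the webui's):

```
def append_message(msg, history=[]):  # the same list object is reused on every call
    history.append(msg)
    return history

print(append_message("a"))  # ['a']
print(append_message("b"))  # ['a', 'b'] -- state leaks between unrelated calls

def append_message_fixed(msg, history=None):
    if history is None:  # create a fresh list per call instead
        history = []
    history.append(msg)
    return history
```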
Matthew McAllister
d78b04f0b4
Add error message when GPTQ-for-LLaMa import fails (#1871)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-08 22:29:09 -03:00
oobabooga
68dcbc7ebd Fix chat history handling in instruct mode 2023-05-08 16:41:21 -03:00
Clay Shoaf
79ac94cc2f
fixed LoRA loading issue (#1865) 2023-05-08 16:21:55 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles (#1917) 2023-05-08 12:35:03 -03:00
EgrorBs
d3ea70f453
More trust_remote_code=trust_remote_code (#1899) 2023-05-07 23:48:20 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes (#1896) 2023-05-07 23:47:02 -03:00
oobabooga
9754d6a811 Fix an error message 2023-05-07 17:44:05 -03:00
camenduru
ba65a48ec8
trust_remote_code=shared.args.trust_remote_code (#1891) 2023-05-07 17:42:44 -03:00
oobabooga
6b67cb6611 Generalize superbooga to chat mode 2023-05-07 15:05:26 -03:00
oobabooga
56f6b7052a Sort dropdowns numerically 2023-05-05 23:14:56 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions (#1817) 2023-05-05 18:53:03 -03:00
oobabooga
c728f2b5f0 Better handle new line characters in code blocks 2023-05-05 11:22:36 -03:00
oobabooga
00e333d790 Add MOSS support 2023-05-04 23:20:34 -03:00
oobabooga
f673f4a4ca Change --verbose behavior 2023-05-04 15:56:06 -03:00
oobabooga
97a6a50d98 Use oasst tokenizer instead of universal tokenizer 2023-05-04 15:55:39 -03:00
oobabooga
b6ff138084 Add --checkpoint argument for GPTQ 2023-05-04 15:17:20 -03:00
Mylo
bd531c2dc2
Make --trust-remote-code work for all models (#1772) 2023-05-04 02:01:28 -03:00
oobabooga
0e6d17304a Clearer syntax for instruction-following characters 2023-05-03 22:50:39 -03:00
oobabooga
9c77ab4fc2 Improve some warnings 2023-05-03 22:06:46 -03:00
oobabooga
057b1b2978 Add credits 2023-05-03 21:49:55 -03:00
oobabooga
95d04d6a8d Better warning messages 2023-05-03 21:43:17 -03:00
oobabooga
f54256e348 Rename no_mmap to no-mmap 2023-05-03 09:50:31 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token (#1678)
Was padding with the character "0" rather than with token id 0 (<unk> in the case of llama)
2023-05-02 23:16:08 -03:00
Wojtab
80c2f25131
LLaVA: small fixes (#1664)
* change multimodal projector to the correct one

* remove reference to custom stopping strings from readme

* fix stopping strings if tokenizer extension adds/removes tokens

* add API example

* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
oobabooga
4e09df4034 Only show extension in UI if it has a ui() function 2023-05-02 19:20:02 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input (#1707) 2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing (#1625) 2023-05-02 01:21:54 -03:00
oobabooga
c83210c460 Move the rstrips 2023-04-26 17:17:22 -03:00
oobabooga
1d8b8222e9 Revert #1579, apply the proper fix
Apparently models dislike trailing spaces.
2023-04-26 16:47:50 -03:00
oobabooga
9c2e7c0fab Fix path on models.py 2023-04-26 03:29:09 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7
Fix bug in models.py 2023-04-26 01:55:40 -03:00
oobabooga
f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch 2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
oobabooga
15940e762e Fix missing initial space for LlamaTokenizer 2023-04-25 22:47:23 -03:00
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) (#1535)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 (#1546)
* full dynamic model type support on modern peft

* remove shuffle option
2023-04-25 21:27:30 -03:00
oobabooga
9b272bc8e5 Monkey patch fixes 2023-04-25 21:20:26 -03:00
oobabooga
da812600f4 Apply settings regardless of setup() function 2023-04-25 01:16:23 -03:00
da3dsoul
ebca3f86d5
Apply the settings for extensions after import, but before setup() (#1484) 2023-04-25 00:23:11 -03:00
oobabooga
b0ce750d4e Add spaces 2023-04-25 00:10:21 -03:00
oobabooga
1a0c12c6f2
Refactor text-generation.py a bit 2023-04-24 19:24:12 -03:00
oobabooga
2f4f124132 Remove obsolete function 2023-04-24 13:27:24 -03:00
oobabooga
b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
oobabooga
0c32ae27cc Only load the default history if it's empty 2023-04-24 11:50:51 -03:00
eiery
78d1977ebf
add n_batch support for llama.cpp (#1115) 2023-04-24 03:46:18 -03:00
oobabooga
b1ee674d75 Make interface state (mostly) persistent on page reload 2023-04-24 03:05:47 -03:00
oobabooga
435f8cc0e7
Simplify some chat functions 2023-04-24 00:47:40 -03:00
Wojtab
12212cf6be
LLaVA support (#1487) 2023-04-23 20:32:22 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints (#990)
Previous title: Add api_streaming extension and update api-example-stream to use it

* Merge with latest main

* Add parameter capturing encoder_repetition_penalty

* Change some defaults, minor fixes

* Add --api, --public-api flags

* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.

* Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'

* Update the API examples

* Change a comment

* Update README

* Remove the gradio API

* Remove unused import

* Minor change

* Remove unused import

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs (#1493) 2023-04-23 12:54:41 -03:00
oobabooga
c0b5c09860 Minor change 2023-04-22 15:15:31 -03:00
oobabooga
fcb594b90e Don't require llama.cpp models to be placed in subfolders 2023-04-22 14:56:48 -03:00
oobabooga
7438f4f6ba Change GPTQ triton default settings 2023-04-22 12:27:30 -03:00
USBhost
e1aa9d5173
Support upstream GPTQ once again. (#1451) 2023-04-21 12:43:56 -03:00
oobabooga
eddd016449 Minor deletion 2023-04-21 12:41:27 -03:00
oobabooga
d46b9b7c50 Fix evaluate comment saving 2023-04-21 12:34:08 -03:00
oobabooga
5e023ae64d Change dropdown menu highlight color 2023-04-21 02:47:18 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322) 2023-04-21 00:20:33 -03:00
oobabooga
7bb9036ac9 Add universal LLaMA tokenizer support 2023-04-19 21:23:51 -03:00
Alex "mcmonkey" Goodwin
ee30625cd1
4-Bit LoRA training + several new training options and fixes 2023-04-19 19:39:03 -03:00
oobabooga
702fe92d42 Increase truncation_length_max value 2023-04-19 17:35:38 -03:00
oobabooga
9d9ae62938 Fix stopping strings in the gradio API 2023-04-19 13:52:21 -03:00
oobabooga
649e4017a5 Style improvements 2023-04-19 00:36:28 -03:00
oobabooga
000f65a2ef
Delete unused file 2023-04-18 04:01:14 -03:00
oobabooga
36f7c022f2
Rename a file 2023-04-18 01:38:33 -03:00
oobabooga
b069bb1f2e
Update monkey_patch_gradio.py 2023-04-18 01:32:42 -03:00
oobabooga
00186f76f4
Monkey patch gradio to prevent it from calling home 2023-04-18 01:13:16 -03:00
Tynan Burke
6a810b16b2
typo in training.py (#1329) 2023-04-17 21:40:46 -03:00
oobabooga
ac2973ffc6 Add a warning for --share 2023-04-17 19:34:28 -03:00
oobabooga
c544386824 Reset your name when choosing a character 2023-04-17 13:56:40 -03:00
oobabooga
c3dc348d1c Don't show 'None' in the LoRA list 2023-04-17 13:52:23 -03:00
oobabooga
89bc540557 Update README 2023-04-17 10:55:35 -03:00
catalpaaa
07de7d0426
Load llamacpp before quantized model (#1307) 2023-04-17 10:47:26 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef (#1229) 2023-04-17 01:11:18 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
oobabooga
46a8aa8c09 Readability 2023-04-16 21:26:19 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support (#1256)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
6a03ad0824 Remove fix_newlines() calls from chat.py 2023-04-16 18:25:44 -03:00
oobabooga
5342f72968 Properly handle blockquote blocks 2023-04-16 18:00:12 -03:00
oobabooga
27f3a78834 Better detect when no model is loaded 2023-04-16 17:35:54 -03:00
oobabooga
c8ad960018 Add defaults to the gradio API 2023-04-16 17:33:28 -03:00
oobabooga
beb95f5fe2 Add a style for the "chat" mode 2023-04-16 16:44:50 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model (#1218) 2023-04-16 14:24:49 -03:00
oobabooga
b705b4210c Minor changes to training.py 2023-04-16 03:08:37 -03:00
oobabooga
5c513a5f5c Make training.py more readable 2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin
a3eec62b50
Lora trainer improvements part 3 (#1098)
* add support for other model types

dependent on future-peft-changes but with fallback to function now

* use encoding=utf8 for training format

* make shuffling optional

and describe dropout a bit more

* add eval_steps to control evaluation

* make callbacks not depend on globals

* make save steps controllable

* placeholder of initial loading-existing-model support

and var name cleanup

* save/load parameters

* last bit of cleanup

* remove `gptq_bits` ref as main branch removed that setting

* add higher_rank_limit option

2048 is basically unreachable due to VRAM, but i trained at 1536 with batch size = 1 on a 7B model.
Note that it's in the do_train input just to save as a parameter

* fix math on save_steps
2023-04-16 02:35:13 -03:00
kernyan
ac19d5101f
revert incorrect eos_token_id change from #814 (#1261)
- fixes #1054
2023-04-16 01:47:01 -03:00
oobabooga
a2127239de Fix a bug 2023-04-16 01:41:37 -03:00
oobabooga
9d3c6d2dc3 Fix a bug 2023-04-16 01:40:47 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
* Remove eval of API input

* Remove unnecessary eval/exec for security

* Use ast.literal_eval

* Use ast.literal_eval

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
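The fix pattern in this advisory merge is standard: eval() on API input executes arbitrary code, while ast.literal_eval() only accepts plain Python literals. A minimal sketch of the substitution (the payload shown is illustrative):

```
import ast

payload = "{'max_new_tokens': 200, 'temperature': 0.7}"

# eval(payload) would run any expression supplied by the API client;
# ast.literal_eval raises ValueError on anything but plain literals
params = ast.literal_eval(payload)
print(params["temperature"])
```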
oobabooga
d2ea925fa5 Bump llama-cpp-python to use LlamaCache 2023-04-16 00:53:40 -03:00
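In llama-cpp-python, the cache is enabled by attaching a LlamaCache to the model so that repeated prompts reuse the already-evaluated prefix. A minimal sketch, assuming a local ggml model file (path hypothetical):

```
from llama_cpp import Llama, LlamaCache

llm = Llama(model_path="models/ggml-model-q4_0.bin")  # hypothetical path
llm.set_cache(LlamaCache())  # reuse evaluated prompt prefixes between calls
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```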
oobabooga
ac189011cb Add "Save current settings for this model" button 2023-04-15 12:54:02 -03:00
oobabooga
abef355ed0 Remove deprecated flag 2023-04-15 01:21:19 -03:00
oobabooga
c3aa79118e Minor generate_chat_prompt simplification 2023-04-14 23:02:08 -03:00
oobabooga
3a337cfded Use argparse defaults 2023-04-14 15:35:06 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support (#1103)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
1901d238e1 Minor change to API code 2023-04-14 12:11:47 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name (#1167) 2023-04-14 11:07:28 -03:00
v0xie
9d66957207
Add --listen-host launch option (#1122) 2023-04-13 21:35:08 -03:00
oobabooga
a75e02de4d Simplify GPTQ_loader.py 2023-04-13 12:13:07 -03:00
oobabooga
ca293bb713 Show a warning if two quantized models are found 2023-04-13 12:04:27 -03:00
oobabooga
8b482b4127
Merge #1073 from sgsdxzy/triton
* Multi-GPU support for triton
* Better quantized model filename detection
2023-04-13 11:31:21 -03:00
oobabooga
fde6d06167 Prioritize names with the groupsize in them 2023-04-13 11:27:03 -03:00
oobabooga
f2bf1a2c9e Add some comments, remove obsolete code 2023-04-13 11:17:32 -03:00
Light
da74cd7c44 Generalized weight search path. 2023-04-13 21:43:32 +08:00
oobabooga
04866dc4fc Add a warning for when no model is loaded 2023-04-13 10:35:08 -03:00
Light
cf58058c33 Change warmup_autotune to a negative switch. 2023-04-13 20:59:49 +08:00
Light
15d5a043f2 Merge remote-tracking branch 'origin/main' into triton 2023-04-13 19:38:51 +08:00
oobabooga
7dfbe54f42 Add --model-menu option 2023-04-12 21:24:26 -03:00
oobabooga
388038fb8e Update settings-template.json 2023-04-12 18:30:43 -03:00
oobabooga
10e939c9b4 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-12 17:21:59 -03:00
oobabooga
1566d8e344 Add model settings to the Models tab 2023-04-12 17:20:18 -03:00
Light
a405064ceb Better dispatch. 2023-04-13 01:48:17 +08:00
Light
f3591ccfa1 Keep minimal change. 2023-04-12 23:26:06 +08:00
Lukas
5ad92c940e
lora training fixes: (#970)
Fix wrong input format being picked
Fix crash when an entry in the dataset has an attribute of value None
2023-04-12 11:38:01 -03:00
oobabooga
80f4eabb2a Fix send_pictures extension 2023-04-12 10:27:06 -03:00
oobabooga
8265d45db8 Add send dummy message/reply buttons
Useful for starting a new reply.
2023-04-11 22:21:41 -03:00
oobabooga
37d52c96bc Fix Continue in chat mode 2023-04-11 21:46:17 -03:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token 2023-04-11 18:46:06 -03:00
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
oobabooga
0f212093a3
Refactor the UI
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.

The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
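A minimal sketch of the pattern this refactor introduces: one dictionary carries all UI values, so callbacks take a single state argument and a new UI element only adds a key (keys and values illustrative):

```
interface_state = {"max_new_tokens": 200, "temperature": 0.7}

def generate_reply(prompt, state):
    # one dict in, instead of a long positional list of gradio components
    return f"{prompt} [max_new_tokens={state['max_new_tokens']}]"

print(generate_reply("Hello", interface_state))
```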
IggoOnCode
09d8119e3c
Add CPU LoRA training (#938)
(It's very slow)
2023-04-10 17:29:00 -03:00
Alex "mcmonkey" Goodwin
0caf718a21
add on-page documentation to parameters (#1008) 2023-04-10 17:19:12 -03:00
oobabooga
bd04ff27ad Make the bos token optional 2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1 Don't treat Instruct mode histories as regular histories
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
769aa900ea Print the used seed 2023-04-10 10:53:31 -03:00
Alex "mcmonkey" Goodwin
30befe492a fix random seeds to actually randomize
Without this fix, manual seeds get locked in.
2023-04-10 06:29:10 -07:00
oobabooga
1911504f82 Minor bug fix 2023-04-09 23:45:41 -03:00
oobabooga
dba2000d2b Do things that I am not proud of 2023-04-09 23:40:49 -03:00
oobabooga
65552d2157 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-09 23:19:53 -03:00
oobabooga
8c6155251a More robust 4-bit model loading 2023-04-09 23:19:28 -03:00
MarkovInequality
992663fa20
Added xformers support to Llama (#950) 2023-04-09 23:08:40 -03:00
Brian O'Connor
625d81f495
Update character log logic (#977)
* When logs are cleared, save the cleared log over the old log files
* Generate a log file when a character is loaded the first time
2023-04-09 22:20:21 -03:00
oobabooga
a3085dba07 Fix LlamaTokenizer eos_token (attempt) 2023-04-09 21:19:39 -03:00
oobabooga
120f5662cf Better handle spaces for Continue 2023-04-09 20:37:31 -03:00
oobabooga
b27d757fd1 Minor change 2023-04-09 20:06:20 -03:00
oobabooga
d29f4624e9 Add a Continue button to chat mode 2023-04-09 20:04:16 -03:00
oobabooga
cc693a7546 Remove obsolete code 2023-04-09 00:51:07 -03:00
oobabooga
cb169d0834 Minor formatting changes 2023-04-08 17:34:07 -03:00
oobabooga
0b458bf82d Simplify a function 2023-04-07 21:37:41 -03:00
Φφ
ffd102e5c0
SD Api Pics extension, v.1.1 (#596) 2023-04-07 21:36:04 -03:00
oobabooga
1dc464dcb0 Sort imports 2023-04-07 14:42:03 -03:00
oobabooga
42ea6a3fc0 Change the timing for setup() calls 2023-04-07 12:20:57 -03:00
oobabooga
768354239b Change training file encoding 2023-04-07 11:15:52 -03:00
oobabooga
6762e62a40 Simplifications 2023-04-07 11:14:32 -03:00
oobabooga
a453d4e9c4 Reorganize some chat functions 2023-04-07 11:07:03 -03:00
Maya
8fa182cfa7
Fix regeneration of first message in instruct mode (#881) 2023-04-07 10:45:42 -03:00
oobabooga
46c4654226 More PEP8 stuff 2023-04-07 00:52:02 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability (#862) 2023-04-07 00:15:45 -03:00
OWKenobi
310bf46a94
Instruction Character Vicuna, Instruction Mode Bugfix (#838) 2023-04-06 17:40:44 -03:00
oobabooga
113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2023-04-06 16:04:03 -03:00
oobabooga
03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
EyeDeck
39f3fec913
Broaden GPTQ-for-LLaMA branch support (#820) 2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
Lora trainer improvements (#763) 2023-04-06 02:04:11 -03:00
oobabooga
e94ab5dac1 Minor fixes 2023-04-06 01:43:10 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API 2023-04-06 01:22:15 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset (#767) 2023-04-05 18:52:36 -03:00
Forkoz
8203ce0cac
Stop character pic from being cached when changing chars or clearing. (#798)
Tested on both FF and chromium
2023-04-05 14:25:01 -03:00
oobabooga
7f66421369 Fix loading characters 2023-04-05 14:22:32 -03:00
oobabooga
e722c240af Add Instruct mode 2023-04-05 13:54:50 -03:00
oobabooga
3d6cb5ed63 Minor rewrite 2023-04-05 01:21:40 -03:00
oobabooga
f3a2e0b8a9 Disable pre_layer when the model type is not llama 2023-04-05 01:19:26 -03:00
catalpaaa
4ab679480e
allow quantized model to be loaded from model dir (#760) 2023-04-04 23:19:38 -03:00
oobabooga
ae1fe45bc0 One more cache reset 2023-04-04 23:15:57 -03:00
oobabooga
8ef89730a5 Try to better handle browser image cache 2023-04-04 23:09:28 -03:00
oobabooga
cc6c7a37f3 Add make_thumbnail function 2023-04-04 23:03:58 -03:00
oobabooga
80dfba05f3 Better crop/resize cached images 2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
OWKenobi
ee4547cd34
Detect "vicuna" as llama model type (#772) 2023-04-04 13:23:27 -03:00
oobabooga
b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly (#743)
* Add greetings field

* add greeting field and make it interactive

* Minor changes

* Fix a bug

* Simplify clear_chat_log

* Change a label

* Minor change

* Simplifications

* Simplification

* Simplify loading the default character history

* Fix regression

---------

Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading (#740) 2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac Rename another variable 2023-04-03 01:15:20 -03:00
oobabooga
08448fb637 Rename a variable 2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc Use Path.stem for simplicity 2023-04-03 00:56:14 -03:00
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue (#650) 2023-04-02 21:54:29 -03:00
TheTerrasque
2157bb4319
New yaml character format (#337 from TheTerrasque/feature/yaml-characters)
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
5f3f3faa96 Better handle CUDA out of memory errors in chat mode 2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925 Add shared.is_chat() function 2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py 2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776 Add also_return_rows to generate_chat_prompt 2023-04-01 01:12:13 -03:00
oobabooga
2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713 Fix streaming 2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd Add repetition_penalty 2023-03-31 19:01:34 -03:00
oobabooga
2259143fec Fix llama.cpp with --no-stream 2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3 Detect ggml*.bin files automatically 2023-03-31 17:18:21 -03:00
oobabooga
0aee7341d8 Properly count tokens/s for llama.cpp in chat mode 2023-03-31 17:04:32 -03:00
oobabooga
ea3ba6fc73 Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp 2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb Add repetition_penalty 2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp 2023-03-31 14:37:04 -03:00
oobabooga
4c27562157 Minor changes 2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a General improvements 2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
Maya
b246d17513
Fix type object is not subscriptable
Fix `type object is not subscriptable` on python 3.8
2023-03-31 14:20:31 +03:00
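The error fixed here comes from subscripting built-in types (list[str], dict[str, int]), which only works on Python 3.9+ (PEP 585); on 3.8 the typing generics are needed. A sketch (function name illustrative):

```
from typing import Dict, List

# list[str] / dict[str, str] raise "TypeError: 'type' object is not
# subscriptable" on Python 3.8; the typing generics work on all versions
def read_files(paths: List[str]) -> Dict[str, str]:
    return {p: open(p, encoding="utf-8").read() for p in paths}
```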
oobabooga
d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
Thomas Antony
7fa5d96c22 Update to use new llamacpp API 2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e Add support for alpaca 2023-03-30 11:23:04 +01:00
Thomas Antony
a5f5736e74 Add to text_generation.py 2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb Add llamacpp to models.py 2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa Initial version of llamacpp_model.py 2023-03-30 11:22:07 +01:00
oobabooga
a21e580782 Move an import 2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models 2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636 2023-03-29 11:48:17 -03:00
oobabooga
1edfb96778
Fix loading extensions from within the interface 2023-03-28 23:27:02 -03:00
oobabooga
304f812c63 Gracefully handle CUDA out of memory errors with streaming 2023-03-28 19:20:50 -03:00
oobabooga
010b259dde Update documentation 2023-03-28 17:46:00 -03:00
oobabooga
0bec15ebcd Reorder imports 2023-03-28 17:34:15 -03:00
Maya Eary
41ec682834 Disable kernel threshold for gpt-j 2023-03-28 22:45:38 +03:00
Maya
1ac003d41c
Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2 2023-03-28 22:30:39 +03:00
Maya Eary
1c075d8d21 Fix typo 2023-03-28 20:43:50 +03:00
Maya Eary
c8207d474f Generalized load_quantized 2023-03-28 20:38:55 +03:00
oobabooga
8579fe51dd Fix new lines in the HTML tab 2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin
e817fac542 better defaults 2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf implement initial Raw Text File Input
also bump default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin
b749952fe3 change number minimums to 0
gradio calculates 'step' relative to the minimum, so at '1' the step values were all offset awkwardly. 0 isn't valid, but, uh, just don't slam the slider to the left.
2023-03-27 21:22:43 -07:00
Alex "mcmonkey" Goodwin
ec6224f556 use new shared.args.lora_dir 2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
31f04dc615 Merge branch 'main' into add-train-lora-tab 2023-03-27 20:03:30 -07:00
oobabooga
53da672315 Fix FlexGen 2023-03-27 23:44:21 -03:00
oobabooga
ee95e55df6 Fix RWKV tokenizer 2023-03-27 23:42:29 -03:00
oobabooga
036163a751 Change description 2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3 Some simplifications 2023-03-27 23:29:52 -03:00
oobabooga
fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin
8a97f6ba29 corrections per the PR comments 2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6 couple missed camelCases 2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin
6368dad7db Fix camelCase to snake_case to match repo format standard 2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4 Small style changes 2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d add total time estimate 2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d interrupt button 2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95 initial progress tracker in UI 2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850 add some outputs to indicate progress updates (sorta)
Actual progressbar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin
d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir 2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format of "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
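A sketch of turning that credentials file into the (username, password) pairs that gradio's launch(auth=...) accepts (filename hypothetical):

```
with open("login.txt") as f:
    # "u1:pw1,u2:pw2" -> [("u1", "pw1"), ("u2", "pw2")]
    auth = [tuple(cred.split(":", 1)) for cred in f.read().strip().split(",")]

# demo.launch(auth=auth)  # gradio then prompts for these credentials
```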
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send a message when you click on Generate with an empty input.
2023-03-24 17:22:22 -03:00
oobabooga
6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402 Allow loading of .safetensors through GPTQ-for-LLaMa 2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add -gptq-preload for 4-bit offloading (#460)
This works in a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6 Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
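Under the hood, --gpu-memory maps onto the max_memory dict that transformers/accelerate use when dispatching a model across devices; expressing the caps in MiB allows finer control than whole GiB steps. A sketch, with the model name hypothetical:

```
max_memory = {0: "3500MiB", "cpu": "8GiB"}  # cap GPU 0 just below 4 GB

# model = AutoModelForCausalLM.from_pretrained(
#     "some/model", device_map="auto", max_memory=max_memory)
```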
oobabooga
a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya
acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya
81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya
099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
oobabooga
c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga
e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_Loader.py
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
awoo
83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00
oobabooga
1c378965e1 Remove unused imports 2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077 Keep GALACTICA special tokens (#300) 2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd Make the "no GPU has been detected" message more descriptive 2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga
348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8 Optimize the HTML generation speed 2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893 Minor fix 2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5 Rename a file 2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352 Prevent *Is typing* from disappearing instantly while streaming 2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1 Move all css/js into separate files 2023-03-15 12:35:11 -03:00
oobabooga
693b53d957 Merge branch 'main' into HideLord-main 2023-03-15 12:08:56 -03:00
oobabooga
1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga
afc5339510
Remove "eval" statements from text generation functions 2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16 Implement "*Is recording a voice message...*" for TTS #303 2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244 Some minor fixes to the GPTQ loader 2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6 use updated load_quantized 2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a determine model type from model name 2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e remove default value from argument 2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main 2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c make argument non case-sensitive 2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3 rename method 2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc add argument --gptq-model-type and remove duplicate arguments 2023-03-13 20:01:34 +03:00
Ayanami Rei
edbc61139f use new quant loader 2023-03-13 20:00:38 +03:00
Ayanami Rei
345b6dee8c refactor quant models loader and add support of OPT 2023-03-13 19:59:57 +03:00
oobabooga
66b6971b61 Update README 2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f Document --auto-launch 2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d Fix GPTQ load_quant call on Windows 2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4 Fix GALACTICA (#285) 2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9 Change a comment 2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9 Remove redundant comments 2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd Use str(Path) instead of os.path.abspath(Path) 2023-03-13 00:08:01 -03:00
oobabooga
b9e0712b92 Fix Open Assistant 2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba Clean up silero_tts
This should only be used with --no-stream.

The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
HideLord
683556f411 Adding markdown support and slight refactoring. 2023-03-12 21:34:09 +02:00
oobabooga
cebe8b390d Remove useless "substring_found" variable 2023-03-12 15:50:38 -03:00
oobabooga
4bcd675ccd Add *Is typing...* to regenerate as well 2023-03-12 15:23:33 -03:00
oobabooga
c7aa51faa6 Use a list of eos_tokens instead of just a number
This might be the cause of LLaMA ramblings that some people have experienced.
2023-03-12 14:54:58 -03:00
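A minimal sketch of the change: generation stops when the last sampled token matches any of several terminator ids, rather than being compared against a single id (the ids below are illustrative):

```
eos_token_ids = [2, 13]  # e.g. </s> plus another terminator

def should_stop(last_token_id):
    return last_token_id in eos_token_ids  # any match ends generation

print(should_stop(13))  # True
```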
oobabooga
d8bea766d7
Merge pull request #192 from xanthousm/main
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga
fda376d9c3 Use os.path.abspath() instead of str() 2023-03-12 12:41:04 -03:00
HideLord
8403152257 Fixing compatibility with GPTQ repo commit 2f667f7da051967566a5fb0546f8614bcd3a1ccd. Expects string and breaks on 2023-03-12 17:28:15 +02:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
65dda28c9d Rename --llama-bits to --gptq-bits 2023-03-12 11:19:07 -03:00
oobabooga
fed3617f07 Move LLaMA 4-bit into a separate file 2023-03-12 11:12:34 -03:00
oobabooga
0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
oobabooga
78901d522b Remove unused imports 2023-03-12 08:59:05 -03:00
Xan
b3e10e47c0 Fix merge conflict in text_generation
- Need to update `shared.still_streaming = False` before the final `yield formatted_outputs`, shifted the position of some yields.
2023-03-12 18:56:35 +11:00
oobabooga
ad14f0e499 Fix regenerate (provisory way) 2023-03-12 03:42:29 -03:00
oobabooga
6e12068ba2
Merge pull request #258 from lxe/lxe/utf8
Load and save character files and chat history in UTF-8
2023-03-12 03:28:49 -03:00
oobabooga
e2da6b9685 Fix You You You appearing in chat mode 2023-03-12 03:25:56 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
Aleksey Smolenchuk
3f7c3d6559
No need to set encoding on binary read 2023-03-11 22:10:57 -08:00
oobabooga
341e135036 Various fixes in chat mode 2023-03-12 02:53:08 -03:00
Aleksey Smolenchuk
3baf5fc700
Load and save chat history in utf-8 2023-03-11 21:40:01 -08:00
oobabooga
b0e8cb8c88 Various fixes in chat mode 2023-03-12 02:31:45 -03:00
unknown
433f6350bc Load and save character files in UTF-8 2023-03-11 21:23:05 -08:00
oobabooga
0bd5430988 Use 'with' statement to better handle streaming memory 2023-03-12 02:04:28 -03:00
oobabooga
37f0166b2d Fix memory leak in new streaming (second attempt) 2023-03-11 23:14:49 -03:00
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
Xan
33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
draff
28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff
001e638b47 Make it actually work 2023-03-10 23:28:19 +00:00
draff
804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
ItsLogic
9ba8156a70
remove unnecessary Path() 2023-03-10 22:33:58 +00:00
draff
e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2 and 3 bit models as well. This commit also fixes a loading issue with .pt files which are not in the root of the models folder
2023-03-10 21:36:45 +00:00
oobabooga
026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00
oobabooga
e9dbdafb14
Merge branch 'main' into pt-path-changes 2023-03-10 11:03:42 -03:00
oobabooga
706a03b2cb Minor changes 2023-03-10 11:02:25 -03:00
oobabooga
de7dd8b6aa Add comments 2023-03-10 10:54:08 -03:00
oobabooga
e461c0b7a0 Move the import to the top 2023-03-10 10:51:12 -03:00
deepdiffuser
9fbd60bf22 add no_split_module_classes to prevent tensor split error 2023-03-10 05:30:47 -08:00
deepdiffuser
ab47044459 add multi-gpu support for 4bit gptq LLaMA 2023-03-10 04:52:45 -08:00
rohvani
2ac2913747 fix reference issue 2023-03-09 20:13:23 -08:00
rohvani
826e297b0e add llama-65b-4bit support & multiple pt paths 2023-03-09 18:31:32 -08:00
oobabooga
9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga
74102d5ee4 Insert to the path instead of appending 2023-03-09 20:51:22 -03:00
oobabooga
2965aa1625 Check if the .pt file exists 2023-03-09 20:48:51 -03:00
oobabooga
828a524f9a Add LLaMA 4-bit support 2023-03-09 15:50:26 -03:00
oobabooga
59b5f7a4b7 Improve usage of stopping_criteria 2023-03-08 12:13:40 -03:00
oobabooga
add9330e5e Bug fixes 2023-03-08 11:26:29 -03:00
Xan
5648a41a27 Merge branch 'main' of https://github.com/xanthousm/text-generation-webui 2023-03-08 22:08:54 +11:00
Xan
ad6b699503 Better TTS with autoplay
- Adds "still_streaming" to shared module for extensions to know if generation is complete
- Changed TTS extension with new options:
   - Show text under the audio widget
   - Automatically play the audio once text generation finishes
   - manage the generated wav files (only keep files for finished generations, optional max file limit)
   - [wip] ability to change voice pitch and speed
- added 'tensorboard' to requirements, since Python raised "tensorboard not found" errors after a fresh installation.
2023-03-08 22:02:17 +11:00
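A sketch of how a TTS extension can use that flag: synthesize audio only once generation has finished, instead of on every streamed chunk (the flag wiring is simplified here; in the webui it lives in the shared module):

```
still_streaming = True  # set by the generation loop, cleared when it finishes

def output_modifier(text):
    if still_streaming:
        return text  # mid-stream: pass the partial text through untouched
    return text + "\n[audio player for the finished reply goes here]"
```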
oobabooga
33fb6aed74 Minor bug fix 2023-03-08 03:08:16 -03:00
oobabooga
ad2970374a Readability improvements 2023-03-08 03:00:06 -03:00
oobabooga
72d539dbff Better separate the FlexGen case 2023-03-08 02:54:47 -03:00
oobabooga
0e16c0bacb Remove redeclaration of a function 2023-03-08 02:50:49 -03:00
oobabooga
ab50f80542 New text streaming method (much faster) 2023-03-08 02:46:35 -03:00
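The general trick behind callback-based streaming is to run the blocking generate call in a background thread and convert its per-token callback into a Python generator via a queue. A generic sketch under that assumption, not necessarily the webui's exact implementation:

```
import queue
import threading

def stream_tokens(generate_fn):
    q = queue.Queue()
    done = object()  # sentinel marking the end of generation

    def worker():
        generate_fn(q.put)  # generate_fn reports each token via the callback
        q.put(done)

    threading.Thread(target=worker, daemon=True).start()
    while (token := q.get()) is not done:
        yield token

# usage with a stand-in for the blocking generate call:
def fake_generate(callback):
    for t in ["Hello", ",", " world"]:
        callback(t)

for tok in stream_tokens(fake_generate):
    print(tok, end="")
```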
oobabooga
8e89bc596b Fix encode() for RWKV 2023-03-07 23:15:46 -03:00
oobabooga
19a34941ed Add proper streaming to RWKV 2023-03-07 18:17:56 -03:00
oobabooga
8660227e1b Add top_k to RWKV 2023-03-07 17:24:28 -03:00
oobabooga
153dfeb4dd Add --rwkv-cuda-on parameter, bump rwkv version 2023-03-06 20:12:54 -03:00
oobabooga
6904a507c6 Change some parameters 2023-03-06 16:29:43 -03:00
oobabooga
20bd645f6a Fix bug in multigpu setups (attempt 3) 2023-03-06 15:58:18 -03:00
oobabooga
09a7c36e1b Minor improvement while running custom models 2023-03-06 15:36:35 -03:00
oobabooga
24c4c20391 Fix bug in multigpu setups (attempt #2) 2023-03-06 15:23:29 -03:00
oobabooga
d88b7836c6 Fix bug in multigpu setups 2023-03-06 14:58:30 -03:00
oobabooga
5bed607b77 Increase repetition frequency/penalty for RWKV 2023-03-06 14:25:48 -03:00