Commit Graph

393 Commits

Author SHA1 Message Date
oobabooga
57345b8f30 Add prompt loading/saving menus + reorganize interface 2023-03-27 12:16:37 -03:00
oobabooga
95c97e1747 Unload the model using the "Remove all" button 2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093 Merge branch 'main' into Brawlence-main 2023-03-26 23:40:51 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570 Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here for how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
catalpaaa
d51cb8292b Update server.py
yea i should go to bed
2023-03-24 17:36:31 -07:00
catalpaaa
9e2963e0c8 Update server.py 2023-03-24 17:35:45 -07:00
catalpaaa
ec2a1facee Update server.py 2023-03-24 17:34:33 -07:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
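The "u:pw,u:pw,..." credentials format in that commit is simple enough to illustrate. Below is a minimal sketch of parsing such a file into the (username, password) pairs that Gradio-style authentication accepts; the function and file names are illustrative assumptions, not the repository's actual code.

```python
# Illustrative sketch only: parse a "u:pw,u:pw,..." credentials file into
# (username, password) tuples. Function and file names are assumptions,
# not the repository's actual implementation.
from pathlib import Path

def load_auth_pairs(path):
    text = Path(path).read_text(encoding="utf-8").strip()
    pairs = []
    for entry in text.split(","):
        entry = entry.strip()
        if not entry:
            continue  # tolerate trailing commas and blank entries
        user, _, password = entry.partition(":")
        pairs.append((user, password))
    return pairs

# Usage sketch: Gradio accepts a list of (user, password) tuples for auth,
# e.g. demo.launch(auth=load_auth_pairs("credentials.txt"))
```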
oobabooga
d8e950d6bd Don't load the model twice when using --lora 2023-03-24 16:30:32 -03:00
oobabooga
fd99995b01 Make the Stop button more consistent in chat mode 2023-03-24 15:59:27 -03:00
oobabooga
9bdb3c784d Minor fix 2023-03-23 22:02:40 -03:00
oobabooga
bf22d16ebc Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
Φφ
483d173d23 Code reuse + indication
Now shows a message in the console when unloading weights. Also, reload_model() calls unload_model() first to free memory, so that multiple reloads won't overfill it.
2023-03-23 07:06:26 +03:00
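The unload-before-reload pattern described in that message can be sketched in a few lines; the globals and the load_model() call below are assumptions for illustration, not the project's actual module layout.

```python
# Minimal sketch of the pattern described above, with assumed names
# (model, tokenizer, load_model); not the project's actual code.
import gc
import torch

model, tokenizer = None, None

def unload_model():
    global model, tokenizer
    model, tokenizer = None, None
    gc.collect()                  # drop Python references
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached GPU memory
    print("Model weights unloaded.")

def reload_model():
    global model, tokenizer
    unload_model()  # free memory first so repeated reloads don't accumulate
    model, tokenizer = load_model()  # load_model() is assumed to exist elsewhere
```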
Φφ
1917b15275 Unload and reload models on request 2023-03-23 07:06:26 +03:00
wywywywy
61346b88ea Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
4d701a6eb9 Create a mirror for the preset menu 2023-03-19 12:51:47 -03:00
oobabooga
20f5b455bf Add parameters reference #386 #331 2023-03-17 20:19:04 -03:00
oobabooga
a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
38d7017657 Add all command-line flags to "Interface mode" 2023-03-16 12:44:03 -03:00
oobabooga
d54f3f4a34 Add no-stream checkbox to the interface 2023-03-16 10:19:00 -03:00
oobabooga
25a00eaf98 Add "Experimental" warning 2023-03-15 23:43:35 -03:00
oobabooga
599d3139fd Increase the reload timeout a bit 2023-03-15 23:34:08 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
ffb898608b Mini refactor 2023-03-15 20:44:34 -03:00
oobabooga
67d62475dc Further reorganize chat UI 2023-03-15 18:56:26 -03:00
oobabooga
c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga
348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga
658849d6c3 Move a checkbutton 2023-03-15 13:29:00 -03:00
oobabooga
d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga
ffc6cb3116 Merge pull request #325 from Ph0rk0z/fix-RWKV-Names
Fix rwkv names
2023-03-15 12:56:21 -03:00
oobabooga
1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
Forkoz
3b62bd180d Remove PTH extension from RWKV
When loading, the current model was blank unless you typed it out.
2023-03-14 21:23:39 +00:00
Forkoz
f0f325eac1 Remove Json from loading
no more 20b tokenizer
2023-03-14 21:21:47 +00:00
oobabooga
72d207c098 Remove the chat API
It is not implemented, has not been tested, and this is causing confusion.
2023-03-14 16:31:27 -03:00
oobabooga
a95592fc56 Add back a progress indicator to --no-stream 2023-03-12 20:38:40 -03:00
oobabooga
bcf0075278 Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
oobabooga
92fe947721 Merge branch 'main' into new-streaming 2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a Add *Is typing...* to impersonate as well 2023-03-11 10:50:18 -03:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
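The "*Is typing...*" behavior is a common streaming-UI pattern: append the user's message with a placeholder reply, yield once so the interface updates immediately, then replace the placeholder with the generated text. A hedged sketch with assumed names follows; it is not the project's actual chat.py.

```python
# Sketch of the placeholder pattern described above; generate_fn and the
# history structure are assumptions, not the project's actual chat.py.
def chat_turn(history, user_message, generate_fn):
    history.append([user_message, "*Is typing...*"])
    yield history                               # UI shows the user message + placeholder right away
    history[-1][1] = generate_fn(user_message)  # generate the real reply
    yield history                               # placeholder replaced by the generated text
```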
oobabooga
9849aac0f1 Don't show .pt models in the list 2023-03-09 21:54:50 -03:00
oobabooga
038e90765b Rename to "Text generation web UI" 2023-03-09 09:44:08 -03:00
jtang613
807a41cf87 Let's propose a name besides "Gradio" 2023-03-08 21:02:25 -05:00
oobabooga
ab50f80542 New text streaming method (much faster) 2023-03-08 02:46:35 -03:00
oobabooga
bf56b6c1fb Load settings.json without the need for --settings settings.json
This is for setting UI defaults
2023-03-06 10:57:45 -03:00
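A minimal sketch of the fallback behavior described above, assuming a settings.json next to the server script; the function name and default path are illustrative, not the repository's actual code.

```python
# Sketch (assumed names): use settings.json automatically when present,
# so UI defaults no longer require passing --settings explicitly.
import json
from pathlib import Path

def load_settings(cli_path=None):
    path = Path(cli_path) if cli_path else Path("settings.json")
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))
    return {}  # fall back to built-in defaults
```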
oobabooga
bcea196c9d Bump flexgen version 2023-03-02 12:03:57 -03:00
oobabooga
169209805d Model-aware prompts and presets 2023-03-02 11:25:04 -03:00
oobabooga
99dc95e14e Minor aesthetic change 2023-03-01 19:32:04 -03:00
oobabooga
a1429d1607 Add default extensions to the settings 2023-02-28 02:20:11 -03:00
oobabooga
365e1089b3 Move some buttons 2023-02-28 01:34:07 -03:00
oobabooga
43b6ab8673 Store thumbnails as files instead of base64 strings
This improves the UI responsiveness for large histories.
2023-02-27 13:41:00 -03:00
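That change swaps inline base64 data URIs for files on disk, so large chat histories don't carry the raw image bytes in every render. A hedged sketch with assumed paths and names, not the project's actual code:

```python
# Sketch (assumed cache directory and naming): write the thumbnail to disk
# once and return its path for the chat HTML to reference, instead of
# embedding a base64 data URI in every message render.
from pathlib import Path
from PIL import Image

CACHE_DIR = Path("cache")

def thumbnail_path(image_path, name, size=(200, 200)):
    CACHE_DIR.mkdir(exist_ok=True)
    out = CACHE_DIR / f"{name}_thumbnail.png"
    if not out.exists():
        img = Image.open(image_path).convert("RGB")
        img.thumbnail(size)  # resize in place, preserving aspect ratio
        img.save(out)
    return str(out)
```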
oobabooga
611010e8af Add a confirmation to clear history 2023-02-27 11:41:21 -03:00
oobabooga
7a776ccf87 Make the gallery interactive to load characters 2023-02-26 17:19:36 -03:00
oobabooga
e91eb24649 Decrease the repetition penalty upper limit to 3 2023-02-26 01:51:59 -03:00
oobabooga
3d94ebfdd0 Change --chat colors 2023-02-26 00:51:15 -03:00
oobabooga
b3d2365d92 Rename a button 2023-02-25 16:33:46 -03:00
oobabooga
03d25c1c61 Reorder the chat buttons 2023-02-25 15:35:43 -03:00
oobabooga
e2cf4e4968 Reorder the custom parameters 2023-02-25 15:21:40 -03:00
oobabooga
381f747181 Reorganize the custom parameters for mobile usage 2023-02-25 15:17:44 -03:00
oobabooga
01acb250c5 Add a comment 2023-02-25 02:07:29 -03:00
oobabooga
7c2babfe39 Rename greed to "generation attempts" 2023-02-25 01:42:19 -03:00
oobabooga
2dfb999bf1 Add greed parameter 2023-02-25 01:31:01 -03:00
oobabooga
7a527a5581 Move "send picture" into an extension
I am not proud of how I did it for now.
2023-02-25 00:23:51 -03:00
oobabooga
e51ece21c0 Add ui() function to extensions 2023-02-24 19:00:11 -03:00
oobabooga
77f58e5dab Remove a space 2023-02-24 17:32:34 -03:00
oobabooga
c5066f1192 Rename some variables, be consistent about ' and " 2023-02-24 17:31:23 -03:00
oobabooga
78ad55641b Remove duplicate max_new_tokens parameter 2023-02-24 17:19:42 -03:00
oobabooga
65326b545a Move all gradio elements to shared (so that extensions can use them) 2023-02-24 16:46:50 -03:00
oobabooga
0a3590da8c Add a progress bar 2023-02-24 14:19:27 -03:00
oobabooga
3b8cecbab7 Reload the default chat on page refresh 2023-02-23 19:50:23 -03:00
oobabooga
f1914115d3 Fix minor issue with chat logs 2023-02-23 16:04:47 -03:00
oobabooga
2e86a1ec04 Move chat history into shared module 2023-02-23 15:11:18 -03:00
oobabooga
c87800341c Move function to extensions module 2023-02-23 14:55:21 -03:00
oobabooga
7224343a70 Improve the imports 2023-02-23 14:41:42 -03:00
oobabooga
364529d0c7 Further refactor 2023-02-23 14:31:28 -03:00
oobabooga
e46c43afa6 Move some stuff from server.py to modules 2023-02-23 13:42:23 -03:00
oobabooga
1dacd34165 Further refactor 2023-02-23 13:28:30 -03:00
oobabooga
ce7feb3641 Further refactor 2023-02-23 13:03:52 -03:00
oobabooga
98af4bfb0d Refactor the code to make it more modular 2023-02-23 12:05:25 -03:00
oobabooga
18e0ec955e Improve some descriptions in --help 2023-02-23 10:11:58 -03:00
oobabooga
c72892835a Don't show *-np models in the list of choices 2023-02-22 11:38:16 -03:00
oobabooga
044b963987 Add stop parameter for flexgen (#105) 2023-02-22 11:23:36 -03:00
oobabooga
ea21a22940 Remove redundant preset 2023-02-22 01:01:26 -03:00
oobabooga
b8b3d4139c Add --compress-weight parameter 2023-02-22 00:43:21 -03:00
oobabooga
eef6fc3cbf Add a preset for FlexGen 2023-02-21 23:33:15 -03:00
oobabooga
311404e258 Reuse disk-cache-dir parameter for flexgen 2023-02-21 22:11:05 -03:00
oobabooga
f3c75bbd64 Add --percent flag for flexgen 2023-02-21 22:08:46 -03:00
oobabooga
b83f51ee04 Add FlexGen support #92 (experimental) 2023-02-21 21:00:06 -03:00
oobabooga
444cd69c67 Fix regex bug in loading character jsons with special characters 2023-02-20 19:38:19 -03:00
oobabooga
d7a738fb7a Load any 13b/20b/30b model in 8-bit mode when no flags are supplied 2023-02-20 15:44:10 -03:00
oobabooga
77846ceef3 Minor change 2023-02-20 15:05:48 -03:00
oobabooga
e195377050 Deprecate torch dumps, move to safetensors (they load even faster) 2023-02-20 15:03:19 -03:00
oobabooga
14ffa0b418 Fix line breaks in --chat mode 2023-02-20 13:25:46 -03:00
SillyLossy
ded890c378 Escape regexp in message extraction 2023-02-19 12:55:45 +02:00
oobabooga
8c9dd95d55 Print the softprompt metadata when it is loaded 2023-02-19 01:48:23 -03:00