oobabooga
641e1a09a7
Don't flash when selecting a new prompt
2023-03-27 14:48:43 -03:00
oobabooga
268abd1cba
Add some space in notebook mode
2023-03-27 13:52:12 -03:00
oobabooga
af65c12900
Change Stop button behavior
2023-03-27 13:23:59 -03:00
oobabooga
572bafcd24
Less verbose message
2023-03-27 12:43:37 -03:00
oobabooga
202e981d00
Make Generate/Stop buttons smaller in notebook mode
2023-03-27 12:30:57 -03:00
oobabooga
57345b8f30
Add prompt loading/saving menus + reorganize interface
2023-03-27 12:16:37 -03:00
oobabooga
95c97e1747
Unload the model using the "Remove all" button
2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093
Merge branch 'main' into Brawlence-main
2023-03-26 23:40:51 -03:00
oobabooga
1c77fdca4c
Change notebook mode appearance
2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**
See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
oobabooga
d8e950d6bd
Don't load the model twice when using --lora
2023-03-24 16:30:32 -03:00
oobabooga
fd99995b01
Make the Stop button more consistent in chat mode
2023-03-24 15:59:27 -03:00
oobabooga
9bdb3c784d
Minor fix
2023-03-23 22:02:40 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
Φφ
483d173d23
Code reuse + indication
Now shows the message in the console when unloading weights. Also reload_model() calls unload_model() first to free the memory so that multiple reloads won't overfill it.
2023-03-23 07:06:26 +03:00
Φφ
1917b15275
Unload and reload models on request
2023-03-23 07:06:26 +03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab
2023-03-22 15:40:20 -03:00
oobabooga
4d701a6eb9
Create a mirror for the preset menu
2023-03-19 12:51:47 -03:00
oobabooga
20f5b455bf
Add parameters reference #386 #331
2023-03-17 20:19:04 -03:00
oobabooga
a717fd709d
Sort the imports
2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74
Remove LoRA tab, move it into the Parameters menu
2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e
Several QoL changes related to LoRA
2023-03-17 11:24:52 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
38d7017657
Add all command-line flags to "Interface mode"
2023-03-16 12:44:03 -03:00
oobabooga
d54f3f4a34
Add no-stream checkbox to the interface
2023-03-16 10:19:00 -03:00
oobabooga
25a00eaf98
Add "Experimental" warning
2023-03-15 23:43:35 -03:00
oobabooga
599d3139fd
Increase the reload timeout a bit
2023-03-15 23:34:08 -03:00
oobabooga
4d64a57092
Add Interface mode tab
2023-03-15 23:29:56 -03:00
oobabooga
ffb898608b
Mini refactor
2023-03-15 20:44:34 -03:00
oobabooga
67d62475dc
Further reorganize chat UI
2023-03-15 18:56:26 -03:00
oobabooga
c1959c26ee
Show/hide the extensions block using javascript
2023-03-15 16:35:28 -03:00
oobabooga
348596f634
Fix broken extensions
2023-03-15 15:11:16 -03:00
oobabooga
658849d6c3
Move a checkbutton
2023-03-15 13:29:00 -03:00
oobabooga
d30a14087f
Further reorganize the UI
2023-03-15 13:24:54 -03:00
oobabooga
ffc6cb3116
Merge pull request #325 from Ph0rk0z/fix-RWKV-Names
Fix RWKV names
2023-03-15 12:56:21 -03:00
oobabooga
1413931705
Add a header bar and redesign the interface (#293)
2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6
Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
Forkoz
3b62bd180d
Remove PTH extension from RWKV
When loading, the current model field was blank unless you typed it out.
2023-03-14 21:23:39 +00:00
Forkoz
f0f325eac1
Remove Json from loading
No more 20B tokenizer.
2023-03-14 21:21:47 +00:00
oobabooga
72d207c098
Remove the chat API
It is not implemented, has not been tested, and is causing confusion.
2023-03-14 16:31:27 -03:00
oobabooga
a95592fc56
Add back a progress indicator to --no-stream
2023-03-12 20:38:40 -03:00
oobabooga
bcf0075278
Merge pull request #235 from xanthousm/Quality_of_life-main
--auto-launch and "Is typing..."
2023-03-12 03:12:56 -03:00
oobabooga
92fe947721
Merge branch 'main' into new-streaming
2023-03-11 19:59:45 -03:00
oobabooga
2743dd736a
Add *Is typing...* to impersonate as well
2023-03-11 10:50:18 -03:00
Xan
96c51973f9
--auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
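The `--auto-launch` flag described above can be wired up roughly as below. This is a hedged sketch: the parser setup is hypothetical, though Gradio's `launch()` does accept an `inbrowser` parameter for opening the default browser.

```python
import argparse


def build_parser():
    """Build a parser exposing the --auto-launch flag."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--auto-launch",
        action="store_true",
        help="Open the web UI in the default browser when ready.",
    )
    return parser


args = build_parser().parse_args(["--auto-launch"])
# The flag would then be forwarded to Gradio's launch() method:
#   demo.launch(inbrowser=args.auto_launch)
print(args.auto_launch)  # True
```

With `action="store_true"` the flag defaults to `False`, so browser auto-opening stays opt-in.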
oobabooga
9849aac0f1
Don't show .pt models in the list
2023-03-09 21:54:50 -03:00
oobabooga
038e90765b
Rename to "Text generation web UI"
2023-03-09 09:44:08 -03:00
jtang613
807a41cf87
Let's propose a name besides "Gradio"
2023-03-08 21:02:25 -05:00
oobabooga
ab50f80542
New text streaming method (much faster)
2023-03-08 02:46:35 -03:00
oobabooga
bf56b6c1fb
Load settings.json without the need for --settings settings.json
This is for setting UI defaults.
2023-03-06 10:57:45 -03:00