oobabooga
37d4ad012b
Add a button for rendering markdown for any model
2023-05-25 11:59:27 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
Gabriel Terrien
fc116711b0
FIX save_model_settings function to also update shared.model_config ( #2282 )
2023-05-24 10:01:07 -03:00
flurb18
d37a28730d
Beginning of multi-user support ( #2262 )
...
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag ( #2283 )
2023-05-23 20:39:26 -03:00
oobabooga
8b9ba3d7b4
Fix a typo
2023-05-22 20:13:03 -03:00
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json ( #2288 )
2023-05-22 19:45:11 -03:00
oobabooga
c5446ae0e2
Fix a link
2023-05-22 19:38:34 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp ( #2287 )
2023-05-22 19:37:24 -03:00
oobabooga
ec7437f00a
Better way to toggle light/dark mode
2023-05-22 03:19:01 -03:00
oobabooga
d46f5a58a3
Add a button for toggling dark/light mode
2023-05-22 03:11:44 -03:00
oobabooga
753f6c5250
Attempt at making interface restart more robust
2023-05-22 00:26:07 -03:00
oobabooga
30225b9dd0
Fix --no-stream queue bug
2023-05-22 00:02:59 -03:00
oobabooga
288912baf1
Add a description for the extensions checkbox group
2023-05-21 23:33:37 -03:00
oobabooga
6e77844733
Add a description for penalty_alpha
2023-05-21 23:09:30 -03:00
oobabooga
e3d578502a
Improve "Chat settings" tab appearance a bit
2023-05-21 22:58:14 -03:00
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
oobabooga
d7fabe693d
Reorganize parameters tab
2023-05-21 16:24:47 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters ( #2258 )
2023-05-21 15:11:57 -03:00
Matthew McAllister
ab6acddcc5
Add Save/Delete character buttons ( #1870 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
HappyWorldGames
a3e9769e31
Added an audible notification after text generation in the web UI ( #1277 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-19 23:16:06 -03:00
oobabooga
f052ab9c8f
Fix setting pre_layer from within the ui
2023-05-17 23:17:44 -03:00
oobabooga
fd743a0207
Small change
2023-05-17 02:34:29 -03:00
LoopLooter
aeb1b7a9c5
Add a feature to save prompts with custom names ( #1583 )
...
---------
Co-authored-by: LoopLooter <looplooter>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-17 02:30:45 -03:00
oobabooga
85f74961f9
Update "Interface mode" tab
2023-05-17 01:57:51 -03:00
oobabooga
ce21804ec7
Allow extensions to define a new tab
2023-05-17 01:31:56 -03:00
oobabooga
a84f499718
Allow extensions to define custom CSS and JS
2023-05-17 00:30:54 -03:00
oobabooga
824fa8fc0e
Attempt at making interface restart more robust
2023-05-16 22:27:43 -03:00
oobabooga
7584d46c29
Refactor models.py ( #2113 )
2023-05-16 19:52:22 -03:00
oobabooga
5cd6dd4287
Fix no-mmap bug
2023-05-16 17:35:49 -03:00
oobabooga
89e37626ab
Reorganize chat settings tab
2023-05-16 17:22:59 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fix reloading of llama.cpp models ( #2087 )
2023-05-15 19:51:23 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode ( #2049 )
2023-05-14 10:43:55 -03:00
oobabooga
437d1c7ead
Fix bug in save_model_settings
2023-05-12 14:33:00 -03:00
oobabooga
146a9cb393
Allow superbooga to download URLs in parallel
2023-05-12 14:19:55 -03:00
oobabooga
e283ddc559
Change how spaces are handled in continue/generation attempts
2023-05-12 12:50:29 -03:00
oobabooga
5eaa914e1b
Fix settings.json being ignored because of config.yaml
2023-05-12 06:09:45 -03:00
oobabooga
a77965e801
Make the regex for "Save settings for this model" exact
2023-05-12 00:43:13 -03:00
oobabooga
f7dbddfff5
Add a variable for tts extensions to use
2023-05-11 16:12:46 -03:00
oobabooga
638c6a65a2
Refactor chat functions ( #2003 )
2023-05-11 15:37:04 -03:00
oobabooga
e5b1547849
Fix reload model button
2023-05-10 14:44:25 -03:00
oobabooga
3316e33d14
Remove unused code
2023-05-10 11:59:59 -03:00
oobabooga
cd36b8f739
Remove space
2023-05-10 01:41:33 -03:00
oobabooga
bdf1274b5d
Remove duplicate code
2023-05-10 01:34:04 -03:00
oobabooga
3913155c1f
Style improvements ( #1957 )
2023-05-09 22:49:39 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) ( #1741 )
2023-05-09 20:18:02 -03:00
oobabooga
13e7ebfc77
Change a comment
2023-05-09 15:56:32 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model ( #1762 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Kamil Szurant
641500dcb9
Use current input for Impersonate (continue impersonate feature) ( #1147 )
2023-05-09 02:37:42 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles ( #1917 )
2023-05-08 12:35:03 -03:00
Matthew McAllister
0c048252b5
Fix character menu when default chat mode is 'instruct' ( #1873 )
2023-05-07 23:50:38 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes ( #1896 )
2023-05-07 23:47:02 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions ( #1817 )
2023-05-05 18:53:03 -03:00
Tom Jobbins
876fbb97c0
Allow downloading model from HF branch via UI ( #1662 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-05 13:59:01 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
Tom Jobbins
3c67fc0362
Allow groupsize 1024, needed for larger models (e.g. 30B) to lower VRAM usage ( #1660 )
2023-05-02 00:46:26 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
f39c99fa14
Load more than one LoRA with --lora, fix a bug
2023-04-25 22:58:48 -03:00
oobabooga
b6af2e56a2
Add --character flag, add character to settings.json
2023-04-24 13:19:42 -03:00
oobabooga
caaa556159
Move extensions block definition to the bottom
2023-04-24 03:30:35 -03:00
oobabooga
b1ee674d75
Make interface state (mostly) persistent on page reload
2023-04-24 03:05:47 -03:00
oobabooga
47809e28aa
Minor changes
2023-04-24 01:04:48 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints ( #990 )
...
Previous title: Add api_streaming extension and update api-example-stream to use it
* Merge with latest main
* Add parameter capturing encoder_repetition_penalty
* Change some defaults, minor fixes
* Add --api, --public-api flags
* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
* Update on_start message for blocking_api; it should say 'non-streaming' and not 'streaming'
* Update the API examples
* Change a comment
* Update README
* Remove the gradio API
* Remove unused import
* Minor change
* Remove unused import
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
oobabooga
2dca8bb25e
Sort imports
2023-04-21 17:20:59 -03:00
oobabooga
c238ba9532
Add a 'Count tokens' button
2023-04-21 17:18:34 -03:00
oobabooga
2d766d2e19
Improve notebook mode button sizes
2023-04-21 02:37:58 -03:00
oobabooga
b4af319fa2
Add a workaround for GALACTICA on some systems
2023-04-19 01:43:10 -03:00
oobabooga
61126f4674
Change the button styles
2023-04-19 00:56:24 -03:00
oobabooga
649e4017a5
Style improvements
2023-04-19 00:36:28 -03:00
oobabooga
c58c1d89bd
Clean method to prevent gradio from phoning home
2023-04-18 03:56:20 -03:00
oobabooga
e1b80e6fe6
Comment the gradio patch
2023-04-18 01:57:59 -03:00
oobabooga
36f7c022f2
Rename a file
2023-04-18 01:38:33 -03:00
oobabooga
00186f76f4
Monkey patch gradio to prevent it from calling home
2023-04-18 01:13:16 -03:00
oobabooga
c3dc348d1c
Don't show 'None' in the LoRA list
2023-04-17 13:52:23 -03:00
oobabooga
209fcd21d5
Reorganize Parameters tab
2023-04-17 00:33:22 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model ( #1218 )
2023-04-16 14:24:49 -03:00
oobabooga
a9c7ef4159
Exclude yaml files from model list
2023-04-16 12:47:30 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
...
* Remove eval of API input
* Remove unnecessary eval/exec for security
* Use ast.literal_eval
* Use ast.literal_eval
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
oobabooga
ac189011cb
Add "Save current settings for this model" button
2023-04-15 12:54:02 -03:00
oobabooga
b9dcba7762
Don't overwrite --gpu_memory on boot (#1237/#1235)
2023-04-15 11:59:31 -03:00
oobabooga
628f8e6168
Reorganize chat buttons
2023-04-14 23:17:15 -03:00
oobabooga
c4aa1a42b1
Fix chat history downloading
2023-04-14 19:38:30 -03:00
oobabooga
3a337cfded
Use argparse defaults
2023-04-14 15:35:06 -03:00
oobabooga
43e01282b3
Don't override user initial wbits/groupsize
2023-04-14 15:24:03 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support ( #1103 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
ebb81eb176
Add Undo/Regenerate buttons to notebook mode
2023-04-14 14:34:56 -03:00
oobabooga
581f739b2f
Reorganize
2023-04-14 11:38:26 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name ( #1167 )
2023-04-14 11:07:28 -03:00
v0xie
9d66957207
Add --listen-host launch option ( #1122 )
2023-04-13 21:35:08 -03:00
oobabooga
c13e8651ad
Suppress "TypedStorage is deprecated" warnings
2023-04-13 12:09:42 -03:00
oobabooga
17ce7c8671
Suppress annoying bitsandbytes welcome message
2023-04-13 12:04:39 -03:00
oobabooga
fbb448ce4f
If only 1 model is available, load that model
2023-04-13 11:44:10 -03:00
oobabooga
5744b31593
Reorganize some buttons
2023-04-13 11:05:47 -03:00
Xan
6e19ae4b2f
Fix gpt-j model type in UI ( #1129 )
2023-04-13 10:17:20 -03:00
oobabooga
ddbd237ec9
Better way to sort the models/loras
2023-04-12 22:56:32 -03:00
oobabooga
7dfbe54f42
Add --model-menu option
2023-04-12 21:24:26 -03:00
oobabooga
86c10c6f0c
Add some labels
2023-04-12 18:39:21 -03:00
oobabooga
0baa50bcc4
Update a comment
2023-04-12 18:26:15 -03:00
oobabooga
5d1d0bd11f
Add the GPU index to the label
2023-04-12 18:24:19 -03:00
oobabooga
13789fd200
Handle the no-GPU / multi-GPU cases
2023-04-12 18:21:14 -03:00
oobabooga
1566d8e344
Add model settings to the Models tab
2023-04-12 17:20:18 -03:00
oobabooga
80f4eabb2a
Fix send_pictures extension
2023-04-12 10:27:06 -03:00
oobabooga
2289d3686f
Update API example
2023-04-11 22:43:43 -03:00
oobabooga
f2be87235d
Comment lines that were causing undefined behavior
2023-04-11 22:40:04 -03:00
oobabooga
8265d45db8
Add send dummy message/reply buttons
...
Useful for starting a new reply.
2023-04-11 22:21:41 -03:00
oobabooga
f2ec880e81
Auto-scroll to the bottom when streaming is over in notebook/default modes
2023-04-11 20:58:10 -03:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token
2023-04-11 18:46:06 -03:00
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes ( #903 )
2023-04-11 12:30:06 -03:00
oobabooga
0f212093a3
Refactor the UI
...
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.
The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
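For illustration only, a minimal sketch of the pattern the commit above describes, using hypothetical function names (gather_interface_values, generate_reply_wrapper) rather than the project's actual ones:

def gather_interface_values(temperature, top_p, max_new_tokens):
    # Collect the current UI component values into a single dictionary.
    return {'temperature': temperature, 'top_p': top_p, 'max_new_tokens': max_new_tokens}

def generate_reply_wrapper(prompt, interface_state):
    # Downstream functions read from the shared dictionary, so adding a new
    # UI element only means adding a key, not threading a new parameter.
    print(f"{prompt!r} -> max_new_tokens={interface_state['max_new_tokens']}, "
          f"temperature={interface_state['temperature']}")

state = gather_interface_values(0.7, 0.9, 200)
generate_reply_wrapper('Hello', state)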
oobabooga
58b34c0841
Fix chat_prompt_size
2023-04-10 20:06:42 -03:00
Alex "mcmonkey" Goodwin
0caf718a21
add on-page documentation to parameters ( #1008 )
2023-04-10 17:19:12 -03:00
oobabooga
bd04ff27ad
Make the bos token optional
2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1
Don't treat Instruct mode histories as regular histories
...
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
d679c4be13
Change a label
2023-04-10 11:44:37 -03:00
oobabooga
45244ed125
More descriptive download info
2023-04-10 11:42:12 -03:00
oobabooga
11b23db8d4
Remove unused imports
2023-04-10 11:37:42 -03:00
oobabooga
2c14df81a8
Use download-model.py to download the model
2023-04-10 11:36:39 -03:00
oobabooga
c6e9ba20a4
Merge branch 'main' into UsamaKenway-main
2023-04-10 11:14:03 -03:00
oobabooga
d29f4624e9
Add a Continue button to chat mode
2023-04-09 20:04:16 -03:00
oobabooga
f91d3a3ff4
server.py readability
2023-04-09 14:46:32 -03:00
Usama Kenway
ebdf4c8c12
Fix path
2023-04-09 16:53:21 +05:00
Usama Kenway
7436dd5b4a
Add a menu to download custom models (from Hugging Face) to the Model tab
2023-04-09 16:11:43 +05:00
oobabooga
cb169d0834
Minor formatting changes
2023-04-08 17:34:07 -03:00
oobabooga
2f16d0afca
Remove redundant events
2023-04-08 17:32:36 -03:00
oobabooga
a6a00cb82f
Properly concatenate chat events
2023-04-08 17:25:21 -03:00
Φφ
ffd102e5c0
SD Api Pics extension, v.1.1 ( #596 )
2023-04-07 21:36:04 -03:00
oobabooga
5543a5089d
Auto-submit the whisper extension transcription
2023-04-07 15:57:51 -03:00
oobabooga
1dc464dcb0
Sort imports
2023-04-07 14:42:03 -03:00
oobabooga
962e33dc10
Change button style
2023-04-07 12:22:14 -03:00
Maya
744bf7cbf2
Get rid of type parameter warning ( #883 )
...
Fix annoying `The 'type' parameter has been deprecated. Use the Number component instead` warning
2023-04-07 11:17:16 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability ( #862 )
2023-04-07 00:15:45 -03:00
oobabooga
5b301d9a02
Create a Model tab
2023-04-06 01:54:05 -03:00
oobabooga
4a400320dd
Clean up
2023-04-06 01:47:00 -03:00
Randell Miller
641646a801
Fix crash if missing instructions directory ( #812 )
2023-04-06 01:24:22 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API
2023-04-06 01:22:15 -03:00
oobabooga
7f66421369
Fix loading characters
2023-04-05 14:22:32 -03:00
oobabooga
90141bc1a8
Fix saving prompts on Windows
2023-04-05 14:08:54 -03:00
oobabooga
cf2c4e740b
Disable gradio analytics globally
2023-04-05 14:05:50 -03:00
oobabooga
e722c240af
Add Instruct mode
2023-04-05 13:54:50 -03:00
oobabooga
ae1fe45bc0
One more cache reset
2023-04-04 23:15:57 -03:00
oobabooga
80dfba05f3
Better crop/resize cached images
2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d
Show profile pictures in the Character tab
2023-04-04 22:28:49 -03:00
oobabooga
8de22ac82a
Merge character upload tabs
2023-04-03 18:01:45 -03:00
oobabooga
3012bdb5e0
Fix a label
2023-04-03 12:20:53 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly ( #743 )
...
* Add greetings field
* add greeting field and make it interactive
* Minor changes
* Fix a bug
* Simplify clear_chat_log
* Change a label
* Minor change
* Simplifications
* Simplification
* Simplify loading the default character history
* Fix regression
---------
Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
2023-04-03 00:56:14 -03:00
TheTerrasque
2157bb4319
New YAML character format ( #337 from TheTerrasque/feature/yaml-characters )
...
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
0dc6fa038b
Use gr.State() to store the user input
2023-04-02 18:05:21 -03:00
Brian O'Connor
d0f9625f0b
Clear text input for chat
...
Add logic to clear the textbox for chat input when the user submits or hits the generate button.
2023-04-01 21:48:24 -04:00
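As a rough sketch only (component names are illustrative, not the project's), the behavior described above can be expressed in gradio by returning an empty string for the textbox output when the submit event fires:

import gradio as gr

def send_message(user_input, history):
    history = history + [(user_input, '...model reply...')]
    return '', history  # returning '' clears the input textbox after submission

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    textbox = gr.Textbox(label='Chat input')
    textbox.submit(send_message, [textbox, chatbot], [textbox, chatbot])

# demo.launch()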
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
8c51b405e4
Progress towards generalizing Interface mode tab
2023-03-31 23:41:10 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
oobabooga
fd72afd8e7
Increase the textbox sizes
2023-03-31 00:43:00 -03:00
oobabooga
bd65940a48
Increase --chat box height
2023-03-30 00:43:49 -03:00
oobabooga
55755e27b9
Don't hardcode prompts in the settings dict/json
2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160
Adapt to the new model names
2023-03-29 21:47:36 -03:00
oobabooga
cac577d99f
Fix interface reloading
2023-03-28 13:25:58 -03:00
Alex "mcmonkey" Goodwin
9cc811a0e6
fix LoRA path typo in #549
2023-03-27 22:16:40 -07:00
Alex "mcmonkey" Goodwin
31f04dc615
Merge branch 'main' into add-train-lora-tab
2023-03-27 20:03:30 -07:00
oobabooga
005f552ea3
Some simplifications
2023-03-27 23:29:52 -03:00
oobabooga
fde92048af
Merge branch 'main' into catalpaaa-lora-and-model-dir
2023-03-27 23:16:44 -03:00
oobabooga
2f0571bfa4
Small style changes
2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772
Merge branch 'main' into mcmonkey4eva-add-train-lora-tab
2023-03-27 21:05:44 -03:00
oobabooga
641e1a09a7
Don't flash when selecting a new prompt
2023-03-27 14:48:43 -03:00
oobabooga
268abd1cba
Add some space in notebook mode
2023-03-27 13:52:12 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850
add some outputs to indicate progress updates (sorta)
...
An actual progress bar is still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900
Change Stop button behavior
2023-03-27 13:23:59 -03:00
oobabooga
572bafcd24
Less verbose message
2023-03-27 12:43:37 -03:00
Alex "mcmonkey" Goodwin
2afe1c13c1
move Training to before Interface mode
...
as Interface Mode seems to be a core 'settings' page that naturally belongs at the very end
2023-03-27 08:32:32 -07:00
oobabooga
202e981d00
Make Generate/Stop buttons smaller in notebook mode
2023-03-27 12:30:57 -03:00
Alex "mcmonkey" Goodwin
e439228ed8
Merge branch 'main' into add-train-lora-tab
2023-03-27 08:21:19 -07:00
oobabooga
57345b8f30
Add prompt loading/saving menus + reorganize interface
2023-03-27 12:16:37 -03:00
oobabooga
95c97e1747
Unload the model using the "Remove all" button
2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093
Merge branch 'main' into Brawlence-main
2023-03-26 23:40:51 -03:00
oobabooga
1c77fdca4c
Change notebook mode appearance
2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size ( #530 )
...
**Warning: old 4-bit weights will not work anymore!**
See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Alex "mcmonkey" Goodwin
566898a79a
Initial LoRA training tab
2023-03-25 12:08:26 -07:00
catalpaaa
d51cb8292b
Update server.py
...
yea i should go to bed
2023-03-24 17:36:31 -07:00
catalpaaa
9e2963e0c8
Update server.py
2023-03-24 17:35:45 -07:00
catalpaaa
ec2a1facee
Update server.py
2023-03-24 17:34:33 -07:00
catalpaaa
b37c54edcf
lora-dir, model-dir and login auth
...
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
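A minimal sketch, assuming a hypothetical file name, of how credentials in the "u:pw,u:pw,..." format described above could be parsed into the (username, password) pairs that gradio's auth option expects:

def read_auth_file(path='gradio_auth.txt'):
    with open(path, encoding='utf-8') as f:
        content = f.read().strip()
    # "user1:pw1,user2:pw2" -> [('user1', 'pw1'), ('user2', 'pw2')]
    return [tuple(pair.split(':', 1)) for pair in content.split(',') if pair]

# Usage (hypothetical): demo.launch(auth=read_auth_file('gradio_auth.txt'))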
oobabooga
d8e950d6bd
Don't load the model twice when using --lora
2023-03-24 16:30:32 -03:00
oobabooga
fd99995b01
Make the Stop button more consistent in chat mode
2023-03-24 15:59:27 -03:00
oobabooga
9bdb3c784d
Minor fix
2023-03-23 22:02:40 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
Φφ
483d173d23
Code reuse + indication
...
Now shows a message in the console when unloading weights. Also, reload_model() calls unload_model() first to free memory, so that multiple reloads won't overfill it.
2023-03-23 07:06:26 +03:00
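An illustrative sketch of the unload-before-reload pattern the commit above describes; the real code lives in the project's model-handling module and differs in detail:

import gc

model = object()  # stands in for the currently loaded weights

def unload_model():
    global model
    model = None
    gc.collect()  # free the old weights (the real code would also clear the CUDA cache)
    print('Model weights unloaded.')

def reload_model():
    global model
    unload_model()  # release memory first so repeated reloads do not pile it up
    model = object()  # placeholder for the actual loading call
    print('Model reloaded.')

reload_model()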
Φφ
1917b15275
Unload and reload models on request
2023-03-23 07:06:26 +03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab
2023-03-22 15:40:20 -03:00
oobabooga
4d701a6eb9
Create a mirror for the preset menu
2023-03-19 12:51:47 -03:00
oobabooga
20f5b455bf
Add parameters reference #386 #331
2023-03-17 20:19:04 -03:00
oobabooga
a717fd709d
Sort the imports
2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74
Remove LoRA tab, move it into the Parameters menu
2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e
Several QoL changes related to LoRA
2023-03-17 11:24:52 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
38d7017657
Add all command-line flags to "Interface mode"
2023-03-16 12:44:03 -03:00
oobabooga
d54f3f4a34
Add no-stream checkbox to the interface
2023-03-16 10:19:00 -03:00
oobabooga
25a00eaf98
Add "Experimental" warning
2023-03-15 23:43:35 -03:00
oobabooga
599d3139fd
Increase the reload timeout a bit
2023-03-15 23:34:08 -03:00