Commit Graph

1070 Commits

Each entry below lists the author, followed by the commit SHA1, message, and date.
oobabooga
c2cad30772 Merge branch 'main' into mcmonkey4eva-add-train-lora-tab 2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d add total time estimate 2023-03-27 10:57:27 -07:00
oobabooga
641e1a09a7 Don't flash when selecting a new prompt 2023-03-27 14:48:43 -03:00
Alex "mcmonkey" Goodwin
16ea4fc36d interrupt button 2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95 initial progress tracker in UI 2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
oobabooga
268abd1cba Add some space in notebook mode 2023-03-27 13:52:12 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850 add some outputs to indicate progress updates (sorta)
Actual progressbar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900 Change Stop button behavior 2023-03-27 13:23:59 -03:00
oobabooga
addb9777f9 Increase size of GALACTICA equations 2023-03-27 12:59:07 -03:00
oobabooga
572bafcd24 Less verbose message 2023-03-27 12:43:37 -03:00
Alex "mcmonkey" Goodwin
2afe1c13c1 move Training to before Interface mode
as Interface Mode seems to be a core 'settings' page that naturally belongs at the very end
2023-03-27 08:32:32 -07:00
Alex "mcmonkey" Goodwin
d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
oobabooga
202e981d00 Make Generate/Stop buttons smaller in notebook mode 2023-03-27 12:30:57 -03:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
8e2d94a5a1 Add saved promtps to gitignore 2023-03-27 12:21:19 -03:00
oobabooga
57345b8f30 Add prompt loading/saving menus + reorganize interface 2023-03-27 12:16:37 -03:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
b6e38e8b97
silero_tts streaming fix (#568 from Brawlence/silero_tts-fix)
silero_tts streaming fix
2023-03-26 23:59:07 -03:00
oobabooga
af603a142a
Unload models on request (#471 from Brawlence/main) 2023-03-26 23:53:39 -03:00
oobabooga
95c97e1747 Unload the model using the "Remove all" button 2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093 Merge branch 'main' into Brawlence-main 2023-03-26 23:40:51 -03:00
oobabooga
511be06dcc Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-26 22:21:29 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
9ff6a538b6 Bump gradio version
Make sure to upgrade with

`pip install -r requirements.txt --upgrade`
2023-03-26 22:11:19 -03:00
oobabooga
a04b7cf264
Merge pull request #585 from fkusche/also-download-markdown
Also download Markdown files
2023-03-26 14:51:23 -03:00
Florian Kusche
19174842b8 Also download Markdown files 2023-03-26 19:41:14 +02:00
oobabooga
8222d32240
Merge pull request #565 from mcmonkey4eva/improve-gitignore
improve/simplify gitignore
2023-03-26 13:31:45 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
8134c4b334 add training/datsets to gitignore for #570 2023-03-25 12:41:18 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
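The empty-entry stripping this commit describes can be sketched roughly as follows (the `clean_dataset` helper name is hypothetical; the real logic lives in the training module):

```python
def clean_dataset(entries):
    """Drop keys whose values are empty strings, so prompt templates can
    detect missing fields (e.g. an alpaca record with a blank 'input')."""
    cleaned = []
    for entry in entries:
        cleaned.append({k: v for k, v in entry.items() if v != ''})
    return cleaned

# Example: an alpaca-style record with an empty 'input' field.
data = [{'instruction': 'Say hi', 'input': '', 'output': 'Hi!'}]
print(clean_dataset(data))  # the empty 'input' key is removed
```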
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
Φφ
1a1e420e65 Silero_tts streaming fix
Temporarily suppress the streaming during the audio response as it would interfere with the audio (making it stutter and play anew)
2023-03-25 21:33:30 +03:00
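The "temporarily suppress the streaming" approach from this fix can be sketched as a context manager. The `settings` dict and `stream_enabled` flag here are stand-ins, not the web UI's actual shared state:

```python
from contextlib import contextmanager

# Hypothetical global settings object standing in for the UI's shared state.
settings = {'stream_enabled': True}

@contextmanager
def suppress_streaming():
    """Disable token streaming for the duration of the block, then restore
    the previous value even if an error occurs, so audio playback is not
    interrupted by partial-text updates."""
    previous = settings['stream_enabled']
    settings['stream_enabled'] = False
    try:
        yield
    finally:
        settings['stream_enabled'] = previous

with suppress_streaming():
    # Generate the audio response here without streamed text updates.
    assert settings['stream_enabled'] is False
assert settings['stream_enabled'] is True  # restored afterwards
```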
Alex "mcmonkey" Goodwin
9ccf505ccd improve/simplify gitignore
- add repositories
- remove the redundant "/*" on folders
- remove the exclusions for files that already exist
2023-03-25 10:04:00 -07:00
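The "/*" simplification in this commit amounts to entries like the following (illustrative folder name, not the repository's exact file):

```gitignore
# Before: a trailing /* ignores a folder's contents but not the folder itself
models/*
# After: a bare folder name with a trailing slash ignores the folder and
# everything inside it, which is what was intended
models/
```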
oobabooga
8c8e8b4450
Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
oobabooga
70f9565f37
Update README.md 2023-03-25 02:35:30 -03:00
oobabooga
25be9698c7
Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send messages by clicking on Generate with empty inputs.
2023-03-24 17:22:22 -03:00
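The behavior this commit describes, where clicking Generate with an empty input box produces a bot turn instead of an empty user message, can be sketched like this (hypothetical function and names; the real logic is in the chat module):

```python
def build_prompt_rows(history, user_input):
    """Append the user's input only when it is non-empty, so an empty
    Generate click simply asks the bot to continue the conversation."""
    rows = list(history)
    if user_input.strip():
        rows.append(('You', user_input))
    return rows

print(build_prompt_rows([('Bot', 'Hello!')], ''))    # no empty user row added
print(build_prompt_rows([('Bot', 'Hello!')], 'Hi'))  # user row appended
```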
oobabooga
6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00