Commit Graph

3787 Commits

Author SHA1 Message Date
oobabooga
8e2d94a5a1 Add saved prompts to gitignore 2023-03-27 12:21:19 -03:00
oobabooga
57345b8f30 Add prompt loading/saving menus + reorganize interface 2023-03-27 12:16:37 -03:00
jllllll
cb5dff0087 Update installer to use official micromamba url 2023-03-26 23:40:46 -05:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
b6e38e8b97 silero_tts streaming fix (#568 from Brawlence/silero_tts-fix)
silero_tts streaming fix
2023-03-26 23:59:07 -03:00
jllllll
bdf85ffcf9 Remove explicit pytorch installation
Fixes an issue some people were having: https://github.com/oobabooga/text-generation-webui/issues/15
I did not experience this issue on my system; for some reason, not everyone does.
2023-03-26 21:56:16 -05:00
oobabooga
af603a142a Unload models on request (#471 from Brawlence/main) 2023-03-26 23:53:39 -03:00
oobabooga
95c97e1747 Unload the model using the "Remove all" button 2023-03-26 23:47:29 -03:00
oobabooga
e07c9e3093 Merge branch 'main' into Brawlence-main 2023-03-26 23:40:51 -03:00
oobabooga
511be06dcc Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-26 22:21:29 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
9ff6a538b6 Bump gradio version
Make sure to upgrade with

`pip install -r requirements.txt --upgrade`
2023-03-26 22:11:19 -03:00
oobabooga
a04b7cf264 Merge pull request #585 from fkusche/also-download-markdown
Also download Markdown files
2023-03-26 14:51:23 -03:00
Florian Kusche
19174842b8 Also download Markdown files 2023-03-26 19:41:14 +02:00
oobabooga
8222d32240 Merge pull request #565 from mcmonkey4eva/improve-gitignore
improve/simplify gitignore
2023-03-26 13:31:45 -03:00
jllllll
6f89242094 Remove temporary fix for GPTQ-for-LLaMa
No longer necessary.
2023-03-26 03:29:14 -05:00
jllllll
6dcfcf4fed Amended fix for GPTQ-for-LLaMa
Prevents breaking 3-bit support
2023-03-26 01:00:52 -05:00
jllllll
12baa0e84b Update for latest GPTQ-for-LLaMa 2023-03-26 00:46:07 -05:00
jllllll
247e8e5b79 Fix for issue in current GPTQ-for-LLaMa. 2023-03-26 00:24:00 -05:00
oobabooga
49c10c5570 Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
8134c4b334 add training/datasets to gitignore for #570 2023-03-25 12:41:18 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
Φφ
1a1e420e65 Silero_tts streaming fix
Temporarily suppress the streaming during the audio response as it would interfere with the audio (making it stutter and play anew)
2023-03-25 21:33:30 +03:00
Alex "mcmonkey" Goodwin
9ccf505ccd improve/simplify gitignore
- add repositories
- remove the redundant "/*" on folders
- remove the exclusions for files that already exist
2023-03-25 10:04:00 -07:00
oobabooga
8c8e8b4450 Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c Merge branch 'oobabooga:main' into lora-and-model-dir 2023-03-25 01:28:33 -07:00
jllllll
ce9a5e3b53 Update install.bat
Minor fixes
2023-03-25 02:22:02 -05:00
jllllll
2e02d42682 Changed things around to allow Micromamba to work with paths containing spaces. 2023-03-25 01:26:25 -05:00
oobabooga
70f9565f37 Update README.md 2023-03-25 02:35:30 -03:00
oobabooga
25be9698c7 Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497 Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
jllllll
1e260544cd Update install.bat
Added C:\Windows\System32 to PATH to avoid issues with possibly broken Windows installs.
2023-03-24 21:25:14 -05:00
catalpaaa
d51cb8292b Update server.py
yea i should go to bed
2023-03-24 17:36:31 -07:00
catalpaaa
9e2963e0c8 Update server.py 2023-03-24 17:35:45 -07:00
catalpaaa
ec2a1facee Update server.py 2023-03-24 17:34:33 -07:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
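For illustration only, a credentials file in that "u:pw,u:pw,..." format might contain hypothetical entries like `alice:secret1,bob:secret2` (values made up, not taken from the commit).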
jllllll
fa916aa1de Update INSTRUCTIONS.txt
Added clarification on new variable added to download-model.bat.
2023-03-24 18:28:46 -05:00
jllllll
586775ad47 Update download-model.bat
Removed redundant %ModelName% variable.
2023-03-24 18:25:49 -05:00
jllllll
bddbc2f898 Update start-webui.bat
Updated virtual environment handling to use Micromamba.
2023-03-24 18:19:23 -05:00
jllllll
2604e3f7ac Update download-model.bat
Added variables for model selection and text only mode.
Updated virtual environment handling to use Micromamba.
2023-03-24 18:15:24 -05:00
jllllll
24870e51ed Update micromamba-cmd.bat
Add cd command for admin.
2023-03-24 18:12:02 -05:00
jllllll
f0c82f06c3 Add files via upload
Add script to open cmd within installation environment for easier modification.
2023-03-24 18:09:44 -05:00
oobabooga
9fa47c0eed Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c Revert models.py (accident) 2023-03-24 19:56:45 -03:00