Commit Graph

3422 Commits

Author SHA1 Message Date
oobabooga
1ba0082410
Add files via upload 2023-04-18 02:30:47 -03:00
oobabooga
a5f7d98cf3
Rename environment_windows.bat to cmd_windows.bat 2023-04-18 02:30:23 -03:00
oobabooga
316aaff348
Rename environment_macos.sh to cmd_macos.sh 2023-04-18 02:30:08 -03:00
oobabooga
647f7bca36
Rename environment_linux.sh to cmd_linux.sh 2023-04-18 02:29:55 -03:00
Blake Wyatt
6d2c72b593
Add support for MacOS, Linux, and WSL (#21)
* Initial commit

* Initial commit with new code

* Add comments

* Move GPTQ out of if

* Fix install on Arch Linux

* Fix case where install was aborted

If the install was aborted before a model was downloaded, webui wouldn't run.

* Update start_windows.bat

Add necessary flags to Miniconda installer
Disable Start Menu shortcut creation
Disable ssl on Conda
Change Python version to latest 3.10;
I've noticed that explicitly specifying 3.10.9 can break the included Python installation

* Update bitsandbytes wheel link to 0.38.1

Disable ssl on Conda

* Add check for spaces in path

Installation of Miniconda will fail in this case

* Mirror changes to mac and linux scripts

* Start with model-menu

* Add updaters

* Fix line endings

* Add check for path with spaces

* Fix one-click updating

* Fix one-click updating

* Clean up update scripts

* Add environment scripts

---------

Co-authored-by: jllllll <3887729+jllllll@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-18 02:23:09 -03:00
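One of the changes in the pull request above is a check for spaces in the installation path, since the Miniconda installer fails in that case. A minimal Python sketch of that kind of guard (illustrative only; the actual check lives in start_windows.bat and its Linux/macOS counterparts):

```python
import os
import sys

# Abort early if the install directory contains spaces: the Miniconda
# installer is known to fail in that case (illustrative sketch; the real
# check is implemented in the platform-specific start scripts).
install_dir = os.path.dirname(os.path.abspath(__file__))
if " " in install_dir:
    print("This installer relies on Miniconda, which cannot be installed "
          "under a path containing spaces:\n" + install_dir)
    sys.exit(1)
```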
oobabooga
e1b80e6fe6
Comment the gradio patch 2023-04-18 01:57:59 -03:00
oobabooga
36f7c022f2
Rename a file 2023-04-18 01:38:33 -03:00
oobabooga
b069bb1f2e
Update monkey_patch_gradio.py 2023-04-18 01:32:42 -03:00
oobabooga
00186f76f4
Monkey patch gradio to prevent it from calling home 2023-04-18 01:13:16 -03:00
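The monkey-patch commit above stops gradio from making outbound telemetry/version-check requests. The general pattern is to overwrite the offending library function with a no-op before the UI is built; a rough sketch (the patched attribute here is a hypothetical stand-in, see monkey_patch_gradio.py for what is actually replaced):

```python
import gradio.utils

def _disabled(*args, **kwargs):
    # No-op replacement: do nothing instead of making outbound requests.
    return None

# Hypothetical target attribute; the real monkey_patch_gradio.py patches
# whichever gradio internals actually perform the network calls.
gradio.utils.version_check = _disabled
```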
Tynan Burke
6a810b16b2
typo in training.py (#1329) 2023-04-17 21:40:46 -03:00
oobabooga
ac2973ffc6 Add a warning for --share 2023-04-17 19:34:28 -03:00
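A hedged illustration of what "Add a warning for --share" amounts to: print a notice when the flag that creates a public Gradio link is enabled (flag handling and wording below are assumptions, not the repo's exact code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--share", action="store_true",
                    help="Create a public URL for the web UI.")
args = parser.parse_args()

if args.share:
    # Remind the user that a share link exposes the UI to anyone holding the URL.
    print("Warning: --share makes the web UI reachable over the internet. "
          "Only enable it if you trust the people who will use the link.")
```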
oobabooga
c544386824 Reset your name when choosing a character 2023-04-17 13:56:40 -03:00
oobabooga
163ea295e7 Fix bug in API extension 2023-04-17 13:54:15 -03:00
oobabooga
b1b9519539 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-17 13:52:49 -03:00
oobabooga
c3dc348d1c Don't show 'None' in the LoRA list 2023-04-17 13:52:23 -03:00
oobabooga
301c687c64
Update README.md 2023-04-17 11:25:26 -03:00
oobabooga
19e3a59997 Remove unused extension 2023-04-17 11:06:08 -03:00
oobabooga
89bc540557 Update README 2023-04-17 10:55:35 -03:00
catalpaaa
07de7d0426
Load llamacpp before quantized model (#1307) 2023-04-17 10:47:26 -03:00
practicaldreamer
3961f49524
Add note about --no-fused_mlp ignoring --gpu-memory (#1301) 2023-04-17 10:46:37 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef (#1229) 2023-04-17 01:11:18 -03:00
oobabooga
209fcd21d5 Reorganize Parameters tab 2023-04-17 00:33:22 -03:00
oobabooga
3e5cdd005f
Update README.md 2023-04-16 23:28:59 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
oobabooga
ec3e869c27 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-16 21:26:42 -03:00
oobabooga
46a8aa8c09 Readability 2023-04-16 21:26:19 -03:00
GuizzyQC
5011f94659
Improved compatibility between silero and sd_api_pictures (#1196) 2023-04-16 21:18:52 -03:00
svupper
61d6f7f507
Add dependencies to Dockerfile for TTS extensions (#1276) 2023-04-16 21:17:00 -03:00
dependabot[bot]
4cd2a9d824
Bump transformers from 4.28.0 to 4.28.1 (#1288) 2023-04-16 21:12:57 -03:00
oobabooga
705121161b
Update README.md 2023-04-16 20:03:03 -03:00
oobabooga
50c55a51fc
Update README.md 2023-04-16 19:22:31 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support (#1256)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
6a03ad0824 Remove fix_newlines() calls from chat.py 2023-04-16 18:25:44 -03:00
oobabooga
5342f72968 Properly handle blockquote blocks 2023-04-16 18:00:12 -03:00
oobabooga
27f3a78834 Better detect when no model is loaded 2023-04-16 17:35:54 -03:00
oobabooga
c8ad960018 Add defaults to the gradio API 2023-04-16 17:33:28 -03:00
oobabooga
c96529a1b3
Update README.md 2023-04-16 17:00:03 -03:00
oobabooga
6675f51ffe Change a color 2023-04-16 16:48:20 -03:00
oobabooga
beb95f5fe2 Add a style for the "chat" mode 2023-04-16 16:44:50 -03:00
oobabooga
cb95a2432c Add Koala support 2023-04-16 14:41:06 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model (#1218) 2023-04-16 14:24:49 -03:00
oobabooga
a9c7ef4159 Exclude yaml files from model list 2023-04-16 12:47:30 -03:00
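"Exclude yaml files from model list" is conceptually simple: when enumerating the models directory, skip configuration files so only actual model folders and weights are listed. A small sketch under that assumption (paths and extensions are illustrative, not the repo's exact code):

```python
from pathlib import Path

def list_models(models_dir="models"):
    # List entries in the models directory, skipping .yaml/.yml config files
    # and hidden files so they don't show up as selectable "models".
    names = []
    for item in sorted(Path(models_dir).iterdir()):
        if item.name.startswith("."):
            continue
        if item.suffix.lower() in {".yaml", ".yml"}:
            continue
        names.append(item.name)
    return names
```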
oobabooga
4e035cc3fb Fix api-example-stream 2023-04-16 12:12:31 -03:00
oobabooga
b705b4210c Minor changes to training.py 2023-04-16 03:08:37 -03:00
oobabooga
5c513a5f5c Make training.py more readable 2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin
a3eec62b50
Lora trainer improvements part 3 (#1098)
* add support for other model types

dependent on future-peft-changes but with fallback to function now

* use encoding=utf8 for training format

* make shuffling optional

and describe dropout a bit more

* add eval_steps to control evaluation

* make callbacks not depend on globals

* make save steps controllable

* placeholder of initial loading-existing-model support

and var name cleanup

* save/load parameters

* last bit of cleanup

* remove `gptq_bits` ref as main branch removed that setting

* add higher_rank_limit option

2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model.
Note that it's in the do_train input just so it gets saved as a parameter.

* fix math on save_steps
2023-04-16 02:35:13 -03:00
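Several items in the trainer PR above (eval_steps to control evaluation, controllable save steps, save/load of parameters) map naturally onto Hugging Face Trainer settings. A hedged sketch of exposing them as user-controllable values (the TrainingArguments fields are real transformers options; the helper names and JSON format are assumptions):

```python
import json
from transformers import TrainingArguments

def build_training_args(output_dir, eval_steps=100, save_steps=500):
    # Expose evaluation and checkpoint frequency as user-controllable values
    # instead of hard-coding them in the trainer.
    return TrainingArguments(
        output_dir=output_dir,
        evaluation_strategy="steps" if eval_steps > 0 else "no",
        eval_steps=eval_steps if eval_steps > 0 else None,
        save_steps=save_steps,
    )

def save_run_parameters(path, params):
    # "Save/load parameters": persist the chosen settings next to the LoRA so a
    # run can be reproduced later; utf-8 matches the PR's encoding change.
    with open(path, "w", encoding="utf-8") as f:
        json.dump(params, f, indent=2)
```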
kernyan
ac19d5101f
revert incorrect eos_token_id change from #814 (#1261)
- fixes #1054
2023-04-16 01:47:01 -03:00
oobabooga
a2127239de Fix a bug 2023-04-16 01:41:37 -03:00
oobabooga
9d3c6d2dc3 Fix a bug 2023-04-16 01:40:47 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
* Remove eval of API input

* Remove unnecessary eval/exec for security

* Use ast.literal_eval

* Use ast.literal_eval

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
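The security fix above (GHSA-hv5m-3rp9-xcpf) replaces eval/exec on API input with ast.literal_eval, which only accepts Python literals (strings, numbers, tuples, lists, dicts, booleans, None) and cannot execute arbitrary code. A minimal before/after sketch (the payload shown is an invented example):

```python
import ast

payload = '{"max_new_tokens": 200, "do_sample": True}'

# Unsafe: eval() executes whatever expression an attacker sends.
# params = eval(payload)

# Safe: literal_eval only parses literal structures and raises
# ValueError/SyntaxError on anything else.
params = ast.literal_eval(payload)
print(params["max_new_tokens"])
```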