Commit Graph

1727 Commits

Author | SHA1 | Message | Date
oobabooga | 172bc949dd | Update README.md | 2023-04-18 12:50:33 -03:00
oobabooga | 753cd2d303 | Rename Dockerfile to docker/Dockerfile | 2023-04-18 12:48:04 -03:00
loeken | 89e22d4d6a | added windows/docker docs (#1027) | 2023-04-18 12:47:43 -03:00
oobabooga | b0c762ceba | Revert a change | 2023-04-18 04:10:45 -03:00
    I think that this may be needed for some clients
oobabooga | 000f65a2ef | Delete unused file | 2023-04-18 04:01:14 -03:00
oobabooga | c58c1d89bd | Clean method to prevent gradio from phoning home | 2023-04-18 03:56:20 -03:00
oobabooga | 8275989f03 | Add new 1-click installers for Linux and MacOS | 2023-04-18 02:40:36 -03:00
oobabooga | e1b80e6fe6 | Comment the gradio patch | 2023-04-18 01:57:59 -03:00
oobabooga | 36f7c022f2 | Rename a file | 2023-04-18 01:38:33 -03:00
oobabooga | b069bb1f2e | Update monkey_patch_gradio.py | 2023-04-18 01:32:42 -03:00
oobabooga | 00186f76f4 | Monkey patch gradio to prevent it from calling home | 2023-04-18 01:13:16 -03:00
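The technique named in commit 00186f76f4 — monkey patching, i.e. rebinding a library attribute at runtime so later callers get a replacement — can be sketched generically. This is a minimal illustration with a stand-in module; the names are hypothetical and do not reflect gradio's actual internals:

```python
import types

# Stand-in for a third-party module whose function phones home.
# All names here are hypothetical, not gradio's real API.
fake_lib = types.ModuleType("fake_lib")

def _phone_home(event):
    raise RuntimeError("unexpected network call")

fake_lib.report_usage = _phone_home

# The monkey patch: rebind the attribute to a no-op before anything calls it.
def _noop(event):
    return None

fake_lib.report_usage = _noop

# Later callers transparently get the no-op version.
print(fake_lib.report_usage("startup"))  # None
```

Because Python resolves `fake_lib.report_usage` at call time, the patch takes effect for every caller as long as it is applied before the first call — which is why such patches are typically done at import time.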
Tynan Burke | 6a810b16b2 | typo in training.py (#1329) | 2023-04-17 21:40:46 -03:00
oobabooga | ac2973ffc6 | Add a warning for --share | 2023-04-17 19:34:28 -03:00
oobabooga | c544386824 | Reset your name when choosing a character | 2023-04-17 13:56:40 -03:00
oobabooga | 163ea295e7 | Fix bug in API extension | 2023-04-17 13:54:15 -03:00
oobabooga | b1b9519539 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-04-17 13:52:49 -03:00
oobabooga | c3dc348d1c | Don't show 'None' in the LoRA list | 2023-04-17 13:52:23 -03:00
oobabooga | 301c687c64 | Update README.md | 2023-04-17 11:25:26 -03:00
oobabooga | 19e3a59997 | Remove unused extension | 2023-04-17 11:06:08 -03:00
oobabooga | 89bc540557 | Update README | 2023-04-17 10:55:35 -03:00
catalpaaa | 07de7d0426 | Load llamacpp before quantized model (#1307) | 2023-04-17 10:47:26 -03:00
practicaldreamer | 3961f49524 | Add note about --no-fused_mlp ignoring --gpu-memory (#1301) | 2023-04-17 10:46:37 -03:00
sgsdxzy | b57ffc2ec9 | Update to support GPTQ triton commit c90adef (#1229) | 2023-04-17 01:11:18 -03:00
oobabooga | 209fcd21d5 | Reorganize Parameters tab | 2023-04-17 00:33:22 -03:00
oobabooga | 3e5cdd005f | Update README.md | 2023-04-16 23:28:59 -03:00
oobabooga | 39099663a0 | Add 4-bit LoRA support (#1200) | 2023-04-16 23:26:52 -03:00
oobabooga | ec3e869c27 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-04-16 21:26:42 -03:00
oobabooga | 46a8aa8c09 | Readability | 2023-04-16 21:26:19 -03:00
GuizzyQC | 5011f94659 | Improved compatibility between silero and sd_api_pictures (#1196) | 2023-04-16 21:18:52 -03:00
svupper | 61d6f7f507 | Add dependencies to Dockerfile for TTS extensions (#1276) | 2023-04-16 21:17:00 -03:00
dependabot[bot] | 4cd2a9d824 | Bump transformers from 4.28.0 to 4.28.1 (#1288) | 2023-04-16 21:12:57 -03:00
oobabooga | 705121161b | Update README.md | 2023-04-16 20:03:03 -03:00
oobabooga | 50c55a51fc | Update README.md | 2023-04-16 19:22:31 -03:00
Forkoz | c6fe1ced01 | Add ChatGLM support (#1256) | 2023-04-16 19:15:03 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 6a03ad0824 | Remove fix_newlines() calls from chat.py | 2023-04-16 18:25:44 -03:00
oobabooga | 5342f72968 | Properly handle blockquote blocks | 2023-04-16 18:00:12 -03:00
oobabooga | 27f3a78834 | Better detect when no model is loaded | 2023-04-16 17:35:54 -03:00
oobabooga | c8ad960018 | Add defaults to the gradio API | 2023-04-16 17:33:28 -03:00
oobabooga | c96529a1b3 | Update README.md | 2023-04-16 17:00:03 -03:00
oobabooga | 6675f51ffe | Change a color | 2023-04-16 16:48:20 -03:00
oobabooga | beb95f5fe2 | Add a style for the "chat" mode | 2023-04-16 16:44:50 -03:00
oobabooga | cb95a2432c | Add Koala support | 2023-04-16 14:41:06 -03:00
oobabooga | b937c9d8c2 | Add skip_special_tokens checkbox for Dolly model (#1218) | 2023-04-16 14:24:49 -03:00
oobabooga | a9c7ef4159 | Exclude yaml files from model list | 2023-04-16 12:47:30 -03:00
oobabooga | 4e035cc3fb | Fix api-example-stream | 2023-04-16 12:12:31 -03:00
oobabooga | b705b4210c | Minor changes to training.py | 2023-04-16 03:08:37 -03:00
oobabooga | 5c513a5f5c | Make training.py more readable | 2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin | a3eec62b50 | Lora trainer improvements part 3 (#1098) | 2023-04-16 02:35:13 -03:00
    * add support for other model types (dependent on future-peft-changes, but with a fallback to the current function for now)
    * use encoding=utf8 for training format
    * make shuffling optional, and describe dropout a bit more
    * add eval_steps to control evaluation
    * make callbacks not depend on globals
    * make save steps controllable
    * placeholder of initial loading-existing-model support, and var name cleanup
    * save/load parameters
    * last bit of cleanup
    * remove `gptq_bits` ref as main branch removed that setting
    * add higher_rank_limit option (2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model; note that it's in the do_train input just to save it as a parameter)
    * fix math on save_steps
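The "save/load parameters" step listed in a3eec62b50 — persisting a training run's hyperparameters so a later session can restore them — can be sketched as a small JSON round-trip. This is a hypothetical illustration (field names and file name invented here), not the trainer's actual implementation:

```python
import json
import os
import tempfile

def save_train_params(path, params):
    # Write the run's hyperparameters as JSON next to the LoRA output,
    # so a later session can reload the same settings.
    with open(path, "w", encoding="utf8") as f:
        json.dump(params, f, indent=2)

def load_train_params(path):
    with open(path, encoding="utf8") as f:
        return json.load(f)

# Illustrative settings echoing options from the commit (eval_steps,
# controllable save steps, optional shuffling, higher rank limit).
params = {"lora_rank": 32, "eval_steps": 100, "save_steps": 500, "shuffle": True}
path = os.path.join(tempfile.gettempdir(), "training_parameters.json")
save_train_params(path, params)
assert load_train_params(path) == params
```

Storing the settings alongside the adapter is what makes the commit's "loading-existing-model support" placeholder possible: resuming a run means reading this file back instead of re-entering every value.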
kernyan | ac19d5101f | revert incorrect eos_token_id change from #814 (#1261) | 2023-04-16 01:47:01 -03:00
    fixes #1054
oobabooga | a2127239de | Fix a bug | 2023-04-16 01:41:37 -03:00