Commit Graph

510 Commits

Each entry below lists the author, the abbreviated SHA-1, the commit message, and the commit date.
oobabooga
f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch 2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
oobabooga
15940e762e Fix missing initial space for LlamaTokenizer 2023-04-25 22:47:23 -03:00
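On the "missing initial space" fix above: SentencePiece-based tokenizers such as `LlamaTokenizer` mark a leading space with the `▁` piece, and decoding a slice of tokens that begins with such a piece can drop that space, which matters when streaming output token by token. A hedged illustration, not the repo's actual fix; the model path is a placeholder and the exact behaviour varies by tokenizer version:

```python
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama")  # placeholder path

ids = tokenizer(" world", add_special_tokens=False).input_ids
decoded = tokenizer.decode(ids)
# Depending on the tokenizer version, `decoded` may come back as "world"
# without the leading space, so streaming code has to re-add it itself.
print(repr(decoded))
```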
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) (#1535)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
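For context on the Seq2Seq commit above: encoder-decoder models such as FLAN-T5 are loaded through `AutoModelForSeq2SeqLM` rather than the causal-LM classes used for most other models. A minimal sketch outside the web UI, using the public `google/flan-t5-small` checkpoint (the model choice is illustrative, not the repo's default):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Encoder-decoder (Seq2Seq) models use AutoModelForSeq2SeqLM instead of AutoModelForCausalLM.
model_name = "google/flan-t5-small"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("Translate English to German: Hello, how are you?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```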
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 (#1546)
* full dynamic model type support on modern peft

* remove shuffle option
2023-04-25 21:27:30 -03:00
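On the LoRA trainer commit above ("full dynamic model type support on modern peft"): with current PEFT, a LoRA adapter can be attached to an arbitrary causal LM through `LoraConfig` and `get_peft_model`. An illustrative sketch, not the repo's trainer code; the base model and hyperparameter values are placeholders:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Illustrative LoRA setup; the web UI's trainer builds a similar config
# from its rank/alpha/dropout fields (values here are placeholders).
base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()
```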
oobabooga
9b272bc8e5 Monkey patch fixes 2023-04-25 21:20:26 -03:00
oobabooga
da812600f4 Apply settings regardless of setup() function 2023-04-25 01:16:23 -03:00
da3dsoul
ebca3f86d5
Apply the settings for extensions after import, but before setup() (#1484) 2023-04-25 00:23:11 -03:00
oobabooga
b0ce750d4e Add spaces 2023-04-25 00:10:21 -03:00
oobabooga
1a0c12c6f2
Refactor text-generation.py a bit 2023-04-24 19:24:12 -03:00
oobabooga
2f4f124132 Remove obsolete function 2023-04-24 13:27:24 -03:00
oobabooga
b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
oobabooga
0c32ae27cc Only load the default history if it's empty 2023-04-24 11:50:51 -03:00
eiery
78d1977ebf
add n_batch support for llama.cpp (#1115) 2023-04-24 03:46:18 -03:00
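On the `n_batch` commit above: in `llama-cpp-python`, `n_batch` sets how many prompt tokens are evaluated per batch. A minimal sketch outside the web UI, assuming a local GGML model file (the path is a placeholder; the web UI passes this value through its own loader):

```python
from llama_cpp import Llama

# n_batch controls how many prompt tokens are processed at once during evaluation;
# larger values speed up prompt ingestion at the cost of memory.
llm = Llama(
    model_path="models/llama-7b/ggml-model-q4_0.bin",  # placeholder path
    n_ctx=2048,
    n_batch=512,
)
result = llm("Q: Name the planets in the solar system. A:", max_tokens=48)
print(result["choices"][0]["text"])
```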
oobabooga
b1ee674d75 Make interface state (mostly) persistent on page reload 2023-04-24 03:05:47 -03:00
oobabooga
435f8cc0e7
Simplify some chat functions 2023-04-24 00:47:40 -03:00
Wojtab
12212cf6be
LLaVA support (#1487) 2023-04-23 20:32:22 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints (#990)
Previous title: Add api_streaming extension and update api-example-stream to use it

* Merge with latest main

* Add parameter capturing encoder_repetition_penalty

* Change some defaults, minor fixes

* Add --api, --public-api flags

* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.

* Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'

* Update the API examples

* Change a comment

* Update README

* Remove the gradio API

* Remove unused import

* Minor change

* Remove unused import

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
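For the API commit above: the blocking endpoint started with `--api` can be called over plain HTTP. A hedged sketch based on the repo's api-example script; the port, path, and payload fields shown here are assumptions and may differ between versions:

```python
import requests

# Assumed defaults for the blocking API started with --api; adjust host, port,
# and payload fields to match the api-example script in your version.
URL = "http://localhost:5000/api/v1/generate"

payload = {
    "prompt": "Write a haiku about autumn.",
    "max_new_tokens": 60,
    "temperature": 0.7,
}
response = requests.post(URL, json=payload)
print(response.json())
```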
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs (#1493) 2023-04-23 12:54:41 -03:00
oobabooga
c0b5c09860 Minor change 2023-04-22 15:15:31 -03:00
oobabooga
fcb594b90e Don't require llama.cpp models to be placed in subfolders 2023-04-22 14:56:48 -03:00
oobabooga
7438f4f6ba Change GPTQ triton default settings 2023-04-22 12:27:30 -03:00
USBhost
e1aa9d5173
Support upstream GPTQ once again. (#1451) 2023-04-21 12:43:56 -03:00
oobabooga
eddd016449 Minor deletion 2023-04-21 12:41:27 -03:00
oobabooga
d46b9b7c50 Fix evaluate comment saving 2023-04-21 12:34:08 -03:00
oobabooga
5e023ae64d Change dropdown menu highlight color 2023-04-21 02:47:18 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models (#1322) 2023-04-21 00:20:33 -03:00
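On the "Evaluate" tab commit above: perplexity is the exponential of the average per-token negative log-likelihood of the evaluation text under the model. A simplified sliding-window sketch, not the web UI's exact implementation (window overlap and label masking are simplified):

```python
import math
import torch

def perplexity(model, tokenizer, text, window=1024):
    # Score the text in non-overlapping windows, average the token-level
    # negative log-likelihood, then exponentiate.
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    total_nll, total_tokens = 0.0, 0
    for start in range(0, input_ids.size(1), window):
        chunk = input_ids[:, start:start + window]
        with torch.no_grad():
            loss = model(chunk, labels=chunk).loss  # mean NLL over the chunk
        total_nll += loss.item() * chunk.size(1)
        total_tokens += chunk.size(1)
    return math.exp(total_nll / total_tokens)
```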
oobabooga
7bb9036ac9 Add universal LLaMA tokenizer support 2023-04-19 21:23:51 -03:00
Alex "mcmonkey" Goodwin
ee30625cd1
4-Bit LoRA training + several new training options and fixes 2023-04-19 19:39:03 -03:00
oobabooga
702fe92d42 Increase truncation_length_max value 2023-04-19 17:35:38 -03:00
oobabooga
9d9ae62938 Fix stopping strings in the gradio API 2023-04-19 13:52:21 -03:00
oobabooga
649e4017a5 Style improvements 2023-04-19 00:36:28 -03:00
oobabooga
000f65a2ef
Delete unused file 2023-04-18 04:01:14 -03:00
oobabooga
36f7c022f2
Rename a file 2023-04-18 01:38:33 -03:00
oobabooga
b069bb1f2e
Update monkey_patch_gradio.py 2023-04-18 01:32:42 -03:00
oobabooga
00186f76f4
Monkey patch gradio to prevent it from calling home 2023-04-18 01:13:16 -03:00
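Regarding the Gradio telemetry commits above: the repo's `monkey_patch_gradio.py` replaces Gradio's internal network calls. A simpler, documented alternative (shown here instead of the actual patch) is Gradio's analytics environment variable, set before Gradio is first imported:

```python
import os

# Set before importing gradio so no analytics request is sent.
os.environ["GRADIO_ANALYTICS_ENABLED"] = "False"

import gradio as gr  # noqa: E402
```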
Tynan Burke
6a810b16b2
typo in training.py (#1329) 2023-04-17 21:40:46 -03:00
oobabooga
ac2973ffc6 Add a warning for --share 2023-04-17 19:34:28 -03:00
oobabooga
c544386824 Reset your name when choosing a character 2023-04-17 13:56:40 -03:00
oobabooga
c3dc348d1c Don't show 'None' in the LoRA list 2023-04-17 13:52:23 -03:00
oobabooga
89bc540557 Update README 2023-04-17 10:55:35 -03:00
catalpaaa
07de7d0426
Load llamacpp before quantized model (#1307) 2023-04-17 10:47:26 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef (#1229) 2023-04-17 01:11:18 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
oobabooga
46a8aa8c09 Readability 2023-04-16 21:26:19 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support (#1256)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
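On the ChatGLM commit above: ChatGLM checkpoints ship their own modeling code, so they are loaded with `trust_remote_code=True` and expose a custom `chat()` helper. A minimal sketch outside the web UI, assuming a CUDA GPU and the public `THUDM/chatglm-6b` checkpoint:

```python
from transformers import AutoModel, AutoTokenizer

# ChatGLM needs trust_remote_code=True because its modeling code lives in the model repo.
name = "THUDM/chatglm-6b"
tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModel.from_pretrained(name, trust_remote_code=True).half().cuda()

response, history = model.chat(tokenizer, "Hello, who are you?", history=[])
print(response)
```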
oobabooga
6a03ad0824 Remove fix_newlines() calls from chat.py 2023-04-16 18:25:44 -03:00
oobabooga
5342f72968 Properly handle blockquote blocks 2023-04-16 18:00:12 -03:00
oobabooga
27f3a78834 Better detect when no model is loaded 2023-04-16 17:35:54 -03:00
oobabooga
c8ad960018 Add defaults to the gradio API 2023-04-16 17:33:28 -03:00