Commit Graph

2062 Commits

Author SHA1 Message Date
oobabooga
d87ca8f2af LLaVA fixes 2023-04-26 03:47:34 -03:00
oobabooga
9c2e7c0fab Fix path on models.py 2023-04-26 03:29:09 -03:00
oobabooga
a777c058af Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7 Fix bug in models.py 2023-04-26 01:55:40 -03:00
oobabooga
4c491aa142 Add Alpaca prompt with Input field 2023-04-25 23:50:32 -03:00
oobabooga
68ed73dd89 Make API extension print its exceptions 2023-04-25 23:23:47 -03:00
oobabooga
f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch 2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
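The commit above extends the `--lora` flag to accept more than one adapter. A minimal usage sketch, assuming the standard `server.py` entry point; the model and LoRA directory names are placeholders, not names from this log:

```shell
# Hypothetical example: stack two LoRA adapters on one base model.
# "llama-7b-hf", "alpaca-lora-7b", and "other-lora-7b" are placeholder
# directory names under models/ and loras/ respectively.
python server.py --model llama-7b-hf --lora alpaca-lora-7b other-lora-7b
```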
oobabooga
15940e762e Fix missing initial space for LlamaTokenizer 2023-04-25 22:47:23 -03:00
Vincent Brouwers
92cdb4f22b Seq2Seq support (including FLAN-T5) (#1535) 2023-04-25 22:39:04 -03:00
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
USBhost
95aa43b9c2 Update LLaMA download docs 2023-04-25 21:28:15 -03:00
Alex "mcmonkey" Goodwin
312cb7dda6 LoRA trainer improvements part 5 (#1546) 2023-04-25 21:27:30 -03:00
* full dynamic model type support on modern peft
* remove shuffle option
Wojtab
65beb51b0b fix returned dtypes for LLaVA (#1547) 2023-04-25 21:25:34 -03:00
oobabooga
9b272bc8e5 Monkey patch fixes 2023-04-25 21:20:26 -03:00
oobabooga
da812600f4 Apply settings regardless of setup() function 2023-04-25 01:16:23 -03:00
da3dsoul
ebca3f86d5 Apply the settings for extensions after import, but before setup() (#1484) 2023-04-25 00:23:11 -03:00
oobabooga
b0ce750d4e Add spaces 2023-04-25 00:10:21 -03:00
oobabooga
1a0c12c6f2 Refactor text-generation.py a bit 2023-04-24 19:24:12 -03:00
oobabooga
2f4f124132 Remove obsolete function 2023-04-24 13:27:24 -03:00
oobabooga
b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
oobabooga
0c32ae27cc Only load the default history if it's empty 2023-04-24 11:50:51 -03:00
MajdajkD
c86e9a3372 fix websocket batching (#1511) 2023-04-24 03:51:32 -03:00
eiery
78d1977ebf add n_batch support for llama.cpp (#1115) 2023-04-24 03:46:18 -03:00
oobabooga
2f6e2ddeac Bump llama-cpp-python version 2023-04-24 03:42:03 -03:00
oobabooga
caaa556159 Move extensions block definition to the bottom 2023-04-24 03:30:35 -03:00
oobabooga
b1ee674d75 Make interface state (mostly) persistent on page reload 2023-04-24 03:05:47 -03:00
oobabooga
47809e28aa Minor changes 2023-04-24 01:04:48 -03:00
oobabooga
435f8cc0e7 Simplify some chat functions 2023-04-24 00:47:40 -03:00
Wojtab
04b98a8485 Fix Continue for LLaVA (#1507) 2023-04-23 22:58:15 -03:00
Wojtab
12212cf6be LLaVA support (#1487) 2023-04-23 20:32:22 -03:00
oobabooga
9197d3fec8 Update Extensions.md 2023-04-23 16:11:17 -03:00
Andy Salerno
654933c634 New universal API with streaming/blocking endpoints (#990) 2023-04-23 15:52:43 -03:00
Previous title: Add api_streaming extension and update api-example-stream to use it
* Merge with latest main
* Add parameter capturing encoder_repetition_penalty
* Change some defaults, minor fixes
* Add --api, --public-api flags
* Remove an unneeded comment from the blocking API startup; it is already emitted correctly in try_start_cloudflared via the lambda we pass in
* Update the on_start message for blocking_api to say 'non-streaming' rather than 'streaming'
* Update the API examples
* Change a comment
* Update README
* Remove the gradio API
* Remove unused import
* Minor change
* Remove unused import
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
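Commit #990 above replaces the gradio API with dedicated blocking and streaming endpoints. As a hedged sketch of what a request body for the blocking endpoint might contain: the helper name, default values, and the exact parameter set are assumptions based on the commit's bullet points (which mention capturing encoder_repetition_penalty), not details confirmed by this log.

```python
import json

# Hypothetical request body for the blocking /api/v1/generate-style endpoint
# added in #990. Parameter names and defaults are assumptions for illustration.
def build_generate_payload(prompt, max_new_tokens=200):
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "do_sample": True,
        "temperature": 0.7,
        "encoder_repetition_penalty": 1.0,  # parameter captured by this commit
    }

payload = build_generate_payload("Hello, world")
print(json.dumps(payload))
```

In practice this dict would be POSTed as JSON to the server started with `--api` (or `--public-api` for a tunneled endpoint); the host, port, and path are configuration details not recorded in this log.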
Alex "mcmonkey" Goodwin
459e725af9 Lora trainer docs (#1493) 2023-04-23 12:54:41 -03:00
oobabooga
7ff645899e Fix bug in api extension 2023-04-22 17:33:36 -03:00
AICatgirls
b992c9236a Prevent API extension responses from getting cut off with --chat enabled (#1467) 2023-04-22 16:06:43 -03:00
oobabooga
c0b5c09860 Minor change 2023-04-22 15:15:31 -03:00
oobabooga
47666c4d00 Update GPTQ-models-(4-bit-mode).md 2023-04-22 15:12:14 -03:00
oobabooga
fcb594b90e Don't require llama.cpp models to be placed in subfolders 2023-04-22 14:56:48 -03:00
oobabooga
06b6ff6c2e Update GPTQ-models-(4-bit-mode).md 2023-04-22 12:49:00 -03:00
oobabooga
2c6d43e60f Update GPTQ-models-(4-bit-mode).md 2023-04-22 12:48:20 -03:00
oobabooga
7438f4f6ba Change GPTQ triton default settings 2023-04-22 12:27:30 -03:00
InconsolableCellist
e03b873460 Updating Using-LoRAs.md doc to clarify resuming training (#1474) 2023-04-22 03:35:36 -03:00
oobabooga
fe02281477 Update README.md 2023-04-22 03:05:00 -03:00
oobabooga
ef40b4e862 Update README.md 2023-04-22 03:03:39 -03:00
oobabooga
408e172ad9 Rename docker/README.md to docs/Docker.md 2023-04-22 03:03:05 -03:00
oobabooga
4d9ae44efd Update Spell-book.md 2023-04-22 02:53:52 -03:00
oobabooga
9508f207ba Update Using-LoRAs.md 2023-04-22 02:53:01 -03:00
oobabooga
6d4f131d0a Update Low-VRAM-guide.md 2023-04-22 02:50:35 -03:00
oobabooga
f5c36cca40 Update LLaMA-model.md 2023-04-22 02:49:54 -03:00
oobabooga
038fa3eb39 Update README.md 2023-04-22 02:46:07 -03:00