Commit Graph

2147 Commits

Author | SHA1 | Message | Date
da3dsoul | ebca3f86d5 | Apply the settings for extensions after import, but before setup() (#1484) | 2023-04-25 00:23:11 -03:00
oobabooga | b0ce750d4e | Add spaces | 2023-04-25 00:10:21 -03:00
oobabooga | 1a0c12c6f2 | Refactor text-generation.py a bit | 2023-04-24 19:24:12 -03:00
oobabooga | 2f4f124132 | Remove obsolete function | 2023-04-24 13:27:24 -03:00
oobabooga | b6af2e56a2 | Add --character flag, add character to settings.json | 2023-04-24 13:19:42 -03:00
oobabooga | 0c32ae27cc | Only load the default history if it's empty | 2023-04-24 11:50:51 -03:00
MajdajkD | c86e9a3372 | fix websocket batching (#1511) | 2023-04-24 03:51:32 -03:00
eiery | 78d1977ebf | add n_batch support for llama.cpp (#1115) | 2023-04-24 03:46:18 -03:00
oobabooga | 2f6e2ddeac | Bump llama-cpp-python version | 2023-04-24 03:42:03 -03:00
oobabooga | caaa556159 | Move extensions block definition to the bottom | 2023-04-24 03:30:35 -03:00
oobabooga | b1ee674d75 | Make interface state (mostly) persistent on page reload | 2023-04-24 03:05:47 -03:00
oobabooga | 47809e28aa | Minor changes | 2023-04-24 01:04:48 -03:00
oobabooga | 435f8cc0e7 | Simplify some chat functions | 2023-04-24 00:47:40 -03:00
Wojtab | 04b98a8485 | Fix Continue for LLaVA (#1507) | 2023-04-23 22:58:15 -03:00
Wojtab | 12212cf6be | LLaVA support (#1487) | 2023-04-23 20:32:22 -03:00
oobabooga | 9197d3fec8 | Update Extensions.md | 2023-04-23 16:11:17 -03:00
Andy Salerno | 654933c634 | New universal API with streaming/blocking endpoints (#990) | 2023-04-23 15:52:43 -03:00
    Previous title: Add api_streaming extension and update api-example-stream to use it
    * Merge with latest main
    * Add parameter capturing encoder_repetition_penalty
    * Change some defaults, minor fixes
    * Add --api, --public-api flags
    * remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
    * Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'
    * Update the API examples
    * Change a comment
    * Update README
    * Remove the gradio API
    * Remove unused import
    * Minor change
    * Remove unused import
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Alex "mcmonkey" Goodwin | 459e725af9 | Lora trainer docs (#1493) | 2023-04-23 12:54:41 -03:00
oobabooga | 7ff645899e | Fix bug in api extension | 2023-04-22 17:33:36 -03:00
AICatgirls | b992c9236a | Prevent API extension responses from getting cut off with --chat enabled (#1467) | 2023-04-22 16:06:43 -03:00
oobabooga | c0b5c09860 | Minor change | 2023-04-22 15:15:31 -03:00
oobabooga | 47666c4d00 | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 15:12:14 -03:00
oobabooga | fcb594b90e | Don't require llama.cpp models to be placed in subfolders | 2023-04-22 14:56:48 -03:00
oobabooga | 06b6ff6c2e | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 12:49:00 -03:00
oobabooga | 2c6d43e60f | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 12:48:20 -03:00
oobabooga | 7438f4f6ba | Change GPTQ triton default settings | 2023-04-22 12:27:30 -03:00
InconsolableCellist | e03b873460 | Updating Using-LoRAs.md doc to clarify resuming training (#1474) | 2023-04-22 03:35:36 -03:00
oobabooga | fe02281477 | Update README.md | 2023-04-22 03:05:00 -03:00
oobabooga | ef40b4e862 | Update README.md | 2023-04-22 03:03:39 -03:00
oobabooga | 408e172ad9 | Rename docker/README.md to docs/Docker.md | 2023-04-22 03:03:05 -03:00
oobabooga | 4d9ae44efd | Update Spell-book.md | 2023-04-22 02:53:52 -03:00
oobabooga | 9508f207ba | Update Using-LoRAs.md | 2023-04-22 02:53:01 -03:00
oobabooga | 6d4f131d0a | Update Low-VRAM-guide.md | 2023-04-22 02:50:35 -03:00
oobabooga | f5c36cca40 | Update LLaMA-model.md | 2023-04-22 02:49:54 -03:00
oobabooga | 038fa3eb39 | Update README.md | 2023-04-22 02:46:07 -03:00
oobabooga | b5e5b9aeae | Delete Home.md | 2023-04-22 02:40:20 -03:00
oobabooga | fe6e9ea986 | Update README.md | 2023-04-22 02:40:08 -03:00
oobabooga | 80ef7c7bcb | Add files via upload | 2023-04-22 02:34:13 -03:00
oobabooga | 25b433990a | Create README.md | 2023-04-22 02:33:32 -03:00
oobabooga | 505c2c73e8 | Update README.md | 2023-04-22 00:11:27 -03:00
Φφ | 143e88694d | SD_api_pictures: Modefix, +hires options, UI layout change (#1400) | 2023-04-21 17:49:18 -03:00
oobabooga | 2dca8bb25e | Sort imports | 2023-04-21 17:20:59 -03:00
oobabooga | c238ba9532 | Add a 'Count tokens' button | 2023-04-21 17:18:34 -03:00
Lou Bernardi | a6ef2429fa | Add "do not download" and "download from HF" to download-model.py (#1439) | 2023-04-21 12:54:50 -03:00
USBhost | e1aa9d5173 | Support upstream GPTQ once again. (#1451) | 2023-04-21 12:43:56 -03:00
oobabooga | eddd016449 | Minor deletion | 2023-04-21 12:41:27 -03:00
oobabooga | d46b9b7c50 | Fix evaluate comment saving | 2023-04-21 12:34:08 -03:00
oobabooga | 5e023ae64d | Change dropdown menu highlight color | 2023-04-21 02:47:18 -03:00
oobabooga | 2d766d2e19 | Improve notebook mode button sizes | 2023-04-21 02:37:58 -03:00
oobabooga | c4f4f41389 | Add an "Evaluate" tab to calculate the perplexities of models (#1322) | 2023-04-21 00:20:33 -03:00