Commit Graph (357 commits)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| jllllll | 412e7a6a96 | Update README.md to include missing flags (#2449) | 2023-05-31 11:07:56 -03:00 |
| Atinoda | bfbd13ae89 | Update docker repo link (#2340) | 2023-05-30 22:14:49 -03:00 |
| oobabooga | 962d05ca7e | Update README.md | 2023-05-29 14:56:55 -03:00 |
| Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367)<br>Co-authored-by: `oobabooga <112222186+oobabooga@users.noreply.github.com>` | 2023-05-29 10:20:18 -03:00 |
| oobabooga | f27135bdd3 | Add Eta Sampling preset<br>Also remove some presets that I do not consider relevant | 2023-05-28 22:44:35 -03:00 |
| oobabooga | 00ebea0b2a | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00 |
| jllllll | 07a4f0569f | Update README.md to account for BnB Windows wheel (#2341) | 2023-05-25 18:44:26 -03:00 |
| oobabooga | 231305d0f5 | Update README.md | 2023-05-25 12:05:08 -03:00 |
| oobabooga | 37d4ad012b | Add a button for rendering markdown for any model | 2023-05-25 11:59:27 -03:00 |
| oobabooga | 9a43656a50 | Add bitsandbytes note | 2023-05-25 11:21:52 -03:00 |
| DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00 |
| oobabooga | a04266161d | Update README.md | 2023-05-25 01:23:46 -03:00 |
| oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00 |
| Gabriel Terrien | 7aed53559a | Support of the --gradio-auth flag (#2283) | 2023-05-23 20:39:26 -03:00 |
| Atinoda | 4155aaa96a | Add mention to alternative docker repository (#2145) | 2023-05-23 20:35:53 -03:00 |
| Carl Kenner | c86231377b | Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159) | 2023-05-19 11:42:41 -03:00 |
| Alex "mcmonkey" Goodwin | 1f50dbe352 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) | 2023-05-17 10:41:09 -03:00 |
| Andrei | e657dd342d | Add in-memory cache support for llama.cpp (#1936) | 2023-05-15 20:19:55 -03:00 |
| AlphaAtlas | 071f0776ad | Add llama.cpp GPU offload option (#2060) | 2023-05-14 22:58:11 -03:00 |
| oobabooga | 23d3f6909a | Update README.md | 2023-05-11 10:21:20 -03:00 |
| oobabooga | 2930e5a895 | Update README.md | 2023-05-11 10:04:38 -03:00 |
| oobabooga | 0ff38c994e | Update README.md | 2023-05-11 09:58:58 -03:00 |
| oobabooga | e6959a5d9a | Update README.md | 2023-05-11 09:54:22 -03:00 |
| oobabooga | dcfd09b61e | Update README.md | 2023-05-11 09:49:57 -03:00 |
| oobabooga | 7a49ceab29 | Update README.md | 2023-05-11 09:42:39 -03:00 |
| oobabooga | 57dc44a995 | Update README.md | 2023-05-10 12:48:25 -03:00 |
| oobabooga | 181b102521 | Update README.md | 2023-05-10 12:09:47 -03:00 |
| Carl Kenner | 814f754451 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00 |
| Wojtab | e9e75a9ec7 | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) | 2023-05-09 20:18:02 -03:00 |
| oobabooga | 00e333d790 | Add MOSS support | 2023-05-04 23:20:34 -03:00 |
| oobabooga | b6ff138084 | Add --checkpoint argument for GPTQ | 2023-05-04 15:17:20 -03:00 |
| Ahmed Said | fbcd32988e | added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)<br>Co-authored-by: `oobabooga <112222186+oobabooga@users.noreply.github.com>` | 2023-05-02 18:25:28 -03:00 |
| oobabooga | f39c99fa14 | Load more than one LoRA with --lora, fix a bug | 2023-04-25 22:58:48 -03:00 |
| oobabooga | b6af2e56a2 | Add --character flag, add character to settings.json | 2023-04-24 13:19:42 -03:00 |
| eiery | 78d1977ebf | add n_batch support for llama.cpp (#1115) | 2023-04-24 03:46:18 -03:00 |
| Andy Salerno | 654933c634 | New universal API with streaming/blocking endpoints (#990)<br>Previous title: Add api_streaming extension and update api-example-stream to use it<br>* Merge with latest main<br>* Add parameter capturing encoder_repetition_penalty<br>* Change some defaults, minor fixes<br>* Add --api, --public-api flags<br>* Remove an unneeded/broken comment from the blocking API startup (the comment is already emitted correctly in try_start_cloudflared via the lambda passed in)<br>* Fix the on_start message for blocking_api: it should say 'non-streaming', not 'streaming'<br>* Update the API examples<br>* Change a comment<br>* Update README<br>* Remove the gradio API<br>* Remove unused imports<br>* Minor change<br>Co-authored-by: `oobabooga <112222186+oobabooga@users.noreply.github.com>` | 2023-04-23 15:52:43 -03:00 |
| oobabooga | 7438f4f6ba | Change GPTQ triton default settings | 2023-04-22 12:27:30 -03:00 |
| oobabooga | fe02281477 | Update README.md | 2023-04-22 03:05:00 -03:00 |
| oobabooga | 038fa3eb39 | Update README.md | 2023-04-22 02:46:07 -03:00 |
| oobabooga | 505c2c73e8 | Update README.md | 2023-04-22 00:11:27 -03:00 |
| oobabooga | f8da9a0424 | Update README.md | 2023-04-18 20:25:08 -03:00 |
| oobabooga | c3f6e65554 | Update README.md | 2023-04-18 20:23:31 -03:00 |
| oobabooga | eb15193327 | Update README.md | 2023-04-18 13:07:08 -03:00 |
| oobabooga | 7fbfc489e2 | Update README.md | 2023-04-18 12:56:37 -03:00 |
| oobabooga | f559f9595b | Update README.md | 2023-04-18 12:54:09 -03:00 |
| loeken | 89e22d4d6a | added windows/docker docs (#1027) | 2023-04-18 12:47:43 -03:00 |
| oobabooga | 8275989f03 | Add new 1-click installers for Linux and MacOS | 2023-04-18 02:40:36 -03:00 |
| oobabooga | 301c687c64 | Update README.md | 2023-04-17 11:25:26 -03:00 |
| oobabooga | 89bc540557 | Update README | 2023-04-17 10:55:35 -03:00 |
| practicaldreamer | 3961f49524 | Add note about --no-fused_mlp ignoring --gpu-memory (#1301) | 2023-04-17 10:46:37 -03:00 |