Commit Graph

381 Commits

Author  SHA1  Message  Date

oobabooga  181b102521  Update README.md  2023-05-10 12:09:47 -03:00
Carl Kenner  814f754451  Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596)  2023-05-09 20:37:31 -03:00
Wojtab  e9e75a9ec7  Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741)  2023-05-09 20:18:02 -03:00
oobabooga  00e333d790  Add MOSS support  2023-05-04 23:20:34 -03:00
oobabooga  b6ff138084  Add --checkpoint argument for GPTQ  2023-05-04 15:17:20 -03:00
Ahmed Said  fbcd32988e  added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)  2023-05-02 18:25:28 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  f39c99fa14  Load more than one LoRA with --lora, fix a bug  2023-04-25 22:58:48 -03:00
oobabooga  b6af2e56a2  Add --character flag, add character to settings.json  2023-04-24 13:19:42 -03:00
eiery  78d1977ebf  add n_batch support for llama.cpp (#1115)  2023-04-24 03:46:18 -03:00
Andy Salerno  654933c634  New universal API with streaming/blocking endpoints (#990)  2023-04-23 15:52:43 -03:00
    Previous title: Add api_streaming extension and update api-example-stream to use it
    * Merge with latest main
    * Add parameter capturing encoder_repetition_penalty
    * Change some defaults, minor fixes
    * Add --api, --public-api flags
    * Remove unneeded/broken comment from the blocking API startup; the comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in
    * Update the on_start message for blocking_api; it should say 'non-streaming', not 'streaming'
    * Update the API examples
    * Change a comment
    * Update README
    * Remove the gradio API
    * Remove unused import
    * Minor change
    * Remove unused import
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  7438f4f6ba  Change GPTQ triton default settings  2023-04-22 12:27:30 -03:00
oobabooga  fe02281477  Update README.md  2023-04-22 03:05:00 -03:00
oobabooga  038fa3eb39  Update README.md  2023-04-22 02:46:07 -03:00
oobabooga  505c2c73e8  Update README.md  2023-04-22 00:11:27 -03:00
oobabooga  f8da9a0424  Update README.md  2023-04-18 20:25:08 -03:00
oobabooga  c3f6e65554  Update README.md  2023-04-18 20:23:31 -03:00
oobabooga  eb15193327  Update README.md  2023-04-18 13:07:08 -03:00
oobabooga  7fbfc489e2  Update README.md  2023-04-18 12:56:37 -03:00
oobabooga  f559f9595b  Update README.md  2023-04-18 12:54:09 -03:00
loeken  89e22d4d6a  added windows/docker docs (#1027)  2023-04-18 12:47:43 -03:00
oobabooga  8275989f03  Add new 1-click installers for Linux and MacOS  2023-04-18 02:40:36 -03:00
oobabooga  301c687c64  Update README.md  2023-04-17 11:25:26 -03:00
oobabooga  89bc540557  Update README  2023-04-17 10:55:35 -03:00
practicaldreamer  3961f49524  Add note about --no-fused_mlp ignoring --gpu-memory (#1301)  2023-04-17 10:46:37 -03:00
sgsdxzy  b57ffc2ec9  Update to support GPTQ triton commit c90adef (#1229)  2023-04-17 01:11:18 -03:00
oobabooga  3e5cdd005f  Update README.md  2023-04-16 23:28:59 -03:00
oobabooga  39099663a0  Add 4-bit LoRA support (#1200)  2023-04-16 23:26:52 -03:00
oobabooga  705121161b  Update README.md  2023-04-16 20:03:03 -03:00
oobabooga  50c55a51fc  Update README.md  2023-04-16 19:22:31 -03:00
Forkoz  c6fe1ced01  Add ChatGLM support (#1256)  2023-04-16 19:15:03 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  c96529a1b3  Update README.md  2023-04-16 17:00:03 -03:00
oobabooga  004f275efe  Update README.md  2023-04-14 23:36:56 -03:00
oobabooga  83964ed354  Update README.md  2023-04-14 23:33:54 -03:00
oobabooga  c41037db68  Update README.md  2023-04-14 23:32:39 -03:00
v0xie  9d66957207  Add --listen-host launch option (#1122)  2023-04-13 21:35:08 -03:00
oobabooga  403be8a27f  Update README.md  2023-04-13 21:23:35 -03:00
Light  97e67d136b  Update README.md  2023-04-13 21:00:58 +08:00
Light  15d5a043f2  Merge remote-tracking branch 'origin/main' into triton  2023-04-13 19:38:51 +08:00
oobabooga  7dfbe54f42  Add --model-menu option  2023-04-12 21:24:26 -03:00
MarlinMr  47daf891fe  Link to developer.nvidia.com (#1104)  2023-04-12 15:56:42 -03:00
Light  f3591ccfa1  Keep minimal change.  2023-04-12 23:26:06 +08:00
oobabooga  461ca7faf5  Mention that pull request reviews are welcome  2023-04-11 23:12:48 -03:00
oobabooga  749c08a4ff  Update README.md  2023-04-11 14:42:10 -03:00
IggoOnCode  09d8119e3c  Add CPU LoRA training (#938)  2023-04-10 17:29:00 -03:00
    (It's very slow)
oobabooga  f035b01823  Update README.md  2023-04-10 16:20:23 -03:00
Jeff Lefebvre  b7ca89ba3f  Mention that build-essential is required (#1013)  2023-04-10 16:19:10 -03:00
MarkovInequality  992663fa20  Added xformers support to Llama (#950)  2023-04-09 23:08:40 -03:00
oobabooga  bce1b7fbb2  Update README.md  2023-04-09 02:19:40 -03:00
oobabooga  f7860ce192  Update README.md  2023-04-09 02:19:17 -03:00
oobabooga  ece8ed2c84  Update README.md  2023-04-09 02:18:42 -03:00
MarlinMr  ec979cd9c4  Use updated docker compose (#877)  2023-04-07 10:48:47 -03:00
MarlinMr  2c0018d946  Cosmetic change of README.md (#878)  2023-04-07 10:47:10 -03:00
oobabooga  848c4edfd5  Update README.md  2023-04-06 22:52:35 -03:00
oobabooga  e047cd1def  Update README  2023-04-06 22:50:58 -03:00
loeken  08b9d1b23a  creating a layer with Docker/docker-compose (#633)  2023-04-06 22:46:04 -03:00
oobabooga  d9e7aba714  Update README.md  2023-04-06 13:42:24 -03:00
oobabooga  eec3665845  Add instructions for updating requirements  2023-04-06 13:24:01 -03:00
oobabooga  4a28f39823  Update README.md  2023-04-06 02:47:27 -03:00
eiery  19b516b11b  fix link to streaming api example (#803)  2023-04-05 14:50:23 -03:00
oobabooga  7617ed5bfd  Add AMD instructions  2023-04-05 14:42:58 -03:00
oobabooga  770ef5744f  Update README  2023-04-05 14:38:11 -03:00
oobabooga  65d8a24a6d  Show profile pictures in the Character tab  2023-04-04 22:28:49 -03:00
oobabooga  b24147c7ca  Document --pre_layer  2023-04-03 17:34:25 -03:00
oobabooga  525f729b8e  Update README.md  2023-04-02 21:12:41 -03:00
oobabooga  53084241b4  Update README.md  2023-04-02 20:50:06 -03:00
oobabooga  b6f817be45  Update README.md  2023-04-01 14:54:10 -03:00
oobabooga  88fa38ac01  Update README.md  2023-04-01 14:49:03 -03:00
oobabooga  4b57bd0d99  Update README.md  2023-04-01 14:38:04 -03:00
oobabooga  b53bec5a1f  Update README.md  2023-04-01 14:37:35 -03:00
oobabooga  9160586c04  Update README.md  2023-04-01 14:31:10 -03:00
oobabooga  7ec11ae000  Update README.md  2023-04-01 14:15:19 -03:00
oobabooga  012f4f83b8  Update README.md  2023-04-01 13:55:15 -03:00
oobabooga  2c52310642  Add --threads flag for llama.cpp  2023-03-31 21:18:05 -03:00
oobabooga  cbfe0b944a  Update README.md  2023-03-31 17:49:11 -03:00
oobabooga  5c4e44b452  llama.cpp documentation  2023-03-31 15:20:39 -03:00
oobabooga  d4a9b5ea97  Remove redundant preset (see the plot in #587)  2023-03-30 17:34:44 -03:00
oobabooga  41b58bc47e  Update README.md  2023-03-29 11:02:29 -03:00
oobabooga  3b4447a4fe  Update README.md  2023-03-29 02:24:11 -03:00
oobabooga  5d0b83c341  Update README.md  2023-03-29 02:22:19 -03:00
oobabooga  c2a863f87d  Mention the updated one-click installer  2023-03-29 02:11:51 -03:00
oobabooga  010b259dde  Update documentation  2023-03-28 17:46:00 -03:00
oobabooga  036163a751  Change description  2023-03-27 23:39:26 -03:00
oobabooga  30585b3e71  Update README  2023-03-27 23:35:01 -03:00
oobabooga  49c10c5570  Add support for the latest GPTQ models with group-size (#530)  2023-03-26 00:11:33 -03:00
    **Warning: old 4-bit weights will not work anymore!**
    See here how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
oobabooga  70f9565f37  Update README.md  2023-03-25 02:35:30 -03:00
oobabooga  04417b658b  Update README.md  2023-03-24 01:40:43 -03:00
oobabooga  143b5b5edf  Mention one-click-bandaid in the README  2023-03-23 23:28:50 -03:00
oobabooga  6872ffd976  Update README.md  2023-03-20 16:53:14 -03:00
oobabooga  dd4374edde  Update README  2023-03-19 20:15:15 -03:00
oobabooga  9378754cc7  Update README  2023-03-19 20:14:50 -03:00
oobabooga  7ddf6147ac  Update README.md  2023-03-19 19:25:52 -03:00
oobabooga  ddb62470e9  --no-cache and --gpu-memory in MiB for fine VRAM control  2023-03-19 19:21:41 -03:00
oobabooga  0cbe2dd7e9  Update README.md  2023-03-18 12:24:54 -03:00
oobabooga  d2a7fac8ea  Use pip instead of conda for pytorch  2023-03-18 11:56:04 -03:00
oobabooga  a0b1a30fd5  Specify torchvision/torchaudio versions  2023-03-18 11:23:56 -03:00
oobabooga  a163807f86  Update README.md  2023-03-18 03:07:27 -03:00
oobabooga  a7acfa4893  Update README.md  2023-03-17 22:57:46 -03:00
oobabooga  dc35861184  Update README.md  2023-03-17 21:05:17 -03:00
oobabooga  f2a5ca7d49  Update README.md  2023-03-17 20:50:27 -03:00
oobabooga  8c8286b0e6  Update README.md  2023-03-17 20:49:40 -03:00
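
Several commits in this history add launch flags rather than README changes. As a hedged illustration only (the entry-point name `server.py`, the model names, and the specific flag values are assumptions drawn from the commit messages above, not verified against any particular revision), invocations combining those flags might look like:

```shell
# Hypothetical launch examples based on flags named in the commit log.

# llama.cpp backend: --threads (commit 2c52310642)
python server.py --model llama-7b.ggml --threads 8

# Fine VRAM control: --gpu-memory in MiB plus --no-cache (commit ddb62470e9)
python server.py --model llama-13b --gpu-memory 7500MiB --no-cache

# More than one LoRA via --lora (f39c99fa14) and a default character (b6af2e56a2)
python server.py --model llama-7b --lora alpaca vicuna --character Example

# Universal API from #990, bound to all interfaces via --listen-host (#1122)
python server.py --api --listen-host 0.0.0.0
```

These are command-line sketches, not a tested configuration; consult the project README at the revision you are running for the authoritative flag list.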