Commit Graph

170 Commits

Author | SHA1 | Message | Date
John Smith | cc7b7ba153 | fix lora training with alpaca_lora_4bit (#3853) | 2023-09-11 01:22:20 -03:00
q5sys (JT) | cdb854db9e | Update llama.cpp.md instructions (#3702) | 2023-08-29 17:56:50 -03:00
oobabooga | a1a9ec895d | Unify the 3 interface modes (#3554) | 2023-08-13 01:12:15 -03:00
oobabooga | c7f52bbdc1 | Revert "Remove GPTQ-for-LLaMa monkey patch support" | 2023-08-10 08:39:41 -07:00
    This reverts commit e3d3565b2a.
oobabooga | 16e2b117b4 | Minor doc change | 2023-08-10 08:38:10 -07:00
jllllll | d6765bebc4 | Update installation documentation | 2023-08-10 00:53:48 -05:00
jllllll | e3d3565b2a | Remove GPTQ-for-LLaMa monkey patch support | 2023-08-09 23:59:04 -05:00
    AutoGPTQ will be the preferred GPTQ LoRa loader in the future.
oobabooga | 9773534181 | Update Chat-mode.md | 2023-08-01 08:03:22 -07:00
oobabooga | b2207f123b | Update docs | 2023-07-31 19:20:48 -07:00
oobabooga | ca4188aabc | Update the example extension | 2023-07-29 18:57:22 -07:00
oobabooga | f24f87cfb0 | Change a comment | 2023-07-26 09:38:13 -07:00
oobabooga | b553c33dd0 | Add a link to the gradio docs | 2023-07-26 07:49:22 -07:00
oobabooga | 517d40cffe | Update Extensions.md | 2023-07-26 07:01:35 -07:00
oobabooga | b11f63cb18 | update extensions docs | 2023-07-26 07:00:33 -07:00
oobabooga | 4a24849715 | Revert changes | 2023-07-25 21:09:32 -07:00
oobabooga | d3abe7caa8 | Update llama.cpp.md | 2023-07-25 15:33:16 -07:00
oobabooga | 863d2f118f | Update llama.cpp.md | 2023-07-25 15:31:05 -07:00
oobabooga | 77d2e9f060 | Remove flexgen 2 | 2023-07-25 15:18:25 -07:00
oobabooga | 75c2dd38cf | Remove flexgen support | 2023-07-25 15:15:29 -07:00
Eve | f653546484 | README updates and improvements (#3198) | 2023-07-25 18:58:13 -03:00
oobabooga | ef8637e32d | Add extension example, replace input_hijack with chat_input_modifier (#3307) | 2023-07-25 18:49:56 -03:00
oobabooga | a2918176ea | Update LLaMA-v2-model.md (thanks Panchovix) | 2023-07-18 13:21:18 -07:00
oobabooga | 603c596616 | Add LLaMA-v2 conversion instructions | 2023-07-18 10:29:56 -07:00
oobabooga | 60a3e70242 | Update LLaMA links and info | 2023-07-17 12:51:01 -07:00
oobabooga | 8705eba830 | Remove universal llama tokenizer support | 2023-07-04 19:43:19 -07:00
    Instead replace it with a warning if the tokenizer files look off
oobabooga | 55457549cd | Add information about presets to the UI | 2023-07-03 22:39:01 -07:00
oobabooga | 4b1804a438 | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00
oobabooga | 63770c0643 | Update docs/Extensions.md | 2023-06-27 22:25:05 -03:00
oobabooga | ebfcfa41f2 | Update ExLlama.md | 2023-06-24 20:25:34 -03:00
oobabooga | a70a2ac3be | Update ExLlama.md | 2023-06-24 20:23:01 -03:00
oobabooga | 0d9d70ec7e | Update docs | 2023-06-19 12:52:23 -03:00
oobabooga | f6a602861e | Update docs | 2023-06-19 12:51:30 -03:00
oobabooga | 5d4b4d15a5 | Update Using-LoRAs.md | 2023-06-19 12:43:57 -03:00
oobabooga | 05a743d6ad | Make llama.cpp use tfs parameter | 2023-06-17 19:08:25 -03:00
Jonathan Yankovich | a1ca1c04a1 | Update ExLlama.md (#2729) | 2023-06-16 23:46:25 -03:00
    Add details for configuring exllama
oobabooga | cb9be5db1c | Update ExLlama.md | 2023-06-16 20:40:12 -03:00
oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00
oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00
Meng-Yuan Huang | 772d4080b2 | Update llama.cpp-models.md for macOS (#2711) | 2023-06-16 00:00:24 -03:00
Amine Djeghri | 8275dbc68c | Update WSL-installation-guide.md (#2626) | 2023-06-11 12:30:34 -03:00
oobabooga | c6552785af | Minor cleanup | 2023-06-09 00:30:22 -03:00
zaypen | 084b006cfe | Update LLaMA-model.md (#2460) | 2023-06-07 15:34:50 -03:00
    Better approach of converting LLaMA model
oobabooga | 00b94847da | Remove softprompt support | 2023-06-06 07:42:23 -03:00
oobabooga | 99d701994a | Update GPTQ-models-(4-bit-mode).md | 2023-06-05 15:55:00 -03:00
oobabooga | d0aca83b53 | Add AutoGPTQ wheels to requirements.txt | 2023-06-02 00:47:11 -03:00
oobabooga | aa83fc21d4 | Update Low-VRAM-guide.md | 2023-06-01 12:14:27 -03:00
oobabooga | 756e3afbcc | Update llama.cpp-models.md | 2023-06-01 12:04:31 -03:00
oobabooga | 74bf2f05b1 | Update llama.cpp-models.md | 2023-06-01 11:58:33 -03:00
oobabooga | 90dc8a91ae | Update llama.cpp-models.md | 2023-06-01 11:57:57 -03:00
oobabooga | c9ac45d4cf | Update Using-LoRAs.md | 2023-06-01 11:34:04 -03:00
oobabooga | 9aad6d07de | Update Using-LoRAs.md | 2023-06-01 11:32:41 -03:00
oobabooga | e52b43c934 | Update GPTQ-models-(4-bit-mode).md | 2023-06-01 01:17:13 -03:00
oobabooga | 419c34eca4 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:49:00 -03:00
oobabooga | a160230893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-31 23:38:15 -03:00
AlpinDale | 6627f7feb9 | Add notice about downgrading gcc and g++ (#2446) | 2023-05-30 22:28:53 -03:00
oobabooga | e763ace593 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:35:49 -03:00
oobabooga | 86ef695d37 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 22:20:55 -03:00
oobabooga | 540a161a08 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:45:40 -03:00
oobabooga | 166a0d9893 | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 15:07:59 -03:00
oobabooga | 4a190a98fd | Update GPTQ-models-(4-bit-mode).md | 2023-05-29 14:56:05 -03:00
oobabooga | 1490c0af68 | Remove RWKV from requirements.txt | 2023-05-23 20:49:20 -03:00
Atinoda | 4155aaa96a | Add mention to alternative docker repository (#2145) | 2023-05-23 20:35:53 -03:00
oobabooga | c2d2ef7c13 | Update Generation-parameters.md | 2023-05-23 02:11:28 -03:00
oobabooga | b0845ae4e8 | Update RWKV-model.md | 2023-05-23 02:10:08 -03:00
oobabooga | cd3618d7fb | Add support for RWKV in Hugging Face format | 2023-05-23 02:07:28 -03:00
oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00
oobabooga | 1e5821bd9e | Fix silero tts autoplay (attempt #2) | 2023-05-21 13:25:11 -03:00
oobabooga | 159eccac7e | Update Audio-Notification.md | 2023-05-19 23:20:42 -03:00
HappyWorldGames | a3e9769e31 | Added an audible notification after text generation in web. (#1277) | 2023-05-19 23:16:06 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Alex "mcmonkey" Goodwin | 50c70e28f0 | Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108) | 2023-05-19 12:58:54 -03:00
oobabooga | 10cf7831f7 | Update Extensions.md | 2023-05-17 10:45:29 -03:00
Alex "mcmonkey" Goodwin | 1f50dbe352 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) | 2023-05-17 10:41:09 -03:00
oobabooga | c9c6aa2b6e | Update docs/Extensions.md | 2023-05-17 02:04:37 -03:00
oobabooga | 9e558cba9b | Update docs/Extensions.md | 2023-05-17 01:43:32 -03:00
oobabooga | 687f21f965 | Update docs/Extensions.md | 2023-05-17 01:41:01 -03:00
oobabooga | ce21804ec7 | Allow extensions to define a new tab | 2023-05-17 01:31:56 -03:00
oobabooga | a84f499718 | Allow extensions to define custom CSS and JS | 2023-05-17 00:30:54 -03:00
oobabooga | 7584d46c29 | Refactor models.py (#2113) | 2023-05-16 19:52:22 -03:00
oobabooga | cd9be4c2ba | Update llama.cpp-models.md | 2023-05-16 00:49:32 -03:00
AlphaAtlas | 071f0776ad | Add llama.cpp GPU offload option (#2060) | 2023-05-14 22:58:11 -03:00
oobabooga | 400f3648f4 | Update docs/README.md | 2023-05-11 10:10:24 -03:00
oobabooga | ac9a86a16c | Update llama.cpp-models.md | 2023-05-11 09:47:36 -03:00
oobabooga | f5592781e5 | Update README.md | 2023-05-10 12:19:56 -03:00
Wojtab | e9e75a9ec7 | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) | 2023-05-09 20:18:02 -03:00
shadownetdev1 | 32ad47c898 | added note about build essentials to WSL docs (#1859) | 2023-05-08 22:32:41 -03:00
Arseni Lapunov | 8818967d37 | Fix typo in docs/Training-LoRAs.md (#1921) | 2023-05-08 15:12:39 -03:00
oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00
oobabooga | 63898c09ac | Document superbooga | 2023-05-08 00:11:31 -03:00
oobabooga | ee3c8a893e | Update Extensions.md | 2023-05-05 19:04:50 -03:00
oobabooga | 8aafb1f796 | Refactor text_generation.py, add support for custom generation functions (#1817) | 2023-05-05 18:53:03 -03:00
oobabooga | 0e6d17304a | Clearer syntax for instruction-following characters | 2023-05-03 22:50:39 -03:00
oobabooga | d6410a1b36 | Bump recommended monkey patch commit | 2023-05-03 14:49:25 -03:00
oobabooga | ecd79caa68 | Update Extensions.md | 2023-05-02 22:52:32 -03:00
oobabooga | e6a78c00f2 | Update Docker.md | 2023-05-02 00:51:10 -03:00
Lawrence M Stewart | 78bd4d3a5c | Update LLaMA-model.md (#1700) | 2023-05-02 00:44:09 -03:00
    protobuf needs to be 3.20.x or lower
Lőrinc Pap | ee68ec9079 | Update folder produced by download-model (#1601) | 2023-04-27 12:03:02 -03:00
USBhost | 95aa43b9c2 | Update LLaMA download docs | 2023-04-25 21:28:15 -03:00
oobabooga | 9b272bc8e5 | Monkey patch fixes | 2023-04-25 21:20:26 -03:00
Wojtab | 12212cf6be | LLaVA support (#1487) | 2023-04-23 20:32:22 -03:00
oobabooga | 9197d3fec8 | Update Extensions.md | 2023-04-23 16:11:17 -03:00