| Author | Commit | Message | Date |
|---|---|---|---|
| oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00 |
| oobabooga | 1e5821bd9e | Fix silero tts autoplay (attempt #2) | 2023-05-21 13:25:11 -03:00 |
| oobabooga | 159eccac7e | Update Audio-Notification.md | 2023-05-19 23:20:42 -03:00 |
| HappyWorldGames | a3e9769e31 | Added an audible notification after text generation in web. (#1277) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2023-05-19 23:16:06 -03:00 |
| Alex "mcmonkey" Goodwin | 50c70e28f0 | Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108) | 2023-05-19 12:58:54 -03:00 |
| oobabooga | 10cf7831f7 | Update Extensions.md | 2023-05-17 10:45:29 -03:00 |
| Alex "mcmonkey" Goodwin | 1f50dbe352 | Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) | 2023-05-17 10:41:09 -03:00 |
| oobabooga | c9c6aa2b6e | Update docs/Extensions.md | 2023-05-17 02:04:37 -03:00 |
| oobabooga | 9e558cba9b | Update docs/Extensions.md | 2023-05-17 01:43:32 -03:00 |
| oobabooga | 687f21f965 | Update docs/Extensions.md | 2023-05-17 01:41:01 -03:00 |
| oobabooga | ce21804ec7 | Allow extensions to define a new tab | 2023-05-17 01:31:56 -03:00 |
| oobabooga | a84f499718 | Allow extensions to define custom CSS and JS | 2023-05-17 00:30:54 -03:00 |
| oobabooga | 7584d46c29 | Refactor models.py (#2113) | 2023-05-16 19:52:22 -03:00 |
| oobabooga | cd9be4c2ba | Update llama.cpp-models.md | 2023-05-16 00:49:32 -03:00 |
| AlphaAtlas | 071f0776ad | Add llama.cpp GPU offload option (#2060) | 2023-05-14 22:58:11 -03:00 |
| oobabooga | 400f3648f4 | Update docs/README.md | 2023-05-11 10:10:24 -03:00 |
| oobabooga | ac9a86a16c | Update llama.cpp-models.md | 2023-05-11 09:47:36 -03:00 |
| oobabooga | f5592781e5 | Update README.md | 2023-05-10 12:19:56 -03:00 |
| Wojtab | e9e75a9ec7 | Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) | 2023-05-09 20:18:02 -03:00 |
| shadownetdev1 | 32ad47c898 | added note about build essentials to WSL docs (#1859) | 2023-05-08 22:32:41 -03:00 |
| Arseni Lapunov | 8818967d37 | Fix typo in docs/Training-LoRAs.md (#1921) | 2023-05-08 15:12:39 -03:00 |
| oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00 |
| oobabooga | 63898c09ac | Document superbooga | 2023-05-08 00:11:31 -03:00 |
| oobabooga | ee3c8a893e | Update Extensions.md | 2023-05-05 19:04:50 -03:00 |
| oobabooga | 8aafb1f796 | Refactor text_generation.py, add support for custom generation functions (#1817) | 2023-05-05 18:53:03 -03:00 |
| oobabooga | 0e6d17304a | Clearer syntax for instruction-following characters | 2023-05-03 22:50:39 -03:00 |
| oobabooga | d6410a1b36 | Bump recommended monkey patch commit | 2023-05-03 14:49:25 -03:00 |
| oobabooga | ecd79caa68 | Update Extensions.md | 2023-05-02 22:52:32 -03:00 |
| oobabooga | e6a78c00f2 | Update Docker.md | 2023-05-02 00:51:10 -03:00 |
| Lawrence M Stewart | 78bd4d3a5c | Update LLaMA-model.md (#1700): protobuf needs to be 3.20.x or lower | 2023-05-02 00:44:09 -03:00 |
| Lőrinc Pap | ee68ec9079 | Update folder produced by download-model (#1601) | 2023-04-27 12:03:02 -03:00 |
| USBhost | 95aa43b9c2 | Update LLaMA download docs | 2023-04-25 21:28:15 -03:00 |
| oobabooga | 9b272bc8e5 | Monkey patch fixes | 2023-04-25 21:20:26 -03:00 |
| Wojtab | 12212cf6be | LLaVA support (#1487) | 2023-04-23 20:32:22 -03:00 |
| oobabooga | 9197d3fec8 | Update Extensions.md | 2023-04-23 16:11:17 -03:00 |
| Alex "mcmonkey" Goodwin | 459e725af9 | Lora trainer docs (#1493) | 2023-04-23 12:54:41 -03:00 |
| oobabooga | 47666c4d00 | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 15:12:14 -03:00 |
| oobabooga | fcb594b90e | Don't require llama.cpp models to be placed in subfolders | 2023-04-22 14:56:48 -03:00 |
| oobabooga | 06b6ff6c2e | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 12:49:00 -03:00 |
| oobabooga | 2c6d43e60f | Update GPTQ-models-(4-bit-mode).md | 2023-04-22 12:48:20 -03:00 |
| InconsolableCellist | e03b873460 | Updating Using-LoRAs.md doc to clarify resuming training (#1474) | 2023-04-22 03:35:36 -03:00 |
| oobabooga | ef40b4e862 | Update README.md | 2023-04-22 03:03:39 -03:00 |
| oobabooga | 408e172ad9 | Rename docker/README.md to docs/Docker.md | 2023-04-22 03:03:05 -03:00 |
| oobabooga | 4d9ae44efd | Update Spell-book.md | 2023-04-22 02:53:52 -03:00 |
| oobabooga | 9508f207ba | Update Using-LoRAs.md | 2023-04-22 02:53:01 -03:00 |
| oobabooga | 6d4f131d0a | Update Low-VRAM-guide.md | 2023-04-22 02:50:35 -03:00 |
| oobabooga | f5c36cca40 | Update LLaMA-model.md | 2023-04-22 02:49:54 -03:00 |
| oobabooga | b5e5b9aeae | Delete Home.md | 2023-04-22 02:40:20 -03:00 |
| oobabooga | fe6e9ea986 | Update README.md | 2023-04-22 02:40:08 -03:00 |
| oobabooga | 80ef7c7bcb | Add files via upload | 2023-04-22 02:34:13 -03:00 |
| oobabooga | 25b433990a | Create README.md | 2023-04-22 02:33:32 -03:00 |
| oobabooga | 2fde50a800 | Delete docker.md | 2023-04-08 22:37:54 -03:00 |
| loeken | acc235aced | updated docs for docker, setup video added, removed left over GPTQ_VERSION from docker-compose (#940) | 2023-04-08 22:35:15 -03:00 |
| oobabooga | e047cd1def | Update README | 2023-04-06 22:50:58 -03:00 |
| loeken | 08b9d1b23a | creating a layer with Docker/docker-compose (#633) | 2023-04-06 22:46:04 -03:00 |