Author | Commit | Message | Date
Light | 97e67d136b | Update README.md | 2023-04-13 21:00:58 +08:00
Light | 15d5a043f2 | Merge remote-tracking branch 'origin/main' into triton | 2023-04-13 19:38:51 +08:00
oobabooga | 7dfbe54f42 | Add --model-menu option | 2023-04-12 21:24:26 -03:00
MarlinMr | 47daf891fe | Link to developer.nvidia.com (#1104) | 2023-04-12 15:56:42 -03:00
Light | f3591ccfa1 | Keep minimal change. | 2023-04-12 23:26:06 +08:00
oobabooga | 461ca7faf5 | Mention that pull request reviews are welcome | 2023-04-11 23:12:48 -03:00
oobabooga | 749c08a4ff | Update README.md | 2023-04-11 14:42:10 -03:00
IggoOnCode | 09d8119e3c | Add CPU LoRA training (#938) (It's very slow) | 2023-04-10 17:29:00 -03:00
oobabooga | f035b01823 | Update README.md | 2023-04-10 16:20:23 -03:00
Jeff Lefebvre | b7ca89ba3f | Mention that build-essential is required (#1013) | 2023-04-10 16:19:10 -03:00
MarkovInequality | 992663fa20 | Added xformers support to Llama (#950) | 2023-04-09 23:08:40 -03:00
oobabooga | bce1b7fbb2 | Update README.md | 2023-04-09 02:19:40 -03:00
oobabooga | f7860ce192 | Update README.md | 2023-04-09 02:19:17 -03:00
oobabooga | ece8ed2c84 | Update README.md | 2023-04-09 02:18:42 -03:00
MarlinMr | ec979cd9c4 | Use updated docker compose (#877) | 2023-04-07 10:48:47 -03:00
MarlinMr | 2c0018d946 | Cosmetic change of README.md (#878) | 2023-04-07 10:47:10 -03:00
oobabooga | 848c4edfd5 | Update README.md | 2023-04-06 22:52:35 -03:00
oobabooga | e047cd1def | Update README | 2023-04-06 22:50:58 -03:00
loeken | 08b9d1b23a | creating a layer with Docker/docker-compose (#633) | 2023-04-06 22:46:04 -03:00
oobabooga | d9e7aba714 | Update README.md | 2023-04-06 13:42:24 -03:00
oobabooga | eec3665845 | Add instructions for updating requirements | 2023-04-06 13:24:01 -03:00
oobabooga | 4a28f39823 | Update README.md | 2023-04-06 02:47:27 -03:00
eiery | 19b516b11b | fix link to streaming api example (#803) | 2023-04-05 14:50:23 -03:00
oobabooga | 7617ed5bfd | Add AMD instructions | 2023-04-05 14:42:58 -03:00
oobabooga | 770ef5744f | Update README | 2023-04-05 14:38:11 -03:00
oobabooga | 65d8a24a6d | Show profile pictures in the Character tab | 2023-04-04 22:28:49 -03:00
oobabooga | b24147c7ca | Document --pre_layer | 2023-04-03 17:34:25 -03:00
oobabooga | 525f729b8e | Update README.md | 2023-04-02 21:12:41 -03:00
oobabooga | 53084241b4 | Update README.md | 2023-04-02 20:50:06 -03:00
oobabooga | b6f817be45 | Update README.md | 2023-04-01 14:54:10 -03:00
oobabooga | 88fa38ac01 | Update README.md | 2023-04-01 14:49:03 -03:00
oobabooga | 4b57bd0d99 | Update README.md | 2023-04-01 14:38:04 -03:00
oobabooga | b53bec5a1f | Update README.md | 2023-04-01 14:37:35 -03:00
oobabooga | 9160586c04 | Update README.md | 2023-04-01 14:31:10 -03:00
oobabooga | 7ec11ae000 | Update README.md | 2023-04-01 14:15:19 -03:00
oobabooga | 012f4f83b8 | Update README.md | 2023-04-01 13:55:15 -03:00
oobabooga | 2c52310642 | Add --threads flag for llama.cpp | 2023-03-31 21:18:05 -03:00
oobabooga | cbfe0b944a | Update README.md | 2023-03-31 17:49:11 -03:00
oobabooga | 5c4e44b452 | llama.cpp documentation | 2023-03-31 15:20:39 -03:00
oobabooga | d4a9b5ea97 | Remove redundant preset (see the plot in #587) | 2023-03-30 17:34:44 -03:00
oobabooga | 41b58bc47e | Update README.md | 2023-03-29 11:02:29 -03:00
oobabooga | 3b4447a4fe | Update README.md | 2023-03-29 02:24:11 -03:00
oobabooga | 5d0b83c341 | Update README.md | 2023-03-29 02:22:19 -03:00
oobabooga | c2a863f87d | Mention the updated one-click installer | 2023-03-29 02:11:51 -03:00
oobabooga | 010b259dde | Update documentation | 2023-03-28 17:46:00 -03:00
oobabooga | 036163a751 | Change description | 2023-03-27 23:39:26 -03:00
oobabooga | 30585b3e71 | Update README | 2023-03-27 23:35:01 -03:00
oobabooga | 49c10c5570 | Add support for the latest GPTQ models with group-size (#530). **Warning: old 4-bit weights will not work anymore!** See here how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights | 2023-03-26 00:11:33 -03:00
oobabooga | 70f9565f37 | Update README.md | 2023-03-25 02:35:30 -03:00
oobabooga | 04417b658b | Update README.md | 2023-03-24 01:40:43 -03:00