Commit Graph

2178 Commits

Author SHA1 Message Date
oobabooga
5f392122fd Add gpu_split param to ExLlama 2023-06-16 20:49:36 -03:00
    Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
oobabooga
cb9be5db1c Update ExLlama.md 2023-06-16 20:40:12 -03:00
oobabooga
83be8eacf0 Minor fix 2023-06-16 20:38:32 -03:00
oobabooga
9f40032d32 Add ExLlama support (#2444) 2023-06-16 20:35:38 -03:00
oobabooga
dea43685b0 Add some clarifications 2023-06-16 19:10:53 -03:00
oobabooga
7ef6a50e84 Reorganize model loading UI completely (#2720) 2023-06-16 19:00:37 -03:00
oobabooga
57be2eecdf Update README.md 2023-06-16 15:04:16 -03:00
Meng-Yuan Huang
772d4080b2 Update llama.cpp-models.md for macOS (#2711) 2023-06-16 00:00:24 -03:00
Tom Jobbins
646b0c889f AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) 2023-06-15 23:59:54 -03:00
dependabot[bot]
909d8c6ae3 Bump transformers from 4.30.0 to 4.30.2 (#2695) 2023-06-14 19:56:28 -03:00
oobabooga
2b9a6b9259 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-06-14 18:45:24 -03:00
oobabooga
4d508cbe58 Add some checks to AutoGPTQ loader 2023-06-14 18:44:43 -03:00
FartyPants
56c19e623c Add LORA name instead of "default" in PeftModel (#2689) 2023-06-14 18:29:42 -03:00
oobabooga
134430bbe2 Minor change 2023-06-14 11:34:42 -03:00
oobabooga
474dc7355a Allow API requests to use parameter presets 2023-06-14 11:32:20 -03:00
oobabooga
8936160e54 Add WSL installer to README (thanks jllllll) 2023-06-13 00:07:34 -03:00
FartyPants
9f150aedc3 A small UI change in Models menu (#2640) 2023-06-12 01:24:44 -03:00
oobabooga
da5d9a28d8 Fix tabbed extensions showing up at the bottom of the UI 2023-06-11 21:20:51 -03:00
oobabooga
ae5e2b3470 Reorganize a bit 2023-06-11 19:50:20 -03:00
oobabooga
e471919e6d Make llava/minigpt-4 work with AutoGPTQ 2023-06-11 17:56:01 -03:00
oobabooga
f4defde752 Add a menu for installing extensions 2023-06-11 17:11:06 -03:00
oobabooga
8e73806b20 Improve "Interface mode" appearance 2023-06-11 15:29:45 -03:00
oobabooga
a06c953692 Minor style change 2023-06-11 15:13:26 -03:00
oobabooga
ac122832f7 Make dropdown menus more similar to automatic1111 2023-06-11 14:20:16 -03:00
Amine Djeghri
8275dbc68c Update WSL-installation-guide.md (#2626) 2023-06-11 12:30:34 -03:00
oobabooga
6133675e0f Add menus for saving presets/characters/instruction templates/prompts (#2621) 2023-06-11 12:19:18 -03:00
oobabooga
ea0eabd266 Bump llama-cpp-python version 2023-06-10 21:59:29 -03:00
oobabooga
ec2b5bae39 Merge pull request #2616 from oobabooga/dev 2023-06-10 21:55:59 -03:00
    Merge dev branch
brandonj60
b04e18d10c Add Mirostat v2 sampling to transformer models (#2571) 2023-06-09 21:26:31 -03:00
oobabooga
aff3e04df4 Remove irrelevant docs 2023-06-09 21:15:37 -03:00
    Compiling from source, in my tests, makes no difference in the resulting tokens/s.
oobabooga
d7db25dac9 Fix a permission 2023-06-09 01:44:17 -03:00
oobabooga
d033c85cf9 Fix a permission 2023-06-09 01:43:22 -03:00
oobabooga
741afd74f6 Update requirements-minimal.txt 2023-06-09 00:48:41 -03:00
oobabooga
c333e4c906 Add docs for performance optimizations 2023-06-09 00:47:48 -03:00
oobabooga
aaf240a14c Merge pull request #2587 from oobabooga/dev 2023-06-09 00:30:59 -03:00
oobabooga
c6552785af Minor cleanup 2023-06-09 00:30:22 -03:00
oobabooga
92b45cb3f5 Merge branch 'main' into dev 2023-06-09 00:27:11 -03:00
oobabooga
8a7a8343be Detect TheBloke_WizardLM-30B-GPTQ 2023-06-09 00:26:34 -03:00
oobabooga
0f8140e99d Bump transformers/accelerate/peft/autogptq 2023-06-09 00:25:13 -03:00
FartyPants
ac40c59ac3 Added Guanaco-QLoRA to Instruct character (#2574) 2023-06-08 12:24:32 -03:00
oobabooga
db2cbe7b5a Detect WizardLM-30B-V1.0 instruction format 2023-06-08 11:43:40 -03:00
oobabooga
e0b43102e6 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-06-08 11:35:23 -03:00
matatonic
7be6fe126b extensions/api: models api for blocking_api (updated) (#2539) 2023-06-08 11:34:36 -03:00
oobabooga
240752617d Increase download timeout to 20s 2023-06-08 11:16:38 -03:00
zaypen
084b006cfe Update LLaMA-model.md (#2460) 2023-06-07 15:34:50 -03:00
    Better approach of converting LLaMA model
dnobs
c05edfcdfc fix: reverse-proxied URI should end with 'chat', not 'generate' (#2556) 2023-06-07 00:08:04 -03:00
oobabooga
878250d609 Merge branch 'main' into dev 2023-06-06 19:43:53 -03:00
oobabooga
f55e85e28a Fix multimodal with model loaded through AutoGPTQ 2023-06-06 19:42:40 -03:00
oobabooga
eb2601a8c3 Reorganize Parameters tab 2023-06-06 14:51:02 -03:00
oobabooga
3cc5ce3c42 Merge pull request #2551 from oobabooga/dev 2023-06-06 14:40:52 -03:00