Commit Graph

2829 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| oobabooga | 0686a2e75f | Improve instruct colors in dark mode | 2023-06-18 01:44:52 -03:00 |
| oobabooga | c5641b65d3 | Handle leading spaces properly in ExLllama | 2023-06-17 19:35:12 -03:00 |
| matatonic | 1e97aaac95 | extensions/openai: docs update, model loader, minor fixes (#2557) | 2023-06-17 19:15:24 -03:00 |
| matatonic | 2220b78e7a | models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k (#2580) | 2023-06-17 19:14:25 -03:00 |
| jllllll | b1d05cbbf6 | Install exllama (#83)<br>* Install exllama<br>* Handle updating exllama | 2023-06-17 19:10:36 -03:00 |
| jllllll | 657049d7d0 | Fix cmd_macos.sh (#82)<br>MacOS version of Bash does not support process substitution | 2023-06-17 19:09:42 -03:00 |
| jllllll | b2483e28d1 | Check for special characters in path on Windows (#81)<br>Display warning message if detected | 2023-06-17 19:09:22 -03:00 |
| oobabooga | 05a743d6ad | Make llama.cpp use tfs parameter | 2023-06-17 19:08:25 -03:00 |
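Commit 05a743d6ad wires llama.cpp's `tfs` (tail-free sampling) parameter through to generation. For background, here is a minimal sketch of the tail-free truncation idea in pure Python. This is an illustration of the published TFS heuristic, not llama.cpp's actual code; the function name and the flat-distribution fallback are choices of this sketch.

```python
def tail_free_filter(probs, z):
    """Tail-free sampling (TFS) truncation, simplified sketch.

    probs: token probabilities (need not be sorted)
    z: cumulative threshold in (0, 1]; smaller z prunes more of the tail
    Returns renormalized probabilities of the kept tokens, highest first.
    """
    p = sorted(probs, reverse=True)
    # First and second differences of the sorted distribution
    d1 = [p[i] - p[i + 1] for i in range(len(p) - 1)]
    d2 = [abs(d1[i] - d1[i + 1]) for i in range(len(d1) - 1)]
    total = sum(d2)
    if total == 0:  # flat or tiny distribution: keep everything
        return p
    weights = [x / total for x in d2]
    # Keep tokens until the cumulative |second derivative| mass exceeds z
    cum, keep = 0.0, 1
    for i, w in enumerate(weights):
        cum += w
        keep = i + 1
        if cum > z:
            break
    else:
        keep = len(p)  # threshold never reached: keep all tokens
    kept = p[:keep]
    s = sum(kept)
    return [x / s for x in kept]
```

The intuition is that the "tail" of a next-token distribution begins where the sorted probability curve stops bending; the normalized second derivative locates that point.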
| Author | SHA1 | Message | Date |
|---|---|---|---|
| oobabooga | e19cbea719 | Add a variable to modules/shared.py | 2023-06-17 19:02:29 -03:00 |
| oobabooga | cbd63eeeff | Fix repeated tokens with exllama | 2023-06-17 19:02:08 -03:00 |
| oobabooga | 766c760cd7 | Use gen_begin_reuse in exllama | 2023-06-17 18:00:10 -03:00 |
| oobabooga | 239b11c94b | Minor bug fixes | 2023-06-17 17:57:56 -03:00 |
| Bhavika Tekwani | d8d29edf54 | Install wheel using pip3 (#2719) | 2023-06-16 23:46:40 -03:00 |
| Jonathan Yankovich | a1ca1c04a1 | Update ExLlama.md (#2729)<br>Add details for configuring exllama | 2023-06-16 23:46:25 -03:00 |
| oobabooga | b27f83c0e9 | Make exllama stoppable | 2023-06-16 22:03:23 -03:00 |
| oobabooga | 7f06d551a3 | Fix streaming callback | 2023-06-16 21:44:56 -03:00 |
| oobabooga | 1e400218e9 | Fix a typo | 2023-06-16 21:01:57 -03:00 |
| oobabooga | 5f392122fd | Add gpu_split param to ExLlama<br>Adapted from code created by Ph0rk0z. Thank you Ph0rk0z. | 2023-06-16 20:49:36 -03:00 |
| oobabooga | cb9be5db1c | Update ExLlama.md | 2023-06-16 20:40:12 -03:00 |
| oobabooga | 83be8eacf0 | Minor fix | 2023-06-16 20:38:32 -03:00 |
| oobabooga | 9f40032d32 | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| oobabooga | dea43685b0 | Add some clarifications | 2023-06-16 19:10:53 -03:00 |
| oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00 |
| oobabooga | 57be2eecdf | Update README.md | 2023-06-16 15:04:16 -03:00 |
| Meng-Yuan Huang | 772d4080b2 | Update llama.cpp-models.md for macOS (#2711) | 2023-06-16 00:00:24 -03:00 |
| Tom Jobbins | 646b0c889f | AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) | 2023-06-15 23:59:54 -03:00 |
| dependabot[bot] | 909d8c6ae3 | Bump transformers from 4.30.0 to 4.30.2 (#2695) | 2023-06-14 19:56:28 -03:00 |
| oobabooga | 2b9a6b9259 | Merge remote-tracking branch 'refs/remotes/origin/main' | 2023-06-14 18:45:24 -03:00 |
| oobabooga | 4d508cbe58 | Add some checks to AutoGPTQ loader | 2023-06-14 18:44:43 -03:00 |
| FartyPants | 56c19e623c | Add LORA name instead of "default" in PeftModel (#2689) | 2023-06-14 18:29:42 -03:00 |
| oobabooga | 134430bbe2 | Minor change | 2023-06-14 11:34:42 -03:00 |
| oobabooga | 474dc7355a | Allow API requests to use parameter presets | 2023-06-14 11:32:20 -03:00 |
| oobabooga | 8936160e54 | Add WSL installer to README (thanks jllllll) | 2023-06-13 00:07:34 -03:00 |
| jllllll | c42f183d3f | Installer for WSL (#78) | 2023-06-13 00:04:15 -03:00 |
| FartyPants | 9f150aedc3 | A small UI change in Models menu (#2640) | 2023-06-12 01:24:44 -03:00 |
| oobabooga | da5d9a28d8 | Fix tabbed extensions showing up at the bottom of the UI | 2023-06-11 21:20:51 -03:00 |
| oobabooga | ae5e2b3470 | Reorganize a bit | 2023-06-11 19:50:20 -03:00 |
| oobabooga | e471919e6d | Make llava/minigpt-4 work with AutoGPTQ | 2023-06-11 17:56:01 -03:00 |
| oobabooga | f4defde752 | Add a menu for installing extensions | 2023-06-11 17:11:06 -03:00 |
| oobabooga | 8e73806b20 | Improve "Interface mode" appearance | 2023-06-11 15:29:45 -03:00 |
| oobabooga | a06c953692 | Minor style change | 2023-06-11 15:13:26 -03:00 |
| oobabooga | ac122832f7 | Make dropdown menus more similar to automatic1111 | 2023-06-11 14:20:16 -03:00 |
| Amine Djeghri | 8275dbc68c | Update WSL-installation-guide.md (#2626) | 2023-06-11 12:30:34 -03:00 |
| oobabooga | 6133675e0f | Add menus for saving presets/characters/instruction templates/prompts (#2621) | 2023-06-11 12:19:18 -03:00 |
| oobabooga | ea0eabd266 | Bump llama-cpp-python version | 2023-06-10 21:59:29 -03:00 |
| oobabooga | ec2b5bae39 | Merge pull request #2616 from oobabooga/dev<br>Merge dev branch | 2023-06-10 21:55:59 -03:00 |
| brandonj60 | b04e18d10c | Add Mirostat v2 sampling to transformer models (#2571) | 2023-06-09 21:26:31 -03:00 |
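Commit b04e18d10c adds Mirostat v2 sampling, a feedback controller that holds per-token surprise near a target value. For background, a rough sketch of one Mirostat v2 step. This is simplified from the published algorithm and is not the repository's implementation: the helper name is invented here, real code operates on full-vocabulary logits, and using the truncated distribution's probability for the surprise feedback is a shortcut of this sketch.

```python
import math
import random

def mirostat_v2_step(probs, mu, tau, eta, rng):
    """One simplified Mirostat v2 sampling step.

    probs: token probabilities; mu: current surprise cap (start near 2*tau)
    tau: target surprise in bits; eta: learning rate; rng: random.Random
    Returns (probability of the sampled token, updated mu).
    """
    p = sorted(probs, reverse=True)
    # Truncate: keep tokens whose surprise -log2(p) is below the cap mu
    kept = [x for x in p if -math.log2(x) < mu] or p[:1]
    s = sum(kept)
    kept = [x / s for x in kept]
    # Sample from the truncated, renormalized distribution
    r, cum, choice = rng.random(), 0.0, kept[-1]
    for x in kept:
        cum += x
        if r < cum:
            choice = x
            break
    # Feedback: nudge mu so the observed surprise tracks the target tau
    observed = -math.log2(choice)
    mu -= eta * (observed - tau)
    return choice, mu

# Hypothetical usage on a toy 3-token distribution
rng = random.Random(0)
prob, mu = mirostat_v2_step([0.5, 0.3, 0.2], mu=6.0, tau=3.0, eta=0.1, rng=rng)
```

Sampling a low-probability (high-surprise) token raises the cap's error term and `mu` shrinks, pruning more of the tail on the next step; sampling a high-probability token does the opposite.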
| Author | SHA1 | Message | Date |
|---|---|---|---|
| oobabooga | aff3e04df4 | Remove irrelevant docs<br>Compiling from source, in my tests, makes no difference in the resulting tokens/s. | 2023-06-09 21:15:37 -03:00 |
| oobabooga | d7db25dac9 | Fix a permission | 2023-06-09 01:44:17 -03:00 |
| oobabooga | d033c85cf9 | Fix a permission | 2023-06-09 01:43:22 -03:00 |