Commit history (newest first):

193fe18c8c | oobabooga | Resolve conflicts | 2023-09-21 17:45:11 -07:00
df39f455ad | oobabooga | Merge remote-tracking branch 'second-repo/main' into merge-second-repo | 2023-09-21 17:39:54 -07:00
fee38e0601 | James Braza | Simplified ExLlama cloning instructions and failure message (#3972) | 2023-09-17 19:26:05 -03:00
e75489c252 | oobabooga | Update README | 2023-09-15 21:04:51 -07:00
2ad6ca8874 | missionfloyd | Add back chat buttons with --chat-buttons (#3947) | 2023-09-16 00:39:37 -03:00
fb864dad7b | oobabooga | Update README | 2023-09-15 13:00:46 -07:00
2f935547c8 | oobabooga | Minor changes | 2023-09-12 15:05:21 -07:00
04a74b3774 | oobabooga | Update README | 2023-09-12 10:46:27 -07:00
92f3cd624c | Eve | Improve instructions for CPUs without AVX2 (#3786) | 2023-09-11 11:54:04 -03:00
ed86878f02 | oobabooga | Remove GGML support | 2023-09-11 07:44:00 -07:00
40ffc3d687 | oobabooga | Update README.md | 2023-08-30 18:19:04 -03:00
5190e153ed | oobabooga | Update README.md | 2023-08-30 14:06:29 -03:00
bc4023230b | oobabooga | Improved instructions for AMD/Metal/Intel Arc/CPUs without AVCX2 | 2023-08-30 09:40:00 -07:00
787219267c | missionfloyd | Allow downloading single file from UI (#3737) | 2023-08-29 23:32:36 -03:00
3361728da1 | oobabooga | Change some comments | 2023-08-26 22:24:44 -07:00
7f5370a272 | oobabooga | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
83640d6f43 | oobabooga | Replace ggml occurences with gguf | 2023-08-26 01:06:59 -07:00
f4f04c8c32 | oobabooga | Fix a typo | 2023-08-25 07:08:38 -07:00
52ab2a6b9e | oobabooga | Add rope_freq_base parameter for CodeLlama | 2023-08-25 06:55:15 -07:00
3320accfdc | oobabooga | Add CFG to llamacpp_HF (second attempt) (#3678) | 2023-08-24 20:32:21 -03:00
d6934bc7bc | oobabooga | Implement CFG for ExLlama_HF (#3666) | 2023-08-24 16:27:36 -03:00
1b419f656f | oobabooga | Acknowledge a16z support | 2023-08-21 11:57:51 -07:00
54df0bfad1 | oobabooga | Update README.md | 2023-08-18 09:43:15 -07:00
f50f534b0f | oobabooga | Add note about AMD/Metal to README | 2023-08-18 09:37:20 -07:00
7cba000421 | oobabooga | Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q (#3610) | 2023-08-18 12:03:34 -03:00
32ff3da941 | oobabooga | Update ancient screenshots | 2023-08-15 17:16:24 -03:00
87dd85b719 | oobabooga | Update README | 2023-08-15 12:21:50 -07:00
a03a70bed6 | oobabooga | Update README | 2023-08-15 12:20:59 -07:00
7089b2a48f | oobabooga | Update README | 2023-08-15 12:16:21 -07:00
155862a4a0 | oobabooga | Update README | 2023-08-15 12:11:12 -07:00
991bb57e43 | cal066 | ctransformers: Fix up model_type name consistency (#3567) | 2023-08-14 15:17:24 -03:00
ccfc02a28d | oobabooga | Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) | 2023-08-14 15:15:55 -03:00
619cb4e78b | oobabooga | Add "save defaults to settings.yaml" button (#3574) | 2023-08-14 11:46:07 -03:00
66c04c304d | Eve | Various ctransformers fixes (#3556) | 2023-08-13 23:09:03 -03:00
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
a1a9ec895d | oobabooga | Unify the 3 interface modes (#3554) | 2023-08-13 01:12:15 -03:00
0230fa4e9c | Chris Lefever | Add the --disable_exllama option for AutoGPTQ | 2023-08-12 02:26:58 -04:00
4c450e6b70 | oobabooga | Update README.md | 2023-08-11 15:50:16 -03:00
7a4fcee069 | cal066 | Add ctransformers support (#3313) | 2023-08-11 14:41:33 -03:00
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
    Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
949c92d7df | oobabooga | Create README.md | 2023-08-10 14:32:40 -03:00
c7f52bbdc1 | oobabooga | Revert "Remove GPTQ-for-LLaMa monkey patch support" | 2023-08-10 08:39:41 -07:00
    This reverts commit e3d3565b2a.
e3d3565b2a | jllllll | Remove GPTQ-for-LLaMa monkey patch support | 2023-08-09 23:59:04 -05:00
    AutoGPTQ will be the preferred GPTQ LoRa loader in the future.
bee73cedbd | jllllll | Streamline GPTQ-for-LLaMa support | 2023-08-09 23:42:34 -05:00
2255349f19 | oobabooga | Update README | 2023-08-09 05:46:25 -07:00
d8fb506aff | oobabooga | Add RoPE scaling support for transformers (including dynamic NTK) | 2023-08-08 21:25:48 -07:00
    https://github.com/huggingface/transformers/pull/24653
901b028d55 | Friedemann Lipphardt | Add option for named cloudflare tunnels (#3364) | 2023-08-08 22:20:27 -03:00
8df3cdfd51 | oobabooga | Add SSL certificate support (#3453) | 2023-08-04 13:57:31 -03:00
4e6dc6d99d | oobabooga | Add Contributing guidelines | 2023-08-03 14:40:28 -07:00
87dab03dc0 | oobabooga | Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) | 2023-08-03 11:00:36 -03:00
b17893a58f | oobabooga | Revert "Add tensor split support for llama.cpp (#3171)" | 2023-07-26 07:06:01 -07:00
    This reverts commit 031fe7225e.
69f8b35bc9 | oobabooga | Revert changes to README | 2023-07-25 20:51:19 -07:00
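A listing in this one-line-per-commit shape (hash, author, subject, date) can be regenerated from any clone of the repository with git's pretty-format placeholders. The sketch below is hypothetical and self-contained: it builds a throwaway repository with a single placeholder commit (author name `demo` and subject `Update README` are illustrative, not taken from the history above) just to demonstrate the format string.

```shell
# Hypothetical sketch: demonstrate the log format using a temporary repo.
# Assumes git is installed; repo content and identity are placeholders.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Update README"
# %h = short hash, %an = author name, %s = subject, %ad = author date
git log --pretty=format:'%h | %an | %s | %ad' --date=iso
```

Against a real clone, the same `git log --pretty=format:'%h | %an | %s | %ad' --date=iso` invocation (without the temporary-repo setup) reproduces the listing above, including the timezone-offset dates.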