Commit Graph

548 Commits

Author SHA1 Message Date
oobabooga
aafd15109d Update README 2023-12-13 22:15:58 -08:00
oobabooga
634518a412 Update README 2023-12-13 22:08:41 -08:00
oobabooga
0d5ca05ab9 Update README 2023-12-13 22:06:04 -08:00
oobabooga
d241de86c4 Update README 2023-12-13 22:02:26 -08:00
oobabooga
36e850fe89 Update README.md 2023-12-13 17:55:41 -03:00
oobabooga
8c8825b777 Add QuIP# to README 2023-12-08 08:40:42 -08:00
oobabooga
f7145544f9 Update README 2023-12-04 15:44:44 -08:00
oobabooga
be88b072e9 Update --loader flag description 2023-12-04 15:41:25 -08:00
Ikko Eltociear Ashimine
06cc9a85f7 README: minor typo fix (#4793) 2023-12-03 22:46:34 -03:00
oobabooga
000b77a17d Minor docker changes 2023-11-29 21:27:23 -08:00
Callum
88620c6b39 feature/docker_improvements (#4768) 2023-11-30 02:20:23 -03:00
oobabooga
ff24648510 Credit llama-cpp-python in the README 2023-11-20 12:13:15 -08:00
oobabooga
ef6feedeb2 Add --nowebui flag for pure API mode (#4651) 2023-11-18 23:38:39 -03:00
oobabooga
8f4f4daf8b Add --admin-key flag for API (#4649) 2023-11-18 22:33:27 -03:00
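These two commits together enable headless deployments: the server runs as a pure API, with a separate admin key for privileged endpoints. Below is a minimal client sketch; the port (5000) and the /v1/internal/model/load route are assumptions based on the API extension's defaults, not guaranteed by these commits.

```python
# Client sketch for a server started headless, e.g.:
#   python server.py --api --nowebui --admin-key secret-admin-key
# Port 5000 and the /v1/internal/* admin routes are assumptions.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/internal/model/load",
    headers={"Authorization": "Bearer secret-admin-key"},  # the --admin-key value
    json={"model_name": "my-model"},  # hypothetical model name
)
print(resp.status_code)
```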
oobabooga
d1a58da52f Update ancient Docker instructions 2023-11-17 19:52:53 -08:00
oobabooga
e0ca49ed9c Bump llama-cpp-python to 0.2.18 (2nd attempt) (#4637)
* Update requirements*.txt
* Add back seed
2023-11-18 00:31:27 -03:00
oobabooga
9d6f79db74 Revert "Bump llama-cpp-python to 0.2.18 (#4611)"
This reverts commit 923c8e25fb.
2023-11-17 05:14:25 -08:00
oobabooga
13dc3b61da Update README 2023-11-16 19:57:55 -08:00
oobabooga
923c8e25fb Bump llama-cpp-python to 0.2.18 (#4611) 2023-11-16 22:55:14 -03:00
oobabooga
322c170566 Document logits_all 2023-11-07 14:45:11 -08:00
oobabooga
d59f1ad89a Update README.md 2023-11-07 13:05:06 -03:00
oobabooga
ec17a5d2b7 Make OpenAI API the default API (#4430) 2023-11-06 02:38:29 -03:00
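With the OpenAI-compatible API as the default, a basic chat request looks roughly like the sketch below, assuming the default port 5000 and no API key configured:

```python
# Basic request against the OpenAI-compatible endpoint;
# the default port (5000) is an assumption.
import requests

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```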
feng lui
4766a57352 transformers: add use_flash_attention_2 option (#4373) 2023-11-04 13:59:33 -03:00
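The new option corresponds to a transformers from_pretrained keyword of the same name. A minimal sketch with a placeholder model name; it assumes the flash-attn package is installed and the GPU supports FlashAttention 2:

```python
import torch
from transformers import AutoModelForCausalLM

# use_flash_attention_2 was the transformers keyword at the time;
# later releases replaced it with attn_implementation="flash_attention_2".
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model
    torch_dtype=torch.float16,   # FlashAttention 2 requires fp16/bf16
    use_flash_attention_2=True,
    device_map="auto",
)
```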
oobabooga
c0655475ae Add cache_8bit option 2023-11-02 11:23:04 -07:00
oobabooga
77abd9b69b Add no_flash_attn option 2023-11-02 11:08:53 -07:00
adrianfiedler
4bc411332f Fix broken links (#4367)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-23 14:09:57 -03:00
oobabooga
df90d03e0b Replace --mul_mat_q with --no_mul_mat_q 2023-10-22 12:23:03 -07:00
oobabooga
caf6db07ad Update README.md 2023-10-22 01:22:17 -03:00
oobabooga
506d05aede Organize command-line arguments 2023-10-21 18:52:59 -07:00
oobabooga
ac6d5d50b7 Update README.md 2023-10-21 20:03:43 -03:00
oobabooga
6efb990b60 Add proper documentation (#3885) 2023-10-21 19:15:54 -03:00
oobabooga
b98fbe0afc Add download link 2023-10-20 23:58:05 -07:00
Brian Dashore
3345da2ea4 Add flash-attention 2 for Windows (#4235) 2023-10-21 03:46:23 -03:00
mjbogusz
8f6405d2fa Python 3.11, 3.9, 3.8 support (#4233)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-20 21:13:33 -03:00
oobabooga
43be1be598 Manually install CUDA runtime libraries 2023-10-12 21:02:44 -07:00
oobabooga
2e8b5f7c80 Update ROCm command 2023-10-08 10:12:13 -03:00
oobabooga
00187d641a Note about PyTorch 2.1 breaking change 2023-10-08 10:10:38 -03:00
oobabooga
1c6e57dd68 Note about PyTorch 2.1 breaking change 2023-10-08 10:09:22 -03:00
oobabooga
d33facc9fe Bump to PyTorch with CUDA 11.8 (#4209) 2023-10-07 00:23:49 -03:00
oobabooga
7ffb424c7b Add AutoAWQ to README 2023-10-05 09:22:37 -07:00
oobabooga
b6fe6acf88 Add threads_batch parameter 2023-10-01 21:28:00 -07:00
StoyanStAtanasov
7e6ff8d1f0 Enable NUMA feature for llama_cpp_python (#4040) 2023-09-26 22:05:00 -03:00
oobabooga
44438c60e5 Add INSTALL_EXTENSIONS environment variable 2023-09-25 13:12:35 -07:00
oobabooga
d0d221df49 Add --use_fast option (closes #3741) 2023-09-25 12:19:43 -07:00
oobabooga
2e7b6b0014 Create alternative requirements.txt with AMD and Metal wheels (#4052) 2023-09-24 09:58:29 -03:00
oobabooga
895ec9dadb Update README.md 2023-09-23 15:37:39 -03:00
oobabooga
299d285ff0 Update README.md 2023-09-23 15:36:09 -03:00
oobabooga
4b4d283a4c Update README.md 2023-09-23 00:09:59 -03:00
oobabooga
0581f1094b Update README.md 2023-09-22 23:31:32 -03:00
oobabooga
968f98a57f Update README.md 2023-09-22 23:23:16 -03:00
oobabooga
72b4ab4c82 Update README 2023-09-22 15:20:09 -07:00
oobabooga
589ee9f623 Update README.md 2023-09-22 16:21:48 -03:00
oobabooga
c33a94e381 Rename doc file 2023-09-22 12:17:47 -07:00
oobabooga
6c5f81f002 Rename webui.py to one_click.py 2023-09-22 12:00:06 -07:00
oobabooga
fe2acdf45f Update README.md 2023-09-22 15:52:20 -03:00
oobabooga
193fe18c8c Resolve conflicts 2023-09-21 17:45:11 -07:00
oobabooga
df39f455ad Merge remote-tracking branch 'second-repo/main' into merge-second-repo 2023-09-21 17:39:54 -07:00
James Braza
fee38e0601 Simplified ExLlama cloning instructions and failure message (#3972) 2023-09-17 19:26:05 -03:00
oobabooga
e75489c252 Update README 2023-09-15 21:04:51 -07:00
missionfloyd
2ad6ca8874 Add back chat buttons with --chat-buttons (#3947) 2023-09-16 00:39:37 -03:00
oobabooga
fb864dad7b Update README 2023-09-15 13:00:46 -07:00
oobabooga
2f935547c8 Minor changes 2023-09-12 15:05:21 -07:00
oobabooga
04a74b3774 Update README 2023-09-12 10:46:27 -07:00
Eve
92f3cd624c Improve instructions for CPUs without AVX2 (#3786) 2023-09-11 11:54:04 -03:00
oobabooga
ed86878f02 Remove GGML support 2023-09-11 07:44:00 -07:00
oobabooga
40ffc3d687 Update README.md 2023-08-30 18:19:04 -03:00
oobabooga
5190e153ed Update README.md 2023-08-30 14:06:29 -03:00
oobabooga
bc4023230b Improved instructions for AMD/Metal/Intel Arc/CPUs without AVX2 2023-08-30 09:40:00 -07:00
missionfloyd
787219267c Allow downloading single file from UI (#3737) 2023-08-29 23:32:36 -03:00
oobabooga
3361728da1 Change some comments 2023-08-26 22:24:44 -07:00
oobabooga
7f5370a272 Minor fixes/cosmetics 2023-08-26 22:11:07 -07:00
oobabooga
83640d6f43 Replace ggml occurrences with gguf 2023-08-26 01:06:59 -07:00
oobabooga
f4f04c8c32 Fix a typo 2023-08-25 07:08:38 -07:00
oobabooga
52ab2a6b9e Add rope_freq_base parameter for CodeLlama 2023-08-25 06:55:15 -07:00
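CodeLlama was trained with a RoPE base frequency of 1,000,000 rather than the Llama default of 10,000, which is what this parameter accounts for. A minimal llama-cpp-python sketch, assuming its Llama constructor's rope_freq_base argument and a placeholder model path:

```python
from llama_cpp import Llama

# rope_freq_base=1_000_000 matches CodeLlama's training setup;
# the model path below is a placeholder.
llm = Llama(
    model_path="./models/codellama-7b.Q4_K_M.gguf",
    rope_freq_base=1_000_000,
    n_ctx=16384,  # CodeLlama tolerates long contexts
)
```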
oobabooga
3320accfdc Add CFG to llamacpp_HF (second attempt) (#3678) 2023-08-24 20:32:21 -03:00
oobabooga
d6934bc7bc Implement CFG for ExLlama_HF (#3666) 2023-08-24 16:27:36 -03:00
oobabooga
1b419f656f Acknowledge a16z support 2023-08-21 11:57:51 -07:00
oobabooga
54df0bfad1 Update README.md 2023-08-18 09:43:15 -07:00
oobabooga
f50f534b0f Add note about AMD/Metal to README 2023-08-18 09:37:20 -07:00
oobabooga
7cba000421 Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q (#3610) 2023-08-18 12:03:34 -03:00
oobabooga
32ff3da941 Update ancient screenshots 2023-08-15 17:16:24 -03:00
oobabooga
87dd85b719 Update README 2023-08-15 12:21:50 -07:00
oobabooga
a03a70bed6 Update README 2023-08-15 12:20:59 -07:00
oobabooga
7089b2a48f Update README 2023-08-15 12:16:21 -07:00
oobabooga
155862a4a0 Update README 2023-08-15 12:11:12 -07:00
cal066
991bb57e43 ctransformers: Fix up model_type name consistency (#3567) 2023-08-14 15:17:24 -03:00
oobabooga
ccfc02a28d Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) 2023-08-14 15:15:55 -03:00
oobabooga
619cb4e78b Add "save defaults to settings.yaml" button (#3574) 2023-08-14 11:46:07 -03:00
Eve
66c04c304d Various ctransformers fixes (#3556)
Co-authored-by: cal066 <cal066@users.noreply.github.com>
2023-08-13 23:09:03 -03:00
oobabooga
a1a9ec895d Unify the 3 interface modes (#3554) 2023-08-13 01:12:15 -03:00
Chris Lefever
0230fa4e9c Add the --disable_exllama option for AutoGPTQ 2023-08-12 02:26:58 -04:00
oobabooga
4c450e6b70 Update README.md 2023-08-11 15:50:16 -03:00
cal066
7a4fcee069 Add ctransformers support (#3313)
Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
2023-08-11 14:41:33 -03:00
oobabooga
949c92d7df Create README.md 2023-08-10 14:32:40 -03:00
oobabooga
c7f52bbdc1 Revert "Remove GPTQ-for-LLaMa monkey patch support"
This reverts commit e3d3565b2a.
2023-08-10 08:39:41 -07:00
jllllll
e3d3565b2a Remove GPTQ-for-LLaMa monkey patch support
AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
2023-08-09 23:59:04 -05:00
jllllll
bee73cedbd Streamline GPTQ-for-LLaMa support 2023-08-09 23:42:34 -05:00
oobabooga
2255349f19 Update README 2023-08-09 05:46:25 -07:00
oobabooga
d8fb506aff Add RoPE scaling support for transformers (including dynamic NTK)
https://github.com/huggingface/transformers/pull/24653
2023-08-08 21:25:48 -07:00
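The linked transformers pull request exposes this through the rope_scaling argument to from_pretrained. A minimal sketch with a placeholder model name ("dynamic" selects dynamic NTK scaling; factor is the context-length multiplier):

```python
from transformers import AutoModelForCausalLM

# rope_scaling follows huggingface/transformers#24653:
# "type" is "linear" or "dynamic" (dynamic NTK); "factor" > 1.0.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model
    rope_scaling={"type": "dynamic", "factor": 2.0},
)
```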
Friedemann Lipphardt
901b028d55 Add option for named Cloudflare tunnels (#3364) 2023-08-08 22:20:27 -03:00