Commit Graph

3271 Commits

Author SHA1 Message Date
oobabooga
71eb744b1c Merge pull request #5002 from oobabooga/dev 2023-12-19 15:24:40 -03:00
  Merge dev branch
oobabooga
0a299d5959 Bump llama-cpp-python to 0.2.24 (#5001) 2023-12-19 15:22:21 -03:00
oobabooga
83cf1a6b67 Fix Yi space issue (closes #4996) 2023-12-19 07:54:19 -08:00
oobabooga
781367bdc3 Merge pull request #4988 from oobabooga/dev 2023-12-18 23:42:16 -03:00
  Merge dev branch
oobabooga
9847809a7a Add a warning about ppl evaluation without --no_use_fast 2023-12-18 18:09:24 -08:00
oobabooga
f6d701624c UI: mention that QuIP# does not work on Windows 2023-12-18 18:05:02 -08:00
oobabooga
a23a004434 Update the example template 2023-12-18 17:47:35 -08:00
oobabooga
3d10c574e7 Fix custom system messages in instruction templates 2023-12-18 17:45:06 -08:00
dependabot[bot]
9e48e50428 Update optimum requirement from ==1.15.* to ==1.16.* (#4986) 2023-12-18 21:43:29 -03:00
俞航
9fa3883630 Add ROCm wheels for exllamav2 (#4973) 2023-12-18 21:40:38 -03:00
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Water
674be9a09a Add HQQ quant loader (#4888) 2023-12-18 21:23:16 -03:00
  Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga
b28020a9e4 Merge pull request #4980 from oobabooga/dev 2023-12-18 10:11:32 -03:00
  Merge dev branch
oobabooga
64a57d9dc2 Remove duplicate instruction templates 2023-12-17 21:39:47 -08:00
oobabooga
1f9e25e76a UI: update "Saved instruction templates" dropdown after loading template 2023-12-17 21:19:06 -08:00
oobabooga
da1c8d77ea Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-12-17 21:05:10 -08:00
oobabooga
cac89df97b Instruction templates: better handle unwanted bos tokens 2023-12-17 21:04:30 -08:00
oobabooga
f0d6ead877 llama.cpp: read instruction template from GGUF metadata (#4975) 2023-12-18 01:51:58 -03:00
oobabooga
3f3cd4fbe4 UI: improve list style in chat modes 2023-12-17 20:26:57 -08:00
oobabooga
306c479d3a Minor fix to Vigogne-Chat template 2023-12-17 19:15:54 -08:00
Hirose
3f973e1fbf Add detection for Eric Hartford's Dolphin models in models/config.yaml (#4966) 2023-12-17 23:56:34 -03:00
Eve
7c6f39382b Add Orca-Vicuna instruction template (#4971) 2023-12-17 23:55:23 -03:00
FartyPants (FP HAM)
59da429cbd Update Training PRO (#4972) 2023-12-17 23:54:06 -03:00
  - rolling back safetensors to bin, until it is fixed correctly
  - removing the ugly checkpoint detour
oobabooga
7be09836fc Merge pull request #4961 from oobabooga/dev 2023-12-17 12:11:13 -03:00
  Merge dev branch
oobabooga
f1f2c4c3f4 Add --num_experts_per_token parameter (ExLlamav2) (#4955) 2023-12-17 12:08:33 -03:00
oobabooga
12690d3ffc Better HF grammar implementation (#4953) 2023-12-17 02:01:23 -03:00
oobabooga
aa200f8723 UI: remove no longer necessary js in Default/Notebook tabs 2023-12-16 19:39:00 -08:00
oobabooga
7a84d7b2da Instruct style improvements (#4951) 2023-12-16 22:16:26 -03:00
oobabooga
41424907b1 Update README 2023-12-16 16:35:36 -08:00
oobabooga
d2ed0a06bf Bump ExLlamav2 to 0.0.11 (adds Mixtral support) 2023-12-16 16:34:15 -08:00
oobabooga
0087dca286 Update README 2023-12-16 12:28:51 -08:00
oobabooga
f8079d067d UI: save the sent chat message on "no model is loaded" error 2023-12-16 10:52:41 -08:00
oobabooga
443be391f2 Merge pull request #4937 from oobabooga/dev 2023-12-15 12:03:22 -03:00
  Merge dev branch
oobabooga
a060908d6c Mixtral Instruct: detect prompt format for llama.cpp loader 2023-12-15 06:59:15 -08:00
  Workaround until the tokenizer.chat_template kv field gets implemented
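For context on the workaround above: once a model's GGUF file carries the tokenizer.chat_template key-value field, the template can be read straight from the metadata instead of being guessed from the model name. A minimal sketch of that lookup, assuming the gguf Python package and a hypothetical file path:

```python
# Minimal sketch: read a chat template from GGUF metadata, if present.
# Assumes the `gguf` package (pip install gguf); the model path is hypothetical.
from gguf import GGUFReader

reader = GGUFReader("models/mixtral-instruct.Q4_K_M.gguf")

# GGUF metadata is a set of key-value fields; a string value lives in
# field.parts at the index given by field.data.
field = reader.fields.get("tokenizer.chat_template")
if field is None:
    print("No chat template in metadata; fall back to name-based detection.")
else:
    template = field.parts[field.data[0]].tobytes().decode("utf-8")
    print(template)
```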
oobabooga
3bbf6c601d AutoGPTQ: Add --disable_exllamav2 flag (Mixtral CPU offloading needs this) 2023-12-15 06:46:13 -08:00
oobabooga
7de10f4c8e Bump AutoGPTQ to 0.6.0 (adds Mixtral support) 2023-12-15 06:18:49 -08:00
oobabooga
d0677caf2c Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-12-15 04:51:41 -08:00
oobabooga
69ba3cb0d9 Bump openai-whisper requirement (closes #4848) 2023-12-15 04:48:04 -08:00
Song Fuchang
127c71a22a Update IPEX to 2.1.10+xpu (#4931) 2023-12-15 03:19:01 -03:00
  * This will require Intel oneAPI Toolkit 2024.0
oobabooga
85816898f9 Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930) 2023-12-15 01:58:08 -03:00
oobabooga
2cb5b68ad9 Bug fix: when generation fails, save the sent message (#4915) 2023-12-15 01:01:45 -03:00
Felipe Ferreira
11f082e417 [OpenAI Extension] Add more types to Embeddings Endpoint (#4895) 2023-12-15 00:26:16 -03:00
Kim Jaewon
e53f99faa0 [OpenAI Extension] Add 'max_logits' parameter in logits endpoint (#4916) 2023-12-15 00:22:43 -03:00
oobabooga
eaa1fe67f3 Remove elevenlabs extension (#4928) 2023-12-15 00:00:07 -03:00
oobabooga
c3e0fcfc52 Merge pull request #4927 from oobabooga/dev 2023-12-14 22:39:08 -03:00
  Merge dev branch
oobabooga
f336f8a811 Merge branch 'main' into dev 2023-12-14 17:38:16 -08:00
oobabooga
dde7921057 One-click installer: minor message change 2023-12-14 17:27:32 -08:00
oobabooga
fd1449de20 One-click installer: fix minor bug introduced in previous commit 2023-12-14 16:52:44 -08:00
oobabooga
4ae2dcebf5 One-click installer: more friendly progress messages 2023-12-14 16:48:00 -08:00
oobabooga
8acecf3aee Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) 2023-12-14 09:41:36 -08:00
oobabooga
8835ea3704 Bump llama-cpp-python to 0.2.23 (NVIDIA & CPU-only, no AMD, no Metal) (#4924) 2023-12-14 14:39:43 -03:00