Author | Commit | Message | Date
oobabooga | 95600073bc | Add an informative error when extension requirements are missing | 2023-12-19 20:20:45 -08:00
Lounger | f9accd38e0 | UI: Update chat instruct styles | 2023-12-20 02:54:08 +01:00
oobabooga | d8279dc710 | Replace character name placeholders in chat context (closes #5007) | 2023-12-19 17:31:46 -08:00
Lounger | ff3e845b04 | UI: Header boy is dropping shadows | 2023-12-20 01:24:34 +01:00
Lounger | 40d5bf6c35 | Set margin on other tabs too | 2023-12-19 23:42:13 +01:00
Lounger | f42074b6c1 | UI: Remove header margin on chat tab | 2023-12-19 23:27:11 +01:00
oobabooga | e83e6cedbe | Organize the model menu | 2023-12-19 13:18:26 -08:00
oobabooga | f4ae0075e8 | Fix conversion from old template format to jinja2 | 2023-12-19 13:16:52 -08:00
oobabooga | de138b8ba6 | Add llama-cpp-python wheels with tensor cores support (#5003) | 2023-12-19 17:30:53 -03:00
oobabooga | 0a299d5959 | Bump llama-cpp-python to 0.2.24 (#5001) | 2023-12-19 15:22:21 -03:00
oobabooga | 83cf1a6b67 | Fix Yi space issue (closes #4996) | 2023-12-19 07:54:19 -08:00
oobabooga | 9847809a7a | Add a warning about ppl evaluation without --no_use_fast | 2023-12-18 18:09:24 -08:00
oobabooga | f6d701624c | UI: mention that QuIP# does not work on Windows | 2023-12-18 18:05:02 -08:00
oobabooga | a23a004434 | Update the example template | 2023-12-18 17:47:35 -08:00
oobabooga | 3d10c574e7 | Fix custom system messages in instruction templates | 2023-12-18 17:45:06 -08:00
dependabot[bot] | 9e48e50428 | Update optimum requirement from ==1.15.* to ==1.16.* (#4986) | 2023-12-18 21:43:29 -03:00
俞航 | 9fa3883630 | Add ROCm wheels for exllamav2 (#4973) | 2023-12-18 21:40:38 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Water | 674be9a09a | Add HQQ quant loader (#4888) | 2023-12-18 21:23:16 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 64a57d9dc2 | Remove duplicate instruction templates | 2023-12-17 21:39:47 -08:00
oobabooga | 1f9e25e76a | UI: update "Saved instruction templates" dropdown after loading template | 2023-12-17 21:19:06 -08:00
oobabooga | da1c8d77ea | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2023-12-17 21:05:10 -08:00
oobabooga | cac89df97b | Instruction templates: better handle unwanted bos tokens | 2023-12-17 21:04:30 -08:00
oobabooga | f0d6ead877 | llama.cpp: read instruction template from GGUF metadata (#4975) | 2023-12-18 01:51:58 -03:00
oobabooga | 3f3cd4fbe4 | UI: improve list style in chat modes | 2023-12-17 20:26:57 -08:00
oobabooga | 306c479d3a | Minor fix to Vigogne-Chat template | 2023-12-17 19:15:54 -08:00
Hirose | 3f973e1fbf | Add detection for Eric Hartford's Dolphin models in models/config.yaml (#4966) | 2023-12-17 23:56:34 -03:00
Eve | 7c6f39382b | Add Orca-Vicuna instruction template (#4971) | 2023-12-17 23:55:23 -03:00
FartyPants (FP HAM) | 59da429cbd | Update Training PRO (#4972) | 2023-12-17 23:54:06 -03:00
    - rolling back safetensors to bin, until it is fixed correctly
    - removing the ugly checkpoint detour
oobabooga | f1f2c4c3f4 | Add --num_experts_per_token parameter (ExLlamav2) (#4955) | 2023-12-17 12:08:33 -03:00
oobabooga | 12690d3ffc | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00
oobabooga | aa200f8723 | UI: remove no longer necessary js in Default/Notebook tabs | 2023-12-16 19:39:00 -08:00
oobabooga | 7a84d7b2da | Instruct style improvements (#4951) | 2023-12-16 22:16:26 -03:00
oobabooga | 41424907b1 | Update README | 2023-12-16 16:35:36 -08:00
oobabooga | d2ed0a06bf | Bump ExLlamav2 to 0.0.11 (adds Mixtral support) | 2023-12-16 16:34:15 -08:00
oobabooga | 0087dca286 | Update README | 2023-12-16 12:28:51 -08:00
oobabooga | f8079d067d | UI: save the sent chat message on "no model is loaded" error | 2023-12-16 10:52:41 -08:00
oobabooga | a060908d6c | Mixtral Instruct: detect prompt format for llama.cpp loader | 2023-12-15 06:59:15 -08:00
    Workaround until the tokenizer.chat_template kv field gets implemented
oobabooga | 3bbf6c601d | AutoGPTQ: Add --disable_exllamav2 flag (Mixtral CPU offloading needs this) | 2023-12-15 06:46:13 -08:00
oobabooga | 7de10f4c8e | Bump AutoGPTQ to 0.6.0 (adds Mixtral support) | 2023-12-15 06:18:49 -08:00
oobabooga | d0677caf2c | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2023-12-15 04:51:41 -08:00
oobabooga | 69ba3cb0d9 | Bump openai-whisper requirement (closes #4848) | 2023-12-15 04:48:04 -08:00
Song Fuchang | 127c71a22a | Update IPEX to 2.1.10+xpu (#4931) | 2023-12-15 03:19:01 -03:00
    * This will require Intel oneAPI Toolkit 2024.0
oobabooga | 85816898f9 | Bump llama-cpp-python to 0.2.23 (including Linux ROCm and MacOS >= 12) (#4930) | 2023-12-15 01:58:08 -03:00
oobabooga | 2cb5b68ad9 | Bug fix: when generation fails, save the sent message (#4915) | 2023-12-15 01:01:45 -03:00
Felipe Ferreira | 11f082e417 | [OpenAI Extension] Add more types to Embeddings Endpoint (#4895) | 2023-12-15 00:26:16 -03:00
Kim Jaewon | e53f99faa0 | [OpenAI Extension] Add 'max_logits' parameter in logits endpoint (#4916) | 2023-12-15 00:22:43 -03:00
oobabooga | eaa1fe67f3 | Remove elevenlabs extension (#4928) | 2023-12-15 00:00:07 -03:00
oobabooga | f336f8a811 | Merge branch 'main' into dev | 2023-12-14 17:38:16 -08:00
oobabooga | dde7921057 | One-click installer: minor message change | 2023-12-14 17:27:32 -08:00
oobabooga | fd1449de20 | One-click installer: fix minor bug introduced in previous commit | 2023-12-14 16:52:44 -08:00