Commit Graph

73 Commits

Author SHA1 Message Date
oobabooga
a060908d6c Mixtral Instruct: detect prompt format for llama.cpp loader
Workaround until the tokenizer.chat_template kv field gets implemented
2023-12-15 06:59:15 -08:00
oobabooga
f4b956b47c Detect yi instruction template 2023-11-27 10:45:47 -08:00
Eve
d06ce7b75c add openhermes mistral support (#4730) 2023-11-27 15:41:06 -03:00
oobabooga
b81d6ad8a4 Detect Orca 2 template (#4697) 2023-11-21 15:26:42 -03:00
deevis
deba039c03 (fix): OpenOrca-Platypus2 models should use correct instruction_template and custom_stopping_strings (#4435) 2023-11-01 01:51:00 -03:00
oobabooga
ef1489cd4d Remove unused parameter in AutoAWQ 2023-10-23 20:45:43 -07:00
Haotian Liu
32984ea2f0 Support LLaVA v1.5 (#4305) 2023-10-20 02:28:14 -03:00
oobabooga
bb71272903 Detect WizardCoder-Python-34B & Phind-CodeLlama-34B 2023-10-19 14:35:56 -07:00
Eve
6e2dec82f1 add chatml support + mistral-openorca (#4275) 2023-10-13 11:49:17 -03:00
cal066
cc632c3f33 AutoAWQ: initial support (#3999) 2023-10-05 13:19:18 -03:00
oobabooga
96da2e1c0d Read more metadata (config.json & quantize_config.json) 2023-09-29 06:14:16 -07:00
oobabooga
1dd13e4643 Read Transformers config.json metadata 2023-09-28 19:19:47 -07:00
oobabooga
92a39c619b Add Mistral support 2023-09-28 15:41:03 -07:00
Gennadij
460c40d8ab Read more GGUF metadata (scale_linear and freq_base) (#3877) 2023-09-12 17:02:42 -03:00
Eve
90fca6a77d add pygmalion-2 and mythalion support (#3821) 2023-09-12 15:57:49 -03:00
oobabooga
c2a309f56e Add ExLlamaV2 and ExLlamav2_HF loaders (#3881) 2023-09-12 14:33:07 -03:00
oobabooga
ed86878f02 Remove GGML support 2023-09-11 07:44:00 -07:00
oobabooga
4affa08821 Do not impose instruct mode while loading models 2023-09-02 11:31:33 -07:00
oobabooga
0bcecaa216 Set mode: instruct for CodeLlama-instruct 2023-08-25 07:59:23 -07:00
oobabooga
5c7d8bfdfd Detect CodeLlama settings 2023-08-25 07:06:57 -07:00
oobabooga
3e7c624f8e Add a template for OpenOrca-Platypus2 2023-08-17 15:03:08 -07:00
cal066
991bb57e43 ctransformers: Fix up model_type name consistency (#3567) 2023-08-14 15:17:24 -03:00
Eve
66c04c304d Various ctransformers fixes (#3556)
Co-authored-by: cal066 <cal066@users.noreply.github.com>
2023-08-13 23:09:03 -03:00
Gennadij
e12a1852d9 Add Vicuna-v1.5 detection (#3524) 2023-08-10 13:42:24 -03:00
oobabooga
a3295dd666 Detect n_gqa and prompt template for wizardlm-70b 2023-08-09 10:51:16 -07:00
GiganticPrime
5bfcfcfc5a Added the logic for starchat model series (#3185) 2023-08-09 09:26:12 -03:00
oobabooga
4ba30f6765 Add OpenChat template 2023-08-08 14:10:04 -07:00
matatonic
32e7cbb635 More models: +StableBeluga2 (#3415) 2023-08-03 16:02:54 -03:00
oobabooga
c8a59d79be Add a template for NewHope 2023-08-01 13:27:29 -07:00
oobabooga
de5de045e0 Set rms_norm_eps to 5e-6 for every llama-2 ggml model, not just 70b 2023-07-26 08:26:56 -07:00
oobabooga
7bc408b472 Change rms_norm_eps to 5e-6 for llama-2-70b ggml
Based on https://github.com/ggerganov/llama.cpp/pull/2384
2023-07-25 14:54:57 -07:00
oobabooga
08c622df2e Autodetect rms_norm_eps and n_gqa for llama-2-70b 2023-07-24 15:27:34 -07:00
oobabooga
e0631e309f Create instruction template for Llama-v2 (#3194) 2023-07-18 17:19:18 -03:00
oobabooga
656b457795 Add Airoboros-v1.2 template 2023-07-17 07:27:42 -07:00
matatonic
3778816b8d models/config.yaml: +platypus/gplatty, +longchat, +vicuna-33b, +Redmond-Hermes-Coder, +wizardcoder, +more (#2928)
* +platypus/gplatty

* +longchat, +vicuna-33b, +Redmond-Hermes-Coder

* +wizardcoder

* +superplatty

* +Godzilla, +WizardLM-V1.1, +rwkv 8k,
+wizard-mega fix </s>

Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
2023-07-11 18:53:48 -03:00
oobabooga
31c297d7e0 Various changes 2023-07-04 18:50:01 -07:00
Honkware
3147f0b8f8 xgen config 2023-06-29 01:32:53 -05:00
matatonic
da0ea9e0f3 set +landmark, +superhot-8k to 8k length (#2903) 2023-06-27 22:05:52 -03:00
oobabooga
c52290de50 ExLlama with long context (#2875) 2023-06-25 22:49:26 -03:00
matatonic
68ae5d8262 more models: +orca_mini (#2859) 2023-06-25 01:54:53 -03:00
matatonic
8c36c19218 8k size only for minotaur-15B (#2815)
Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>
2023-06-24 10:14:19 -03:00
matatonic
d94ea31d54 more models. +minotaur 8k (#2806) 2023-06-21 21:05:08 -03:00
matatonic
90be1d9fe1 More models (match more) & templates (starchat-beta, tulu) (#2790) 2023-06-21 12:30:44 -03:00
matatonic
2220b78e7a models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k (#2580) 2023-06-17 19:14:25 -03:00
oobabooga
8a7a8343be Detect TheBloke_WizardLM-30B-GPTQ 2023-06-09 00:26:34 -03:00
oobabooga
db2cbe7b5a Detect WizardLM-30B-V1.0 instruction format 2023-06-08 11:43:40 -03:00
oobabooga
6a75bda419 Assign some 4096 seq lengths 2023-06-05 12:07:52 -03:00
oobabooga
e61316ce0b Detect airoboros and Nous-Hermes 2023-06-05 11:52:13 -03:00
oobabooga
f344ccdddb Add a template for bluemoon 2023-06-01 14:42:12 -03:00
Carl Kenner
c86231377b Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159) 2023-05-19 11:42:41 -03:00