Author | Commit | Message | Date
oobabooga | 31c297d7e0 | Various changes | 2023-07-04 18:50:01 -07:00
Honkware | 3147f0b8f8 | xgen config | 2023-06-29 01:32:53 -05:00
matatonic | da0ea9e0f3 | set +landmark, +superhot-8k to 8k length (#2903) | 2023-06-27 22:05:52 -03:00
oobabooga | c52290de50 | ExLlama with long context (#2875) | 2023-06-25 22:49:26 -03:00
matatonic | 68ae5d8262 | more models: +orca_mini (#2859) | 2023-06-25 01:54:53 -03:00
matatonic | 8c36c19218 | 8k size only for minotaur-15B (#2815) (Co-authored-by: Matthew Ashton <mashton-gitlab@zhero.org>) | 2023-06-24 10:14:19 -03:00
matatonic | d94ea31d54 | more models. +minotaur 8k (#2806) | 2023-06-21 21:05:08 -03:00
matatonic | 90be1d9fe1 | More models (match more) & templates (starchat-beta, tulu) (#2790) | 2023-06-21 12:30:44 -03:00
matatonic | 2220b78e7a | models/config.yaml: +alpacino, +alpasta, +hippogriff, +gpt4all-snoozy, +lazarus, +based, -airoboros 4k (#2580) | 2023-06-17 19:14:25 -03:00
oobabooga | 8a7a8343be | Detect TheBloke_WizardLM-30B-GPTQ | 2023-06-09 00:26:34 -03:00
oobabooga | db2cbe7b5a | Detect WizardLM-30B-V1.0 instruction format | 2023-06-08 11:43:40 -03:00
oobabooga | 6a75bda419 | Assign some 4096 seq lengths | 2023-06-05 12:07:52 -03:00
oobabooga | e61316ce0b | Detect airoboros and Nous-Hermes | 2023-06-05 11:52:13 -03:00
oobabooga | f344ccdddb | Add a template for bluemoon | 2023-06-01 14:42:12 -03:00
Carl Kenner | c86231377b | Wizard Mega, Ziya, KoAlpaca, OpenBuddy, Chinese-Vicuna, Vigogne, Bactrian, H2O support, fix Baize (#2159) | 2023-05-19 11:42:41 -03:00
oobabooga | 499c2e009e | Remove problematic regex from models/config.yaml | 2023-05-19 11:20:35 -03:00
oobabooga | fcb46282c5 | Add a rule to config.yaml | 2023-05-12 06:11:58 -03:00
oobabooga | 5eaa914e1b | Fix settings.json being ignored because of config.yaml | 2023-05-12 06:09:45 -03:00
matatonic | 309b72e549 | [extension/openai] add edits & image endpoints & fix prompt return in non --chat modes (#1935) | 2023-05-11 11:06:39 -03:00
oobabooga | dfd9ba3e90 | Remove duplicate code | 2023-05-10 02:07:22 -03:00
minipasila | 334486f527 | Added instruct-following template for Metharme (#1679) | 2023-05-09 22:29:22 -03:00
Carl Kenner | 814f754451 | Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) | 2023-05-09 20:37:31 -03:00
Matthew McAllister | 06c7db017d | Add config for pygmalion-7b and metharme-7b (#1887) | 2023-05-09 20:31:27 -03:00
oobabooga | b5260b24f1 | Add support for custom chat styles (#1917) | 2023-05-08 12:35:03 -03:00
oobabooga | 00e333d790 | Add MOSS support | 2023-05-04 23:20:34 -03:00
oobabooga | 97a6a50d98 | Use oasst tokenizer instead of universal tokenizer | 2023-05-04 15:55:39 -03:00
oobabooga | dbddedca3f | Detect oasst-sft-6-llama-30b | 2023-05-04 15:13:37 -03:00
oobabooga | 91745f63c3 | Use Vicuna-v0 by default for Vicuna models | 2023-04-26 17:45:38 -03:00
TiagoGF | a941c19337 | Fixing Vicuna text generation (#1579) | 2023-04-26 16:20:27 -03:00
oobabooga | d87ca8f2af | LLaVA fixes | 2023-04-26 03:47:34 -03:00
oobabooga | a777c058af | Precise prompts for instruct mode | 2023-04-26 03:21:53 -03:00
Wojtab | 12212cf6be | LLaVA support (#1487) | 2023-04-23 20:32:22 -03:00
Forkoz | c6fe1ced01 | Add ChatGLM support (#1256) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2023-04-16 19:15:03 -03:00
oobabooga | cb95a2432c | Add Koala support | 2023-04-16 14:41:06 -03:00
oobabooga | b937c9d8c2 | Add skip_special_tokens checkbox for Dolly model (#1218) | 2023-04-16 14:24:49 -03:00
oobabooga | 7d7d122edb | Cover one more model | 2023-04-14 11:15:59 -03:00
oobabooga | 8eba88061a | Remove unused config | 2023-04-14 11:12:17 -03:00
oobabooga | 8e31f2bad4 | Automatically set wbits/groupsize/instruct based on model name (#1167) | 2023-04-14 11:07:28 -03:00