oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena ( #2830 )
2023-06-23 01:48:29 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. ( #2777 )
2023-06-21 15:31:42 -03:00
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
...
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
9f40032d32
Add ExLlama support ( #2444 )
2023-06-16 20:35:38 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely ( #2720 )
2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP ( #2648 )
2023-06-15 23:59:54 -03:00
oobabooga
00b94847da
Remove softprompt support
2023-06-06 07:42:23 -03:00
oobabooga
3a5cfe96f0
Increase chat_prompt_size_max
2023-06-05 17:37:37 -03:00
oobabooga
f276d88546
Use AutoGPTQ by default for GPTQ models
2023-06-05 15:41:48 -03:00
oobabooga
19f78684e6
Add "Start reply with" feature to chat mode
2023-06-02 13:58:08 -03:00
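The "Start reply with" feature above lets the user force the opening words of the bot's reply; a minimal sketch of the idea (the function names are hypothetical, not the project's actual API):

```python
def generate_with_start(prefix, generate_fn):
    # The user-supplied prefix (e.g. "Sure thing!") is treated as the
    # beginning of the bot's own reply, and the model continues from it.
    continuation = generate_fn(prefix)
    return prefix + continuation
```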
LaaZa
9c066601f5
Extend AutoGPTQ support for any GPTQ model ( #1668 )
2023-06-02 01:33:55 -03:00
oobabooga
a83f9aa65b
Update shared.py
2023-06-01 12:08:39 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) ( #2367 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
oobabooga
2f811b1bdf
Change a warning message
2023-05-28 22:48:20 -03:00
oobabooga
00ebea0b2a
Use YAML for presets and settings
2023-05-28 22:34:12 -03:00
oobabooga
8efdc01ffb
Better default for compute_dtype
2023-05-25 15:05:53 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
flurb18
d37a28730d
Beginning of multi-user support ( #2262 )
...
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
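The lock on generate_reply mentioned above serializes generation, so with several users connected only one reply is produced at a time; a minimal sketch of that pattern (names hypothetical):

```python
import threading

# A single module-level lock: concurrent requests queue up here
# instead of running the model simultaneously.
generation_lock = threading.Lock()

def generate_reply(prompt):
    # Hold the lock for the duration of the generation; other
    # requests block until it is released.
    with generation_lock:
        # Stand-in for the real model call.
        return f"reply to: {prompt}"
```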
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag ( #2283 )
2023-05-23 20:39:26 -03:00
oobabooga
cd3618d7fb
Add support for RWKV in Hugging Face format
2023-05-23 02:07:28 -03:00
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json ( #2288 )
2023-05-22 19:45:11 -03:00
oobabooga
d63ef59a0f
Apply LLaMA-Precise preset to Vicuna by default
2023-05-21 23:00:42 -03:00
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
oobabooga
1a8151a2b6
Add AutoGPTQ support (basic) ( #2132 )
2023-05-17 11:12:12 -03:00
Alex "mcmonkey" Goodwin
1f50dbe352
Experimental jank multiGPU inference that's 2x faster than native somehow ( #2100 )
2023-05-17 10:41:09 -03:00
atriantafy
26cf8c2545
Add API port options ( #1990 )

2023-05-15 20:44:16 -03:00
Andrei
e657dd342d
Add in-memory cache support for llama.cpp ( #1936 )
2023-05-15 20:19:55 -03:00
oobabooga
c07215cc08
Improve the default Assistant character
2023-05-15 19:39:08 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option ( #2060 )
2023-05-14 22:58:11 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode ( #2049 )
2023-05-14 10:43:55 -03:00
oobabooga
e283ddc559
Change how spaces are handled in continue/generation attempts
2023-05-12 12:50:29 -03:00
oobabooga
5eaa914e1b
Fix settings.json being ignored because of config.yaml
2023-05-12 06:09:45 -03:00
oobabooga
f7dbddfff5
Add a variable for tts extensions to use
2023-05-11 16:12:46 -03:00
oobabooga
bdf1274b5d
Remove duplicate code
2023-05-10 01:34:04 -03:00
minipasila
334486f527
Add instruct-following template for Metharme ( #1679 )
2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following ( #1596 )
2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) ( #1741 )
2023-05-09 20:18:02 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model ( #1762 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles ( #1917 )
2023-05-08 12:35:03 -03:00
oobabooga
00e333d790
Add MOSS support
2023-05-04 23:20:34 -03:00
oobabooga
b6ff138084
Add --checkpoint argument for GPTQ
2023-05-04 15:17:20 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
oobabooga
f54256e348
Rename no_mmap to no-mmap
2023-05-03 09:50:31 -03:00
Ahmed Said
fbcd32988e
Add no_mmap & mlock parameters to llama.cpp and remove llamacpp_model_alternative ( #1649 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
f39c99fa14
Load more than one LoRA with --lora, fix a bug
2023-04-25 22:58:48 -03:00
oobabooga
b6af2e56a2
Add --character flag, add character to settings.json
2023-04-24 13:19:42 -03:00
eiery
78d1977ebf
Add n_batch support for llama.cpp ( #1115 )
2023-04-24 03:46:18 -03:00
oobabooga
b1ee674d75
Make interface state (mostly) persistent on page reload
2023-04-24 03:05:47 -03:00
Wojtab
12212cf6be
LLaVA support ( #1487 )
2023-04-23 20:32:22 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints ( #990 )
...
Previous title: Add api_streaming extension and update api-example-stream to use it
* Merge with latest main
* Add parameter capturing encoder_repetition_penalty
* Change some defaults, minor fixes
* Add --api, --public-api flags
* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
* Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'
* Update the API examples
* Change a comment
* Update README
* Remove the gradio API
* Remove unused import
* Minor change
* Remove unused import
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
oobabooga
fcb594b90e
Don't require llama.cpp models to be placed in subfolders
2023-04-22 14:56:48 -03:00
oobabooga
7438f4f6ba
Change GPTQ triton default settings
2023-04-22 12:27:30 -03:00
oobabooga
eddd016449
Minor deletion
2023-04-21 12:41:27 -03:00
oobabooga
d46b9b7c50
Fix evaluate comment saving
2023-04-21 12:34:08 -03:00
oobabooga
702fe92d42
Increase truncation_length_max value
2023-04-19 17:35:38 -03:00
oobabooga
ac2973ffc6
Add a warning for --share
2023-04-17 19:34:28 -03:00
oobabooga
89bc540557
Update README
2023-04-17 10:55:35 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef ( #1229 )
2023-04-17 01:11:18 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support ( #1200 )
2023-04-16 23:26:52 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support ( #1256 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model ( #1218 )
2023-04-16 14:24:49 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
...
* Remove eval of API input
* Remove unnecessary eval/exec for security
* Use ast.literal_eval
* Use ast.literal_eval
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
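The security fix above replaces eval of API input with ast.literal_eval, which parses Python literals but rejects arbitrary expressions; a small illustration:

```python
import ast

# ast.literal_eval accepts literals (numbers, strings, lists, dicts, ...)
# but, unlike eval(), refuses to execute code.
params = ast.literal_eval("{'max_new_tokens': 200, 'temperature': 0.7}")

# A payload that eval() would happily run is rejected with an exception.
try:
    ast.literal_eval("__import__('os').system('id')")
    rejected = False
except (ValueError, SyntaxError):
    rejected = True
```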
oobabooga
3a337cfded
Use argparse defaults
2023-04-14 15:35:06 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
Initial multi-LoRA support ( #1103 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name ( #1167 )
2023-04-14 11:07:28 -03:00
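Detection like the commit above describes can be done by pattern-matching the model folder name; this is an illustrative guess at the idea, not the project's actual detection code:

```python
import re

def infer_gptq_settings(model_name):
    """Guess quantization settings from a folder name such as
    'llama-13b-4bit-128g' (illustrative heuristic only)."""
    settings = {}
    # e.g. '4bit' -> wbits=4
    wbits = re.search(r'(\d+)bit', model_name)
    # e.g. '128g' at a word boundary -> groupsize=128
    groupsize = re.search(r'(\d+)g(?:$|[^a-z])', model_name)
    if wbits:
        settings['wbits'] = int(wbits.group(1))
    if groupsize:
        settings['groupsize'] = int(groupsize.group(1))
    return settings
```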
v0xie
9d66957207
Add --listen-host launch option ( #1122 )
2023-04-13 21:35:08 -03:00
Light
cf58058c33
Change warmup_autotune to a negative switch
2023-04-13 20:59:49 +08:00
Light
15d5a043f2
Merge remote-tracking branch 'origin/main' into triton
2023-04-13 19:38:51 +08:00
oobabooga
7dfbe54f42
Add --model-menu option
2023-04-12 21:24:26 -03:00
oobabooga
388038fb8e
Update settings-template.json
2023-04-12 18:30:43 -03:00
oobabooga
1566d8e344
Add model settings to the Models tab
2023-04-12 17:20:18 -03:00
Light
f3591ccfa1
Keep minimal change
2023-04-12 23:26:06 +08:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token
2023-04-11 18:46:06 -03:00
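The two options above can be sketched as follows (hypothetical helper names, not the project's actual implementation):

```python
def apply_truncation(input_ids, truncation_length):
    # Keep only the most recent tokens so the prompt
    # fits inside the model's context window.
    return input_ids[-truncation_length:]

def build_suppress_tokens(ban_eos_token, eos_token_id):
    # With the EOS token banned, it is suppressed during sampling,
    # so generation always continues up to the token limit.
    return [eos_token_id] if ban_eos_token else []
```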
catalpaaa
78bbc66fc4
Allow custom stopping strings in all modes ( #903 )
2023-04-11 12:30:06 -03:00
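Custom stopping strings can be applied by cutting the reply at the earliest match; an illustrative sketch (names hypothetical):

```python
def apply_stopping_strings(reply, stopping_strings):
    """Truncate the reply at the earliest stopping string, if any.

    Returns the (possibly truncated) reply and whether a stop was hit.
    """
    positions = [reply.find(s) for s in stopping_strings if s in reply]
    if not positions:
        return reply, False
    return reply[:min(positions)], True
```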
IggoOnCode
09d8119e3c
Add CPU LoRA training ( #938 )
...
(It's very slow)
2023-04-10 17:29:00 -03:00
oobabooga
bd04ff27ad
Make the bos token optional
2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1
Don't treat Instruct mode histories as regular histories
...
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
MarkovInequality
992663fa20
Added xformers support to Llama ( #950 )
2023-04-09 23:08:40 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability ( #862 )
2023-04-07 00:15:45 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset ( #767 )
2023-04-05 18:52:36 -03:00
oobabooga
e722c240af
Add Instruct mode
2023-04-05 13:54:50 -03:00
oobabooga
65d8a24a6d
Show profile pictures in the Character tab
2023-04-04 22:28:49 -03:00
oobabooga
b24147c7ca
Document --pre_layer
2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270
Update settings template
2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
Display and edit the "character greeting" on the fly ( #743 )
...
* Add greetings field
* add greeting field and make it interactive
* Minor changes
* Fix a bug
* Simplify clear_chat_log
* Change a label
* Minor change
* Simplifications
* Simplification
* Simplify loading the default character history
* Fix regression
---------
Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
oobabooga
d4a9b5ea97
Remove redundant preset (see the plot in #587 )
2023-03-30 17:34:44 -03:00
oobabooga
55755e27b9
Don't hardcode prompts in the settings dict/json
2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160
Adapt to the new model names
2023-03-29 21:47:36 -03:00
oobabooga
010b259dde
Update documentation
2023-03-28 17:46:00 -03:00
oobabooga
036163a751
Change description
2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3
Some simplifications
2023-03-27 23:29:52 -03:00
oobabooga
fde92048af
Merge branch 'main' into catalpaaa-lora-and-model-dir
2023-03-27 23:16:44 -03:00