kaiokendev
5a4bd3918c
Add SuperBIG extension (alpha) ( #1548 )
...
---------
Co-authored-by: kaiokendev <>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-07 03:50:12 -03:00
oobabooga
81be7c2dd4
Specify gradio_client version
2023-05-06 21:50:04 -03:00
oobabooga
85238de421
Remove unused variable
2023-05-06 11:03:12 -03:00
oobabooga
de9c4e260e
Minor fixes to elevenlabs_tts
2023-05-06 10:57:34 -03:00
Steve Randall
b03a2ac512
Elevenlabs Extension Improvement and migration to official API ( #1830 )
2023-05-06 10:56:31 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
ee3c8a893e
Update Extensions.md
2023-05-05 19:04:50 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions ( #1817 )
2023-05-05 18:53:03 -03:00
Tom Jobbins
876fbb97c0
Allow downloading model from HF branch via UI ( #1662 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-05 13:59:01 -03:00
oobabooga
849ad04c96
Change background color of instruct code blocks
2023-05-05 12:02:45 -03:00
oobabooga
c728f2b5f0
Better handle new line characters in code blocks
2023-05-05 11:22:36 -03:00
oobabooga
207a031e8d
CSS change to instruct mode
2023-05-05 00:36:15 -03:00
oobabooga
e5d6d822b1
Minor CSS change to instruct mode
2023-05-04 23:41:00 -03:00
oobabooga
a50c2ab82a
Add missing file
2023-05-04 23:29:46 -03:00
oobabooga
00e333d790
Add MOSS support
2023-05-04 23:20:34 -03:00
oobabooga
f673f4a4ca
Change --verbose behavior
2023-05-04 15:56:06 -03:00
oobabooga
97a6a50d98
Use oasst tokenizer instead of universal tokenizer
2023-05-04 15:55:39 -03:00
oobabooga
b6ff138084
Add --checkpoint argument for GPTQ
2023-05-04 15:17:20 -03:00
oobabooga
dbddedca3f
Detect oasst-sft-6-llama-30b
2023-05-04 15:13:37 -03:00
Wojtek Kowaluk
1436c5845a
fix ggml detection regex in model downloader ( #1779 )
2023-05-04 11:48:36 -03:00
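The regex fix above concerns detecting ggml model files by name in the downloader. A minimal sketch of that kind of filename check (the pattern below is an illustrative assumption, not the downloader's actual regex):

```python
import re

# Match quantized ggml weight files (e.g. "ggml-model-q4_0.bin")
# while ignoring ordinary PyTorch checkpoints. Pattern is illustrative.
ggml_re = re.compile(r"ggml.*\.bin$")

print(bool(ggml_re.search("ggml-model-q4_0.bin")))   # ggml file: matches
print(bool(ggml_re.search("pytorch_model.bin")))     # regular checkpoint: no match
```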
Mylo
bd531c2dc2
Make --trust-remote-code work for all models ( #1772 )
2023-05-04 02:01:28 -03:00
oobabooga
0e6d17304a
Clearer syntax for instruction-following characters
2023-05-03 22:50:39 -03:00
oobabooga
9c77ab4fc2
Improve some warnings
2023-05-03 22:06:46 -03:00
oobabooga
057b1b2978
Add credits
2023-05-03 21:49:55 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
oobabooga
0a48b29cd8
Prevent websocket disconnection on the client side
2023-05-03 20:44:30 -03:00
oobabooga
4bf7253ec5
Fix typing bug in api
2023-05-03 19:27:20 -03:00
oobabooga
d6410a1b36
Bump recommended monkey patch commit
2023-05-03 14:49:25 -03:00
oobabooga
60be76f0fc
Revert gradio bump (gallery is broken)
2023-05-03 11:53:30 -03:00
Thireus ☠
4883e20fa7
Fix openai extension script.py - TypeError: '_Environ' object is not callable ( #1753 )
2023-05-03 09:51:49 -03:00
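The `TypeError: '_Environ' object is not callable` fixed in #1753 comes from a common slip: calling `os.environ` as if it were a function. A minimal reproduction of the pattern (variable and key names are hypothetical, not taken from the extension):

```python
import os

# Buggy pattern -- raises TypeError: '_Environ' object is not callable:
#     api_key = os.environ("OPENAI_API_KEY")
# os.environ is a mapping, not a function; index it or use .get() instead:
api_key = os.environ.get("OPENAI_API_KEY", "sk-dummy")
print(api_key)
```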
oobabooga
f54256e348
Rename no_mmap to no-mmap
2023-05-03 09:50:31 -03:00
oobabooga
875da16b7b
Minor CSS improvements in chat mode
2023-05-02 23:38:51 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token ( #1678 )
...
Currently padding with 0 the character vs 0 the token id (<unk> in the case of llama)
2023-05-02 23:16:08 -03:00
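The pad-token fix above hinges on the difference between the *character* "0" and the *token id* 0. A toy illustration of why the two are not interchangeable (the vocabulary mapping below is a made-up llama-like example, not real tokenizer data):

```python
# Padding with the character "0" inserts whatever token id the tokenizer
# maps "0" to; padding with token id 0 inserts <unk> for llama vocabularies.
pad_char = "0"        # wrong: a literal character
pad_token_id = 0      # right: token id 0 -> <unk> in llama's vocab

vocab = {"<unk>": 0, "0": 28734}  # toy mapping for illustration only
print(vocab[pad_char], pad_token_id)  # two different tokens
```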
Wojtab
80c2f25131
LLaVA: small fixes ( #1664 )
...
* change multimodal projector to the correct one
* remove reference to custom stopping strings from readme
* fix stopping strings if tokenizer extension adds/removes tokens
* add API example
* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
oobabooga
c31b0f15a7
Remove some spaces
2023-05-02 23:07:07 -03:00
oobabooga
320fcfde4e
Style/pep8 improvements
2023-05-02 23:05:38 -03:00
oobabooga
ecd79caa68
Update Extensions.md
2023-05-02 22:52:32 -03:00
matatonic
7ac41b87df
add openai compatible api ( #1475 )
2023-05-02 22:49:53 -03:00
oobabooga
4e09df4034
Only show extension in UI if it has an ui() function
2023-05-02 19:20:02 -03:00
oobabooga
d016c38640
Bump gradio version
2023-05-02 19:19:33 -03:00
oobabooga
88cdf6ed3d
Prevent websocket from disconnecting
2023-05-02 19:03:19 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative ( #1649 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input ( #1707 )
2023-05-02 01:24:56 -03:00
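"Print special tokens in input" in #1707 refers to keeping markers such as `<s>` and `</s>` visible when logging the decoded prompt. A self-contained sketch with a stand-in decoder (the real code would call `tokenizer.decode(ids, skip_special_tokens=False)` on a Hugging Face tokenizer):

```python
# Toy decoder: token table and angle-bracket heuristic are illustrative.
def decode(ids, skip_special_tokens=True):
    table = {0: "<s>", 1: "Hello", 2: "</s>"}
    toks = [table[i] for i in ids]
    if skip_special_tokens:
        toks = [t for t in toks if not (t.startswith("<") and t.endswith(">"))]
    return " ".join(toks)

# Verbose logging should use skip_special_tokens=False so the markers show:
visible = decode([0, 1, 2], skip_special_tokens=False)
print(visible)  # "<s> Hello </s>"
```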
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing ( #1625 )
2023-05-02 01:21:54 -03:00
oobabooga
e6a78c00f2
Update Docker.md
2023-05-02 00:51:10 -03:00
Tom Jobbins
3c67fc0362
Allow groupsize 1024, needed for larger models eg 30B to lower VRAM usage ( #1660 )
2023-05-02 00:46:26 -03:00
Lawrence M Stewart
78bd4d3a5c
Update LLaMA-model.md ( #1700 )
...
protobuf needs to be 3.20.x or lower
2023-05-02 00:44:09 -03:00
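The doc change above pins protobuf to 3.20.x or lower. A sketch of a runtime guard for that constraint (the installed version string is a hypothetical example):

```python
# Check that a protobuf version string satisfies "3.20.x or lower".
installed = "3.20.3"  # hypothetical; in practice read google.protobuf.__version__
major, minor, *_ = (int(p) for p in installed.split("."))
assert (major, minor) <= (3, 20), "protobuf must be 3.20.x or lower"
print(major, minor)
```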
Dhaladom
f659415170
fixed variable name "context" to "prompt" ( #1716 )
2023-05-02 00:43:40 -03:00
dependabot[bot]
280c2f285f
Bump safetensors from 0.3.0 to 0.3.1 ( #1720 )
2023-05-02 00:42:39 -03:00
oobabooga
56b13d5d48
Bump llama-cpp-python version
2023-05-02 00:41:54 -03:00