Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input (#1707)
2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing (#1625)
2023-05-02 01:21:54 -03:00
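The stopping-strings commit above concerns cutting generated text at the first stop sequence. A minimal sketch of that idea (function name and structure are illustrative, not the project's actual code):

```python
def truncate_at_stop(text, stopping_strings):
    """Cut `text` at the earliest occurrence of any stopping string.

    Returns (truncated_text, stopped), where `stopped` is True if a
    stopping string was found and everything from it onward removed.
    """
    earliest = None
    for stop in stopping_strings:
        idx = text.find(stop)
        if idx != -1 and (earliest is None or idx < earliest):
            earliest = idx
    if earliest is not None:
        return text[:earliest], True
    return text, False
```

During streaming, an implementation also has to check whether the text currently ends with a *prefix* of a stopping string, so a partial match is held back rather than shown to the user.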
oobabooga
e6a78c00f2
Update Docker.md
2023-05-02 00:51:10 -03:00
Tom Jobbins
3c67fc0362
Allow groupsize 1024, needed for larger models (e.g. 30B) to lower VRAM usage (#1660)
2023-05-02 00:46:26 -03:00
Lawrence M Stewart
78bd4d3a5c
Update LLaMA-model.md (#1700)
protobuf needs to be 3.20.x or lower
2023-05-02 00:44:09 -03:00
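The note above pins protobuf to 3.20.x or lower (the 4.21 runtime introduced breaking changes for older generated code). A small version-gate sketch one might use to detect an incompatible install; pure string parsing, no protobuf import, and the helper name is illustrative:

```python
def protobuf_ok(version: str) -> bool:
    """Return True if `version` is 3.20.x or lower, per the constraint above."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) <= (3, 20)
```

In practice the same constraint is usually expressed declaratively, e.g. `protobuf<3.21` in requirements.txt.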
Dhaladom
f659415170
renamed variable "context" to "prompt" (#1716)
2023-05-02 00:43:40 -03:00
dependabot[bot]
280c2f285f
Bump safetensors from 0.3.0 to 0.3.1 (#1720)
2023-05-02 00:42:39 -03:00
oobabooga
56b13d5d48
Bump llama-cpp-python version
2023-05-02 00:41:54 -03:00
Lőrinc Pap
ee68ec9079
Update folder produced by download-model (#1601)
2023-04-27 12:03:02 -03:00
oobabooga
91745f63c3
Use Vicuna-v0 by default for Vicuna models
2023-04-26 17:45:38 -03:00
oobabooga
93e5c066ae
Update RWKV Raven template
2023-04-26 17:31:03 -03:00
oobabooga
c83210c460
Move the rstrips
2023-04-26 17:17:22 -03:00
oobabooga
1d8b8222e9
Revert #1579, apply the proper fix
Apparently models dislike trailing spaces.
2023-04-26 16:47:50 -03:00
TiagoGF
a941c19337
Fixing Vicuna text generation (#1579)
2023-04-26 16:20:27 -03:00
oobabooga
d87ca8f2af
LLaVA fixes
2023-04-26 03:47:34 -03:00
oobabooga
9c2e7c0fab
Fix path on models.py
2023-04-26 03:29:09 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7
Fix bug in models.py
2023-04-26 01:55:40 -03:00
oobabooga
4c491aa142
Add Alpaca prompt with Input field
2023-04-25 23:50:32 -03:00
oobabooga
68ed73dd89
Make API extension print its exceptions
2023-04-25 23:23:47 -03:00
oobabooga
f642135517
Make universal tokenizer, xformers, sdp-attention apply to monkey patch
2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14
Load more than one LoRA with --lora, fix a bug
2023-04-25 22:58:48 -03:00
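Loading more than one LoRA with a single `--lora` flag implies the flag accepts a list of names. A hedged argparse sketch of how such a CLI option can be declared (the flag name matches the commit; the rest is illustrative, not the project's actual parser):

```python
import argparse

parser = argparse.ArgumentParser()
# nargs="+" lets the flag accept one or more LoRA names: --lora a b c
parser.add_argument("--lora", nargs="+", default=[],
                    help="Name(s) of the LoRA adapter(s) to load")

# Example invocation with two adapters (hypothetical adapter names):
args = parser.parse_args(["--lora", "alpaca-lora", "my-other-lora"])
```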
oobabooga
15940e762e
Fix missing initial space for LlamaTokenizer
2023-04-25 22:47:23 -03:00
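The "missing initial space" fix above relates to SentencePiece-style tokenizers, which mark word boundaries with the ▁ (U+2581) metasymbol and conventionally drop the leading space on decode. A pure-Python sketch of the effect and the workaround, with no transformers dependency (function names are illustrative):

```python
SP_SPACE = "\u2581"  # SentencePiece word-boundary marker

def decode_pieces(pieces):
    """Conventional decode: join pieces, turn boundary markers into
    spaces, and strip the space at the very start of the string."""
    text = "".join(pieces).replace(SP_SPACE, " ")
    return text[1:] if text.startswith(" ") else text

def decode_keep_leading_space(pieces):
    """Variant that preserves the leading space, which matters when the
    decoded continuation is concatenated onto an existing prompt."""
    return "".join(pieces).replace(SP_SPACE, " ")
```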
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) (#1535)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
USBhost
95aa43b9c2
Update LLaMA download docs
2023-04-25 21:28:15 -03:00
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 (#1546)
* full dynamic model type support on modern peft
* remove shuffle option
2023-04-25 21:27:30 -03:00
Wojtab
65beb51b0b
fix returned dtypes for LLaVA (#1547)
2023-04-25 21:25:34 -03:00
oobabooga
9b272bc8e5
Monkey patch fixes
2023-04-25 21:20:26 -03:00
oobabooga
da812600f4
Apply settings regardless of setup() function
2023-04-25 01:16:23 -03:00
da3dsoul
ebca3f86d5
Apply the settings for extensions after import, but before setup() (#1484)
2023-04-25 00:23:11 -03:00
oobabooga
b0ce750d4e
Add spaces
2023-04-25 00:10:21 -03:00
oobabooga
1a0c12c6f2
Refactor text-generation.py a bit
2023-04-24 19:24:12 -03:00
oobabooga
2f4f124132
Remove obsolete function
2023-04-24 13:27:24 -03:00
oobabooga
b6af2e56a2
Add --character flag, add character to settings.json
2023-04-24 13:19:42 -03:00
oobabooga
0c32ae27cc
Only load the default history if it's empty
2023-04-24 11:50:51 -03:00
MajdajkD
c86e9a3372
fix websocket batching (#1511)
2023-04-24 03:51:32 -03:00
eiery
78d1977ebf
add n_batch support for llama.cpp (#1115)
2023-04-24 03:46:18 -03:00
oobabooga
2f6e2ddeac
Bump llama-cpp-python version
2023-04-24 03:42:03 -03:00
oobabooga
caaa556159
Move extensions block definition to the bottom
2023-04-24 03:30:35 -03:00
oobabooga
b1ee674d75
Make interface state (mostly) persistent on page reload
2023-04-24 03:05:47 -03:00
oobabooga
47809e28aa
Minor changes
2023-04-24 01:04:48 -03:00
oobabooga
435f8cc0e7
Simplify some chat functions
2023-04-24 00:47:40 -03:00
Wojtab
04b98a8485
Fix Continue for LLaVA (#1507)
2023-04-23 22:58:15 -03:00
Wojtab
12212cf6be
LLaVA support (#1487)
2023-04-23 20:32:22 -03:00
oobabooga
9197d3fec8
Update Extensions.md
2023-04-23 16:11:17 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints (#990)
Previous title: Add api_streaming extension and update api-example-stream to use it
* Merge with latest main
* Add parameter capturing encoder_repetition_penalty
* Change some defaults, minor fixes
* Add --api, --public-api flags
* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
* Update on_start message for blocking_api, it should say 'non-streaming' and not 'streaming'
* Update the API examples
* Change a comment
* Update README
* Remove the gradio API
* Remove unused import
* Minor change
* Remove unused import
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
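The universal API above exposes a blocking endpoint that accepts a JSON body of a prompt plus generation parameters. A hedged sketch of assembling such a request payload; the endpoint path and field names are assumptions drawn from parameters mentioned in this log (e.g. `encoder_repetition_penalty`), not a documented schema:

```python
def build_generate_payload(prompt, max_new_tokens=200, temperature=0.7,
                           encoder_repetition_penalty=1.0):
    """Assemble a JSON-serializable body for a blocking
    /api/v1/generate-style endpoint (path and keys assumed)."""
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "encoder_repetition_penalty": encoder_repetition_penalty,
    }

payload = build_generate_payload("Hello")
# Sent with e.g.:
#   requests.post("http://localhost:5000/api/v1/generate", json=payload)
```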
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs (#1493)
2023-04-23 12:54:41 -03:00
oobabooga
7ff645899e
Fix bug in api extension
2023-04-22 17:33:36 -03:00
AICatgirls
b992c9236a
Prevent API extension responses from getting cut off with --chat enabled (#1467)
2023-04-22 16:06:43 -03:00