Commit | Author | Date | Subject
f3c9103e04 | Colin | 2024-04-24 01:09:14 -03:00 | Revert walrus operator for params['max_memory'] (#5878)
9b623b8a78 | oobabooga | 2024-04-23 23:17:05 -03:00 | Bump llama-cpp-python to 0.2.64, use official wheels (#5921)
f27e1ba302 | oobabooga | 2024-04-19 00:24:46 -03:00 | Add a /v1/internal/chat-prompt endpoint (#5879)
e158299fb4 | oobabooga | 2024-04-11 14:50:05 -07:00 | Fix loading sharded GGUF models through llamacpp_HF
fd4e46bce2 | wangshuai09 | 2024-04-11 18:42:20 -03:00 | Add Ascend NPU support (basic) (#5541)
70c637bf90 | Ashley Kleynhans | 2024-04-11 18:19:16 -03:00 | Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794)
3e3a7c4250 | oobabooga | 2024-04-11 14:15:34 -07:00 | Bump llama-cpp-python to 0.2.61 & fix the crash
c423d51a83 | Victorivus | 2024-04-11 02:23:43 -03:00 | Fix issue #5783 for character images with transparency (#5827)
b94cd6754e | Alex O'Connell | 2024-04-11 01:55:02 -03:00 | UI: Respect model and lora directory settings when downloading files (#5842)
17c4319e2d | oobabooga | 2024-04-10 21:39:59 -07:00 | Fix loading command-r context length metadata
cbd65ba767 | oobabooga | 2024-04-09 12:50:16 -03:00 | Add a simple min_p preset, make it the default (#5836)
d02744282b | oobabooga | 2024-04-06 18:56:58 -07:00 | Minor logging change
dd6e4ac55f | oobabooga | 2024-04-06 13:14:32 -07:00 | Prevent double <BOS_TOKEN> with Command R+
1bdceea2d4 | oobabooga | 2024-04-06 12:57:57 -07:00 | UI: Focus on the chat input after starting a new chat
168a0f4f67 | oobabooga | 2024-04-06 12:43:21 -07:00 | UI: do not load the "gallery" extension by default
64a76856bd | oobabooga | 2024-04-06 07:32:17 -07:00 | Metadata: Fix loading Command R+ template with multiple options
1b87844928 | oobabooga | 2024-04-05 18:43:43 -07:00 | Minor fix
6b7f7555fc | oobabooga | 2024-04-05 18:40:02 -07:00 | Logging message to make transformers loader a bit more transparent
0f536dd97d | oobabooga | 2024-04-05 12:18:33 -07:00 | UI: Fix the "Show controls" action
308452b783 | oobabooga | 2024-04-04 18:10:24 -07:00 | Bitsandbytes: load preconverted 4bit models without additional flags
d423021a48 | oobabooga | 2024-04-04 20:23:58 -03:00 | Remove CTransformers support (#5807)
13fe38eb27 | oobabooga | 2024-04-04 16:11:47 -07:00 | Remove specialized code for gpt-4chan
9ab7365b56 | oobabooga | 2024-04-01 20:25:31 -07:00 | Read rope_theta for DBRX model (thanks turboderp)
db5f6cd1d8 | oobabooga | 2024-03-30 21:51:39 -07:00 | Fix ExLlamaV2 loaders using unnecessary "bits" metadata
624faa1438 | oobabooga | 2024-03-30 21:33:16 -07:00 | Fix ExLlamaV2 context length setting (closes #5750)
9653a9176c | oobabooga | 2024-03-29 10:41:24 -07:00 | Minor improvements to Parameters tab
35da6b989d | oobabooga | 2024-03-28 16:45:03 -03:00 | Organize the parameters tab (#5767)
8c9aca239a | Yiximail | 2024-03-26 16:33:09 -03:00 | Fix prompt incorrectly set to empty when suffix is empty string (#5757)
2a92a842ce | oobabooga | 2024-03-26 16:32:20 -03:00 | Bump gradio to 4.23 (#5758)
49b111e2dd | oobabooga | 2024-03-17 08:33:23 -07:00 | Lint
d890c99b53 | oobabooga | 2024-03-14 09:18:54 -07:00 | Fix StreamingLLM when content is removed from the beginning of the prompt
d828844a6f | oobabooga | 2024-03-14 08:56:28 -07:00 | Small fix: don't save truncation_length to settings.yaml
    It should derive from model metadata or from a command-line flag.
2ef5490a36 | oobabooga | 2024-03-13 08:23:16 -07:00 | UI: make light theme less blinding
40a60e0297 | oobabooga | 2024-03-13 08:15:49 -07:00 | Convert attention_sink_size to int (closes #5696)
edec3bf3b0 | oobabooga | 2024-03-13 08:14:34 -07:00 | UI: avoid caching convert_to_markdown calls during streaming
8152152dd6 | oobabooga | 2024-03-11 19:56:35 -07:00 | Small fix after 28076928ac
28076928ac | oobabooga | 2024-03-11 23:41:57 -03:00 | UI: Add a new "User description" field for user personality/biography (#5691)
63701f59cf | oobabooga | 2024-03-11 18:54:15 -07:00 | UI: mention that n_gpu_layers > 0 is necessary for the GPU to be used
46031407b5 | oobabooga | 2024-03-11 18:43:04 -07:00 | Increase the cache size of convert_to_markdown to 4096
9eca197409 | oobabooga | 2024-03-11 16:31:13 -07:00 | Minor logging change
afadc787d7 | oobabooga | 2024-03-10 20:10:07 -07:00 | Optimize the UI by caching convert_to_markdown calls
056717923f | oobabooga | 2024-03-10 19:15:23 -07:00 | Document StreamingLLM
15d90d9bd5 | oobabooga | 2024-03-10 18:20:50 -07:00 | Minor logging change
cf0697936a | oobabooga | 2024-03-08 21:48:28 -08:00 | Optimize StreamingLLM by over 10x
afb51bd5d6 | oobabooga | 2024-03-09 00:25:33 -03:00 | Add StreamingLLM for llamacpp & llamacpp_HF (2nd attempt) (#5669)
549bb88975 | oobabooga | 2024-03-08 12:54:30 -08:00 | Increase height of "Custom stopping strings" UI field
238f69accc | oobabooga | 2024-03-08 12:52:52 -08:00 | Move "Command for chat-instruct mode" to the main chat tab (closes #5634)
bae14c8f13 | oobabooga | 2024-03-07 08:50:24 -08:00 | Right-truncate long chat completion prompts instead of left-truncating
    Instructions are usually at the beginning of the prompt.
104573f7d4 | Bartowski | 2024-03-07 13:08:21 -03:00 | Update cache_4bit documentation (#5649)
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2ec1d96c91 | oobabooga | 2024-03-06 23:02:25 -03:00 | Add cache_4bit option for ExLlamaV2 (#5645)