Commit Graph

3644 Commits

Author SHA1 Message Date
oobabooga
4aa481282b Detect the xwin-lm-70b-v0.1 instruction template 2024-04-24 17:02:20 -07:00
oobabooga
c9b0df16ee Lint 2024-04-24 09:55:00 -07:00
oobabooga
4094813f8d Lint 2024-04-24 09:53:41 -07:00
oobabooga
64e2a9a0a7 Fix the Phi-3 template when used in the UI 2024-04-24 01:34:11 -07:00
oobabooga
f0538efb99 Remove obsolete --tensorcores references 2024-04-24 00:31:28 -07:00
Colin
f3c9103e04 Revert walrus operator for params['max_memory'] (#5878) 2024-04-24 01:09:14 -03:00
Jari Van Melckebeke
c725d97368 nvidia docker: make sure gradio listens on 0.0.0.0 (#5918) 2024-04-23 23:17:55 -03:00
oobabooga
9b623b8a78 Bump llama-cpp-python to 0.2.64, use official wheels (#5921) 2024-04-23 23:17:05 -03:00
Ashley Kleynhans
0877741b03 Bumped ExLlamaV2 to version 0.0.19 to resolve #5851 (#5880) 2024-04-19 19:04:40 -03:00
oobabooga
f27e1ba302 Add a /v1/internal/chat-prompt endpoint (#5879) 2024-04-19 00:24:46 -03:00
oobabooga
b30bce3b2f Bump transformers to 4.40 2024-04-18 16:19:31 -07:00
Philipp Emanuel Weidmann
a0c69749e6 Revert sse-starlette version bump because it breaks API request cancellation (#5873) 2024-04-18 15:05:00 -03:00
mamei16
8985a8538b Fix whisper STT (#5856) 2024-04-14 10:55:58 -03:00
dependabot[bot]
597556cb77 Bump sse-starlette from 1.6.5 to 2.1.0 (#5831) 2024-04-11 18:54:05 -03:00
oobabooga
e158299fb4 Fix loading sharded GGUF models through llamacpp_HF 2024-04-11 14:50:05 -07:00
wangshuai09
fd4e46bce2 Add Ascend NPU support (basic) (#5541) 2024-04-11 18:42:20 -03:00
zaypen
a90509d82e Model downloader: Take HF_ENDPOINT into consideration (#5571) 2024-04-11 18:28:10 -03:00
Ashley Kleynhans
70c637bf90 Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794) 2024-04-11 18:19:16 -03:00
oobabooga
3e3a7c4250 Bump llama-cpp-python to 0.2.61 & fix the crash 2024-04-11 14:15:34 -07:00
oobabooga
5f5ceaf025 Revert "Bump llama-cpp-python to 0.2.61" (reverts commit 3ae61c0338) 2024-04-11 13:24:57 -07:00
dependabot[bot]
bd71a504b8 Update gradio requirement from ==4.25.* to ==4.26.* (#5832) 2024-04-11 02:24:53 -03:00
Victorivus
c423d51a83 Fix issue #5783 for character images with transparency (#5827) 2024-04-11 02:23:43 -03:00
Alex O'Connell
b94cd6754e UI: Respect model and lora directory settings when downloading files (#5842) 2024-04-11 01:55:02 -03:00
oobabooga
17c4319e2d Fix loading command-r context length metadata 2024-04-10 21:39:59 -07:00
oobabooga
3ae61c0338 Bump llama-cpp-python to 0.2.61 2024-04-10 21:39:46 -07:00
oobabooga
cbd65ba767 Add a simple min_p preset, make it the default (#5836) 2024-04-09 12:50:16 -03:00
oobabooga
ed4001e324 Bump ExLlamaV2 to 0.0.18 2024-04-08 18:05:16 -07:00
oobabooga
f6828de3f2 Downgrade llama-cpp-python to 0.2.56 2024-04-07 07:00:12 -07:00
Jared Van Bortel
39ff9c9dcf requirements: add psutil (#5819) 2024-04-06 23:02:20 -03:00
oobabooga
d02744282b Minor logging change 2024-04-06 18:56:58 -07:00
oobabooga
dfb01f9a63 Bump llama-cpp-python to 0.2.60 2024-04-06 18:32:36 -07:00
oobabooga
096f75a432 Documentation: remove obsolete RWKV docs 2024-04-06 14:06:39 -07:00
oobabooga
dd6e4ac55f Prevent double <BOS_TOKEN> with Command R+ 2024-04-06 13:14:32 -07:00
oobabooga
1bdceea2d4 UI: Focus on the chat input after starting a new chat 2024-04-06 12:57:57 -07:00
oobabooga
168a0f4f67 UI: do not load the "gallery" extension by default 2024-04-06 12:43:21 -07:00
oobabooga
64a76856bd Metadata: Fix loading Command R+ template with multiple options 2024-04-06 07:32:17 -07:00
oobabooga
1b87844928 Minor fix 2024-04-05 18:43:43 -07:00
oobabooga
6b7f7555fc Add a logging message to make the transformers loader more transparent 2024-04-05 18:40:02 -07:00
oobabooga
4e739dc211 Add an instruction template for Command R 2024-04-05 18:22:25 -07:00
oobabooga
8a8dbf2f16 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-04-05 12:42:23 -07:00
oobabooga
0f536dd97d UI: Fix the "Show controls" action 2024-04-05 12:18:33 -07:00
dependabot[bot]
a4c67e1974 Bump aqlm[cpu,gpu] from 1.1.2 to 1.1.3 (#5790) 2024-04-05 13:26:49 -03:00
oobabooga
14f6194211 Bump Gradio to 4.25 2024-04-05 09:22:44 -07:00
oobabooga
308452b783 Bitsandbytes: load preconverted 4bit models without additional flags 2024-04-04 18:10:24 -07:00
oobabooga
d423021a48 Remove CTransformers support (#5807) 2024-04-04 20:23:58 -03:00
oobabooga
13fe38eb27 Remove specialized code for gpt-4chan 2024-04-04 16:11:47 -07:00
oobabooga
3952560da8 Bump llama-cpp-python to 0.2.59 2024-04-04 11:20:48 -07:00
oobabooga
9ab7365b56 Read rope_theta for DBRX model (thanks turboderp) 2024-04-01 20:25:31 -07:00
oobabooga
db5f6cd1d8 Fix ExLlamaV2 loaders using unnecessary "bits" metadata 2024-03-30 21:51:39 -07:00
oobabooga
624faa1438 Fix ExLlamaV2 context length setting (closes #5750) 2024-03-30 21:33:16 -07:00