Author | Commit | Message | Date
jllllll | 78dbec4c4e | Add 'scipy' to requirements.txt #2335 (#2343): unlisted dependency of bitsandbytes | 2023-05-25 23:26:25 -03:00
Luis Lopez | 0dbc3d9b2c | Fix get_documents_ids_distances return error when n_results = 0 (#2347) | 2023-05-25 23:25:36 -03:00
jllllll | 07a4f0569f | Update README.md to account for BnB Windows wheel (#2341) | 2023-05-25 18:44:26 -03:00
oobabooga | acfd876f29 | Some QoL changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00
oobabooga | 8efdc01ffb | Better default for compute_dtype | 2023-05-25 15:05:53 -03:00
oobabooga | fc33216477 | Small fix for n_ctx in llama.cpp | 2023-05-25 13:55:51 -03:00
oobabooga | 35009c32f0 | Beautify all CSS | 2023-05-25 13:12:34 -03:00
oobabooga | 231305d0f5 | Update README.md | 2023-05-25 12:05:08 -03:00
oobabooga | 37d4ad012b | Add a button for rendering markdown for any model | 2023-05-25 11:59:27 -03:00
oobabooga | 9a43656a50 | Add bitsandbytes note | 2023-05-25 11:21:52 -03:00
jllllll | b1b3bb6923 | Improve environment isolation (#68) | 2023-05-25 11:15:05 -03:00
oobabooga | c8ce2e777b | Add instructions for CPU mode users | 2023-05-25 10:57:52 -03:00
oobabooga | 996c49daa7 | Remove bitsandbytes installation step: following 548f05e106 | 2023-05-25 10:50:20 -03:00
oobabooga | 548f05e106 | Add Windows bitsandbytes wheel by jllllll | 2023-05-25 10:48:22 -03:00
DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00
Luis Lopez | ee674afa50 | Add superbooga time-weighted history retrieval (#2080) | 2023-05-25 10:22:45 -03:00
oobabooga | a04266161d | Update README.md | 2023-05-25 01:23:46 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
oobabooga | 63ce5f9c28 | Add back a missing BOS token | 2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin | 3cd7c5bdd0 | LoRA Trainer: train_only_after option to control which part of your input to train on (#2315) | 2023-05-24 12:43:22 -03:00
eiery | 9967e08b1f | Update llama-cpp-python to v0.1.53 for ggml v3, fixes #2245 (#2264) | 2023-05-24 10:25:28 -03:00
Gabriel Terrien | e50ade438a | Fix silero_tts/elevenlabs_tts activation/deactivation (#2313) | 2023-05-24 10:06:38 -03:00
Gabriel Terrien | fc116711b0 | Fix save_model_settings function to also update shared.model_config (#2282) | 2023-05-24 10:01:07 -03:00
flurb18 | d37a28730d | Beginning of multi-user support (#2262): adds a lock to generate_reply | 2023-05-24 09:38:20 -03:00
Anthony K | 7dc87984a2 | Fix spelling mistake in new name var of chat API (#2309) | 2023-05-23 23:03:03 -03:00
oobabooga | 1490c0af68 | Remove RWKV from requirements.txt | 2023-05-23 20:49:20 -03:00
Gabriel Terrien | 7aed53559a | Support for the --gradio-auth flag (#2283) | 2023-05-23 20:39:26 -03:00
Atinoda | 4155aaa96a | Add mention of an alternative Docker repository (#2145) | 2023-05-23 20:35:53 -03:00
matatonic | 9714072692 | [extensions/openai] Use instruction templates with chat_completions (#2291) | 2023-05-23 19:58:41 -03:00
oobabooga | 74aae34beb | Allow passing your name to the chat API | 2023-05-23 19:39:18 -03:00
oobabooga | fb6a00f4e5 | Small AutoGPTQ fix | 2023-05-23 15:20:01 -03:00
oobabooga | c2d2ef7c13 | Update Generation-parameters.md | 2023-05-23 02:11:28 -03:00
oobabooga | b0845ae4e8 | Update RWKV-model.md | 2023-05-23 02:10:08 -03:00
oobabooga | cd3618d7fb | Add support for RWKV in Hugging Face format | 2023-05-23 02:07:28 -03:00
oobabooga | 75adc110d4 | Fix "perplexity evaluation" progress messages | 2023-05-23 01:54:52 -03:00
oobabooga | 4d94a111d4 | Memoize load_character to speed up the chat API | 2023-05-23 00:50:58 -03:00
oobabooga | 8b9ba3d7b4 | Fix a typo | 2023-05-22 20:13:03 -03:00
Gabriel Terrien | 0f51b64bb3 | Add a "dark_theme" option to settings.json (#2288) | 2023-05-22 19:45:11 -03:00
oobabooga | c5446ae0e2 | Fix a link | 2023-05-22 19:38:34 -03:00
oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00
oobabooga | ec7437f00a | Better way to toggle light/dark mode | 2023-05-22 03:19:01 -03:00
oobabooga | d46f5a58a3 | Add a button for toggling dark/light mode | 2023-05-22 03:11:44 -03:00
dependabot[bot] | baf75356d4 | Bump transformers from 4.29.1 to 4.29.2 (#2268) | 2023-05-22 02:50:18 -03:00
oobabooga | 4372eb228c | Increase the interface area by 10px | 2023-05-22 00:55:33 -03:00
oobabooga | 753f6c5250 | Attempt at making interface restart more robust | 2023-05-22 00:26:07 -03:00
oobabooga | 30225b9dd0 | Fix --no-stream queue bug | 2023-05-22 00:02:59 -03:00
oobabooga | 288912baf1 | Add a description for the extensions checkbox group | 2023-05-21 23:33:37 -03:00
oobabooga | 6e77844733 | Add a description for penalty_alpha | 2023-05-21 23:09:30 -03:00
oobabooga | d63ef59a0f | Apply LLaMA-Precise preset to Vicuna by default | 2023-05-21 23:00:42 -03:00
oobabooga | e3d578502a | Improve "Chat settings" tab appearance a bit | 2023-05-21 22:58:14 -03:00