Author | Commit | Message | Date
oobabooga | 00b94847da | Remove softprompt support | 2023-06-06 07:42:23 -03:00
oobabooga | 0aebc838a0 | Don't save the history for 'None' character | 2023-06-06 07:21:07 -03:00
oobabooga | 9f215523e2 | Remove some unused imports | 2023-06-06 07:05:46 -03:00
oobabooga | 0f0108ce34 | Never load the history for default character | 2023-06-06 07:00:11 -03:00
oobabooga | 11f38b5c2b | Add AutoGPTQ LoRA support | 2023-06-05 23:32:57 -03:00
oobabooga | 3a5cfe96f0 | Increase chat_prompt_size_max | 2023-06-05 17:37:37 -03:00
oobabooga | f276d88546 | Use AutoGPTQ by default for GPTQ models | 2023-06-05 15:41:48 -03:00
oobabooga | 9b0e95abeb | Fix "regenerate" when "Start reply with" is set | 2023-06-05 11:56:03 -03:00
oobabooga | 19f78684e6 | Add "Start reply with" feature to chat mode | 2023-06-02 13:58:08 -03:00
GralchemOz | f7b07c4705 | Fix the missing Chinese character bug (#2497) | 2023-06-02 13:45:41 -03:00
oobabooga | 2f6631195a | Add desc_act checkbox to the UI | 2023-06-02 01:45:46 -03:00
LaaZa | 9c066601f5 | Extend AutoGPTQ support for any GPTQ model (#1668) | 2023-06-02 01:33:55 -03:00
oobabooga | a83f9aa65b | Update shared.py | 2023-06-01 12:08:39 -03:00
oobabooga | b6c407f51d | Don't stream at more than 24 fps. This is a performance optimization | 2023-05-31 23:41:42 -03:00
Forkoz | 9ab90d8b60 | Fix warning for qlora (#2438) | 2023-05-30 11:09:18 -03:00
oobabooga | 3578dd3611 | Change a warning message | 2023-05-29 22:40:54 -03:00
oobabooga | 3a6e194bc7 | Change a warning message | 2023-05-29 22:39:23 -03:00
Luis Lopez | 9e7204bef4 | Add tail-free and top-a sampling (#2357) | 2023-05-29 21:40:01 -03:00
oobabooga | 1394f44e14 | Add triton checkbox for AutoGPTQ | 2023-05-29 15:32:45 -03:00
oobabooga | f34d20922c | Minor fix | 2023-05-29 13:31:17 -03:00
oobabooga | 983eef1e29 | Attempt at evaluating falcon perplexity (failed) | 2023-05-29 13:28:25 -03:00
Honkware | 204731952a | Falcon support (trust-remote-code and autogptq checkboxes) (#2367). Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com> | 2023-05-29 10:20:18 -03:00
Forkoz | 60ae80cf28 | Fix hang in tokenizer for AutoGPTQ llama models. (#2399) | 2023-05-28 23:10:10 -03:00
oobabooga | 2f811b1bdf | Change a warning message | 2023-05-28 22:48:20 -03:00
oobabooga | 9ee1e37121 | Fix return message when no model is loaded | 2023-05-28 22:46:32 -03:00
oobabooga | 00ebea0b2a | Use YAML for presets and settings | 2023-05-28 22:34:12 -03:00
oobabooga | acfd876f29 | Some qol changes to "Perplexity evaluation" | 2023-05-25 15:06:22 -03:00
oobabooga | 8efdc01ffb | Better default for compute_dtype | 2023-05-25 15:05:53 -03:00
oobabooga | 37d4ad012b | Add a button for rendering markdown for any model | 2023-05-25 11:59:27 -03:00
DGdev91 | cf088566f8 | Make llama.cpp read prompt size and seed from settings (#2299) | 2023-05-25 10:29:31 -03:00
oobabooga | 361451ba60 | Add --load-in-4bit parameter (#2320) | 2023-05-25 01:14:13 -03:00
oobabooga | 63ce5f9c28 | Add back a missing bos token | 2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin | 3cd7c5bdd0 | LoRA Trainer: train_only_after option to control which part of your input to train on (#2315) | 2023-05-24 12:43:22 -03:00
flurb18 | d37a28730d | Beginning of multi-user support (#2262). Adds a lock to generate_reply | 2023-05-24 09:38:20 -03:00
Gabriel Terrien | 7aed53559a | Support of the --gradio-auth flag (#2283) | 2023-05-23 20:39:26 -03:00
oobabooga | fb6a00f4e5 | Small AutoGPTQ fix | 2023-05-23 15:20:01 -03:00
oobabooga | cd3618d7fb | Add support for RWKV in Hugging Face format | 2023-05-23 02:07:28 -03:00
oobabooga | 75adc110d4 | Fix "perplexity evaluation" progress messages | 2023-05-23 01:54:52 -03:00
oobabooga | 4d94a111d4 | memoize load_character to speed up the chat API | 2023-05-23 00:50:58 -03:00
Gabriel Terrien | 0f51b64bb3 | Add a "dark_theme" option to settings.json (#2288) | 2023-05-22 19:45:11 -03:00
oobabooga | c0fd7f3257 | Add mirostat parameters for llama.cpp (#2287) | 2023-05-22 19:37:24 -03:00
oobabooga | d63ef59a0f | Apply LLaMA-Precise preset to Vicuna by default | 2023-05-21 23:00:42 -03:00
oobabooga | dcc3e54005 | Various "impersonate" fixes | 2023-05-21 22:54:28 -03:00
oobabooga | e116d31180 | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00
oobabooga | fb91406e93 | Fix generation_attempts continuing after an empty reply | 2023-05-21 22:14:50 -03:00
oobabooga | e18534fe12 | Fix "continue" in chat-instruct mode | 2023-05-21 22:05:59 -03:00
oobabooga | 8ac3636966 | Add epsilon_cutoff/eta_cutoff parameters (#2258) | 2023-05-21 15:11:57 -03:00
oobabooga | 1e5821bd9e | Fix silero tts autoplay (attempt #2) | 2023-05-21 13:25:11 -03:00
oobabooga | a5d5bb9390 | Fix silero tts autoplay | 2023-05-21 12:11:59 -03:00
oobabooga | 05593a7834 | Minor bug fix | 2023-05-20 23:22:36 -03:00