60ae80cf28 | 2023-05-28 23:10:10 -03:00 | Forkoz | Fix hang in tokenizer for AutoGPTQ llama models (#2399)
2f811b1bdf | 2023-05-28 22:48:20 -03:00 | oobabooga | Change a warning message
9ee1e37121 | 2023-05-28 22:46:32 -03:00 | oobabooga | Fix return message when no model is loaded
00ebea0b2a | 2023-05-28 22:34:12 -03:00 | oobabooga | Use YAML for presets and settings
acfd876f29 | 2023-05-25 15:06:22 -03:00 | oobabooga | Some QoL changes to "Perplexity evaluation"
8efdc01ffb | 2023-05-25 15:05:53 -03:00 | oobabooga | Better default for compute_dtype
37d4ad012b | 2023-05-25 11:59:27 -03:00 | oobabooga | Add a button for rendering markdown for any model
cf088566f8 | 2023-05-25 10:29:31 -03:00 | DGdev91 | Make llama.cpp read prompt size and seed from settings (#2299)
361451ba60 | 2023-05-25 01:14:13 -03:00 | oobabooga | Add --load-in-4bit parameter (#2320)
63ce5f9c28 | 2023-05-24 13:54:36 -03:00 | oobabooga | Add back a missing bos token
3cd7c5bdd0 | 2023-05-24 12:43:22 -03:00 | Alex "mcmonkey" Goodwin | LoRA Trainer: train_only_after option to control which part of the input to train on (#2315)
d37a28730d | 2023-05-24 09:38:20 -03:00 | flurb18 | Beginning of multi-user support (#2262): adds a lock to generate_reply
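The multi-user commit above serializes generation behind a lock. A minimal sketch of that pattern in Python (the names here are illustrative; the project's real generate_reply streams tokens and does much more):

```python
import threading

# One process-wide lock: concurrent requests queue up instead of
# running generation on the shared model at the same time.
generation_lock = threading.Lock()

def generate_reply(prompt):
    # Hypothetical stand-in for the actual model call.
    with generation_lock:
        return f"reply to: {prompt}"

# Multiple threads can call generate_reply safely; each waits for
# the lock before touching the shared model state.
threads = [threading.Thread(target=generate_reply, args=(f"user {i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A single coarse lock trades throughput for safety: only one user generates at a time, but the model's internal state can never be mutated by two requests at once.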
7aed53559a | 2023-05-23 20:39:26 -03:00 | Gabriel Terrien | Support for the --gradio-auth flag (#2283)
fb6a00f4e5 | 2023-05-23 15:20:01 -03:00 | oobabooga | Small AutoGPTQ fix
cd3618d7fb | 2023-05-23 02:07:28 -03:00 | oobabooga | Add support for RWKV in Hugging Face format
75adc110d4 | 2023-05-23 01:54:52 -03:00 | oobabooga | Fix "perplexity evaluation" progress messages
4d94a111d4 | 2023-05-23 00:50:58 -03:00 | oobabooga | Memoize load_character to speed up the chat API
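The memoization above can be approximated with functools.lru_cache; this is a hypothetical sketch, not the project's actual load_character implementation:

```python
import functools

@functools.lru_cache(maxsize=None)
def load_character(name):
    # Hypothetical expensive step (the real function reads and parses
    # a character file); repeat calls with the same name skip it.
    return {"name": name, "greeting": f"Hello, I am {name}."}

first = load_character("Assistant")   # computed
second = load_character("Assistant")  # returned from the cache
```

One caveat when caching a mutable return value like this: every caller receives the same dict object, so mutating it in place would affect all later lookups.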
0f51b64bb3 | 2023-05-22 19:45:11 -03:00 | Gabriel Terrien | Add a "dark_theme" option to settings.json (#2288)
c0fd7f3257 | 2023-05-22 19:37:24 -03:00 | oobabooga | Add mirostat parameters for llama.cpp (#2287)
d63ef59a0f | 2023-05-21 23:00:42 -03:00 | oobabooga | Apply LLaMA-Precise preset to Vicuna by default
dcc3e54005 | 2023-05-21 22:54:28 -03:00 | oobabooga | Various "impersonate" fixes
e116d31180 | 2023-05-21 22:42:34 -03:00 | oobabooga | Prevent unwanted log messages from modules
fb91406e93 | 2023-05-21 22:14:50 -03:00 | oobabooga | Fix generation_attempts continuing after an empty reply
e18534fe12 | 2023-05-21 22:05:59 -03:00 | oobabooga | Fix "continue" in chat-instruct mode
8ac3636966 | 2023-05-21 15:11:57 -03:00 | oobabooga | Add epsilon_cutoff/eta_cutoff parameters (#2258)
1e5821bd9e | 2023-05-21 13:25:11 -03:00 | oobabooga | Fix silero TTS autoplay (attempt #2)
a5d5bb9390 | 2023-05-21 12:11:59 -03:00 | oobabooga | Fix silero TTS autoplay
05593a7834 | 2023-05-20 23:22:36 -03:00 | oobabooga | Minor bug fix
ab6acddcc5 | 2023-05-20 21:48:45 -03:00 | Matthew McAllister | Add Save/Delete character buttons (#1870); co-authored by oobabooga
c5af549d4b | 2023-05-20 18:42:17 -03:00 | oobabooga | Add chat API (#2233)
1b52bddfcc | 2023-05-19 14:46:18 -03:00 | Konstantin Gukov | Mitigate UnboundLocalError (#2136)
50c70e28f0 | 2023-05-19 12:58:54 -03:00 | Alex "mcmonkey" Goodwin | LoRA Trainer improvements, part 6: slightly better raw text inputs (#2108)
9d5025f531 | 2023-05-19 11:20:08 -03:00 | oobabooga | Improve error handling while loading GPTQ models
b667ffa51d | 2023-05-17 16:22:56 -03:00 | oobabooga | Simplify GPTQ_loader.py
ef10ffc6b4 | 2023-05-17 16:14:54 -03:00 | oobabooga | Add various checks to model loading functions
abd361b3a0 | 2023-05-17 11:33:43 -03:00 | oobabooga | Minor change
21ecc3701e | 2023-05-17 11:23:13 -03:00 | oobabooga | Avoid a name conflict
fb91c07191 | 2023-05-17 11:16:37 -03:00 | oobabooga | Minor bug fix
1a8151a2b6 | 2023-05-17 11:12:12 -03:00 | oobabooga | Add basic AutoGPTQ support (#2132)
1f50dbe352 | 2023-05-17 10:41:09 -03:00 | Alex "mcmonkey" Goodwin | Experimental jank multi-GPU inference that's 2x faster than native somehow (#2100)
ce21804ec7 | 2023-05-17 01:31:56 -03:00 | oobabooga | Allow extensions to define a new tab
a84f499718 | 2023-05-17 00:30:54 -03:00 | oobabooga | Allow extensions to define custom CSS and JS
7584d46c29 | 2023-05-16 19:52:22 -03:00 | oobabooga | Refactor models.py (#2113)
5cd6dd4287 | 2023-05-16 17:35:49 -03:00 | oobabooga | Fix no-mmap bug
d205ec9706 | 2023-05-16 13:40:19 -03:00 | Forkoz | Fix training failure when an evaluation dataset is selected (#2099); fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
26cf8c2545 | 2023-05-15 20:44:16 -03:00 | atriantafy | Add API port options (#1990)
e657dd342d | 2023-05-15 20:19:55 -03:00 | Andrei | Add in-memory cache support for llama.cpp (#1936)
0227e738ed | 2023-05-15 19:51:23 -03:00 | Jakub Strnad | Add settings UI for llama.cpp and fix reloading of llama.cpp models (#2087)
c07215cc08 | 2023-05-15 19:39:08 -03:00 | oobabooga | Improve the default Assistant character
4e66f68115 | 2023-05-15 19:38:27 -03:00 | oobabooga | Create get_max_memory_dict() function