oobabooga
b284f2407d
Make ExLlama_HF the new default for GPTQ
2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions ( #3029 )
2023-07-13 17:22:41 -03:00
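For context, a hedged sketch of what an extension-side logits processor could look like after the change above; the hook name logits_processor_modifier and its exact signature are assumptions inferred from the PR, so verify them against the extensions documentation.
```
# Hedged sketch of an extension script.py hook; the hook name
# logits_processor_modifier and its signature are assumptions, not
# confirmed by this log -- check the extensions documentation.
from transformers import LogitsProcessor

class BanTokenProcessor(LogitsProcessor):
    def __init__(self, token_id):
        self.token_id = token_id

    def __call__(self, input_ids, scores):
        scores[:, self.token_id] = -float('inf')  # token can never be sampled
        return scores

def logits_processor_modifier(processor_list, input_ids):
    # The webui would call this before generation; extend the list in place.
    processor_list.append(BanTokenProcessor(token_id=0))
    return processor_list
```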
oobabooga
e202190c4f
lint
2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training ( #3058 )
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fixed the tokenization process of a raw dataset and improved its efficiency ( #3035 )
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add TensorBoard/Weights & Biases integration for training ( #2624 )
2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training ( #1997 )
...
* Update utils.py
Returns individual txt files and subdirectories to getdatasets, allowing training from a directory of text files
* Update training.py
Minor tweak to training on raw datasets: detect whether a directory is selected and, if so, load all the txt files in that directory for training
* Update put-trainer-datasets-here.txt
Document the new behavior
* Minor change
* Use pathlib, sort by natural keys (see the sketch below)
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
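As a reference for the pathlib/natural-key bullet above, a minimal sketch (not the repository's actual code) of collecting every txt file in a dataset directory and sorting it so that file2.txt sorts before file10.txt:
```
# Minimal sketch, assuming a flat directory of .txt training files.
import re
from pathlib import Path

def natural_keys(name):
    # Split 'file10.txt' into ['file', 10, '.txt'] so digits compare as numbers
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', name)]

def load_raw_text_dir(dataset_dir):
    paths = sorted(Path(dataset_dir).glob('*.txt'),
                   key=lambda p: natural_keys(p.name))
    return '\n\n'.join(p.read_text(encoding='utf-8') for p in paths)
```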
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection ( #1711 )
2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp ( #3076 )
2023-07-12 11:05:13 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names ( #3051 )
2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown ( #2977 )
2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent ( #3017 )
...
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ ( #2818 )
2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b
Make --model work with argument like models/folder_name
2023-07-08 10:22:54 -07:00
oobabooga
b6643e5039
Add decode functions to llama.cpp/exllama
2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551
Add truncation to exllama
2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37
Minor change to ui.py
2023-07-07 09:09:14 -07:00
oobabooga
de994331a4
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3
Block a Cloudflare request
2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits ( #3036 )
2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py ( #3020 )
...
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830
Remove universal llama tokenizer support
...
Instead replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003
2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0
Fix start_with
2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb
Fix loading some histories (thanks kaiokendev)
2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf ( #2955 )
2023-07-04 01:13:16 -03:00
oobabooga
7e8340b14d
Make greetings appear in --multi-user mode
2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support ( #2991 )
2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names ( #2988 )
2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error ( #2953 )
2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload ( #2951 )
2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info ( #2944 )
2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694
Update html_generator.py ( #2954 )
...
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
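The Pillow change above is a one-line swap of the resampling constant (Image.ANTIALIAS was an alias for the Lanczos filter):
```
# Pillow 10 removed Image.ANTIALIAS; Image.LANCZOS is the same filter.
from PIL import Image

img = Image.open('avatar.png')                 # any input image
thumb = img.resize((64, 64), Image.LANCZOS)    # was: Image.ANTIALIAS
```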
ardfork
3c076c3c80
Disable half2 for ExLlama when using HIP ( #2912 )
2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785
Some more character import tweaks. ( #2921 )
2023-06-29 14:56:25 -03:00
oobabooga
79db629665
Minor bug fix
2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers ( #2916 )
2023-06-29 13:40:13 -03:00
oobabooga
20740ab16e
Revert "Fix exllama_hf gibbersh above 2048 context, and works >5000 context. ( #2913 )"
...
This reverts commit 37a16d23a7.
2023-06-28 18:10:34 -03:00
Panchovix
37a16d23a7
Fix exllama_hf gibberish above 2048 context, and works >5000 context. ( #2913 )
2023-06-28 12:36:07 -03:00
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it ( #2902 )
2023-06-27 18:24:04 -03:00
oobabooga
22d455b072
Add LoRA support to ExLlama_HF
2023-06-26 00:10:33 -03:00
oobabooga
c52290de50
ExLlama with long context ( #2875 )
2023-06-25 22:49:26 -03:00
oobabooga
9290c6236f
Keep ExLlama_HF if already selected
2023-06-25 19:06:28 -03:00
oobabooga
75fd763f99
Fix chat saving issue ( closes #2863 )
2023-06-25 18:14:57 -03:00
FartyPants
21c189112c
Several Training Enhancements ( #2868 )
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f31281a8de
Fix loading instruction templates containing literal '\n'
2023-06-25 02:13:26 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
oobabooga
365b672531
Minor change to prevent future bugs
2023-06-25 01:38:54 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama ( #2770 )
2023-06-24 20:24:17 -03:00
oobabooga
cec5fb0ef6
Failed attempt at evaluating exllama_hf perplexity
2023-06-24 12:02:25 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation ( #2848 )
2023-06-24 11:19:16 -03:00
oobabooga
ec482f3dae
Apply input extensions after yielding *Is typing...*
2023-06-24 11:07:11 -03:00
oobabooga
3e80f2aceb
Apply the output extensions only once
...
Relevant for google translate, silero
2023-06-24 10:59:07 -03:00
missionfloyd
51a388fa34
Organize chat history/character import menu ( #2845 )
...
* Organize character import menu
* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space ( #2847 )
2023-06-24 09:43:00 -03:00
oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
Panchovix
5646690769
Fix some models not loading on exllama_hf ( #2835 )
2023-06-23 11:31:02 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena ( #2830 )
2023-06-23 01:48:29 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf ( #2803 )
2023-06-22 16:05:25 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. ( #2777 )
2023-06-21 15:31:42 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod ( #2781 )
2023-06-20 16:18:42 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
...
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor ( #2765 )
2023-06-19 21:31:19 -03:00
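A hedged sketch of what banning the EOS token through a logits processor means with llama-cpp-python; the model path is hypothetical, and the LogitsProcessorList/logits_processor names should be verified against your installed version of the library:
```
# Hedged sketch; model path is hypothetical, API names per llama-cpp-python.
from llama_cpp import Llama, LogitsProcessorList

llm = Llama(model_path='models/llama-7b.ggmlv3.q4_0.bin')  # hypothetical path

def ban_eos(input_ids, scores):
    scores[llm.token_eos()] = -float('inf')  # EOS can never be sampled
    return scores

out = llm('Tell me a story.',
          logits_processor=LogitsProcessorList([ban_eos]),
          max_tokens=64)
```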
oobabooga
eb30f4441f
Add ExLlama+LoRA support ( #2756 )
2023-06-19 12:31:24 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer ( #2753 )
2023-06-18 18:14:06 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
...
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter ( #2744 )
2023-06-18 13:26:30 -03:00
oobabooga
c5641b65d3
Handle leading spaces properly in ExLlama
2023-06-17 19:35:12 -03:00
oobabooga
05a743d6ad
Make llama.cpp use tfs parameter
2023-06-17 19:08:25 -03:00
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff
Fix repeated tokens with exllama
2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7
Use gen_begin_reuse in exllama
2023-06-17 18:00:10 -03:00
oobabooga
b27f83c0e9
Make exllama stoppable
2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
...
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
9f40032d32
Add ExLlama support ( #2444 )
2023-06-16 20:35:38 -03:00
oobabooga
dea43685b0
Add some clarifications
2023-06-16 19:10:53 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely ( #2720 )
2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP ( #2648 )
2023-06-15 23:59:54 -03:00
oobabooga
2b9a6b9259
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-14 18:45:24 -03:00
oobabooga
4d508cbe58
Add some checks to AutoGPTQ loader
2023-06-14 18:44:43 -03:00
FartyPants
56c19e623c
Add LORA name instead of "default" in PeftModel ( #2689 )
2023-06-14 18:29:42 -03:00
oobabooga
474dc7355a
Allow API requests to use parameter presets
2023-06-14 11:32:20 -03:00
oobabooga
e471919e6d
Make llava/minigpt-4 work with AutoGPTQ
2023-06-11 17:56:01 -03:00
oobabooga
f4defde752
Add a menu for installing extensions
2023-06-11 17:11:06 -03:00
oobabooga
ac122832f7
Make dropdown menus more similar to automatic1111
2023-06-11 14:20:16 -03:00
oobabooga
6133675e0f
Add menus for saving presets/characters/instruction templates/prompts ( #2621 )
2023-06-11 12:19:18 -03:00
brandonj60
b04e18d10c
Add Mirostat v2 sampling to transformer models ( #2571 )
2023-06-09 21:26:31 -03:00
oobabooga
6015616338
Style changes
2023-06-06 13:06:05 -03:00
oobabooga
f040073ef1
Handle the case of older autogptq install
2023-06-06 13:05:05 -03:00
oobabooga
bc58dc40bd
Fix a minor bug
2023-06-06 12:57:13 -03:00
oobabooga
00b94847da
Remove softprompt support
2023-06-06 07:42:23 -03:00
oobabooga
0aebc838a0
Don't save the history for 'None' character
2023-06-06 07:21:07 -03:00
oobabooga
9f215523e2
Remove some unused imports
2023-06-06 07:05:46 -03:00
oobabooga
0f0108ce34
Never load the history for default character
2023-06-06 07:00:11 -03:00
oobabooga
11f38b5c2b
Add AutoGPTQ LoRA support
2023-06-05 23:32:57 -03:00
oobabooga
3a5cfe96f0
Increase chat_prompt_size_max
2023-06-05 17:37:37 -03:00
oobabooga
f276d88546
Use AutoGPTQ by default for GPTQ models
2023-06-05 15:41:48 -03:00
oobabooga
9b0e95abeb
Fix "regenerate" when "Start reply with" is set
2023-06-05 11:56:03 -03:00
oobabooga
19f78684e6
Add "Start reply with" feature to chat mode
2023-06-02 13:58:08 -03:00
GralchemOz
f7b07c4705
Fix the missing Chinese character bug ( #2497 )
2023-06-02 13:45:41 -03:00
oobabooga
2f6631195a
Add desc_act checkbox to the UI
2023-06-02 01:45:46 -03:00
LaaZa
9c066601f5
Extend AutoGPTQ support for any GPTQ model ( #1668 )
2023-06-02 01:33:55 -03:00
oobabooga
a83f9aa65b
Update shared.py
2023-06-01 12:08:39 -03:00
oobabooga
b6c407f51d
Don't stream at more than 24 fps
...
This is a performance optimization
2023-05-31 23:41:42 -03:00
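A minimal sketch of the throttling idea behind the 24 fps cap above (not the repository's actual implementation), assuming a generator that yields progressively longer partial replies:
```
# Minimal sketch: drop UI updates that arrive within 1/24 s of the last one.
import time

def throttle(generator, max_fps=24):
    min_interval = 1.0 / max_fps
    last = 0.0
    chunk = None
    for chunk in generator:
        now = time.time()
        if now - last >= min_interval:
            last = now
            yield chunk
    if chunk is not None:
        yield chunk  # always emit the final state, even if it repeats
```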
Forkoz
9ab90d8b60
Fix warning for qlora ( #2438 )
2023-05-30 11:09:18 -03:00
oobabooga
3578dd3611
Change a warning message
2023-05-29 22:40:54 -03:00
oobabooga
3a6e194bc7
Change a warning message
2023-05-29 22:39:23 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling ( #2357 )
2023-05-29 21:40:01 -03:00
oobabooga
1394f44e14
Add triton checkbox for AutoGPTQ
2023-05-29 15:32:45 -03:00
oobabooga
f34d20922c
Minor fix
2023-05-29 13:31:17 -03:00
oobabooga
983eef1e29
Attempt at evaluating falcon perplexity (failed)
2023-05-29 13:28:25 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) ( #2367 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
Forkoz
60ae80cf28
Fix hang in tokenizer for AutoGPTQ llama models. ( #2399 )
2023-05-28 23:10:10 -03:00
oobabooga
2f811b1bdf
Change a warning message
2023-05-28 22:48:20 -03:00
oobabooga
9ee1e37121
Fix return message when no model is loaded
2023-05-28 22:46:32 -03:00
oobabooga
00ebea0b2a
Use YAML for presets and settings
2023-05-28 22:34:12 -03:00
oobabooga
acfd876f29
Some qol changes to "Perplexity evaluation"
2023-05-25 15:06:22 -03:00
oobabooga
8efdc01ffb
Better default for compute_dtype
2023-05-25 15:05:53 -03:00
oobabooga
37d4ad012b
Add a button for rendering markdown for any model
2023-05-25 11:59:27 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
oobabooga
63ce5f9c28
Add back a missing bos token
2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on ( #2315 )
2023-05-24 12:43:22 -03:00
flurb18
d37a28730d
Beginning of multi-user support ( #2262 )
...
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag ( #2283 )
2023-05-23 20:39:26 -03:00
oobabooga
fb6a00f4e5
Small AutoGPTQ fix
2023-05-23 15:20:01 -03:00
oobabooga
cd3618d7fb
Add support for RWKV in Hugging Face format
2023-05-23 02:07:28 -03:00
oobabooga
75adc110d4
Fix "perplexity evaluation" progress messages
2023-05-23 01:54:52 -03:00
oobabooga
4d94a111d4
memoize load_character to speed up the chat API
2023-05-23 00:50:58 -03:00
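The memoization above can be pictured with functools.lru_cache; the signature here is simplified for illustration and is not the real function's:
```
# Sketch only: repeated chat API calls reuse the cached character data.
import functools

@functools.lru_cache(maxsize=None)
def load_character(character):
    print(f'loading {character} from disk...')  # runs once per character
    return {'name': character}                  # stand-in for the parsed data
```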
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json ( #2288 )
2023-05-22 19:45:11 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp ( #2287 )
2023-05-22 19:37:24 -03:00
oobabooga
d63ef59a0f
Apply LLaMA-Precise preset to Vicuna by default
2023-05-21 23:00:42 -03:00
oobabooga
dcc3e54005
Various "impersonate" fixes
2023-05-21 22:54:28 -03:00
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
oobabooga
fb91406e93
Fix generation_attempts continuing after an empty reply
2023-05-21 22:14:50 -03:00
oobabooga
e18534fe12
Fix "continue" in chat-instruct mode
2023-05-21 22:05:59 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters ( #2258 )
2023-05-21 15:11:57 -03:00
oobabooga
1e5821bd9e
Fix silero tts autoplay (attempt #2 )
2023-05-21 13:25:11 -03:00
oobabooga
a5d5bb9390
Fix silero tts autoplay
2023-05-21 12:11:59 -03:00
oobabooga
05593a7834
Minor bug fix
2023-05-20 23:22:36 -03:00
Matthew McAllister
ab6acddcc5
Add Save/Delete character buttons ( #1870 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
oobabooga
c5af549d4b
Add chat API ( #2233 )
2023-05-20 18:42:17 -03:00
Konstantin Gukov
1b52bddfcc
Mitigate UnboundLocalError ( #2136 )
2023-05-19 14:46:18 -03:00
Alex "mcmonkey" Goodwin
50c70e28f0
Lora Trainer improvements, part 6 - slightly better raw text inputs ( #2108 )
2023-05-19 12:58:54 -03:00
oobabooga
9d5025f531
Improve error handling while loading GPTQ models
2023-05-19 11:20:08 -03:00
oobabooga
b667ffa51d
Simplify GPTQ_loader.py
2023-05-17 16:22:56 -03:00
oobabooga
ef10ffc6b4
Add various checks to model loading functions
2023-05-17 16:14:54 -03:00
oobabooga
abd361b3a0
Minor change
2023-05-17 11:33:43 -03:00
oobabooga
21ecc3701e
Avoid a name conflict
2023-05-17 11:23:13 -03:00
oobabooga
fb91c07191
Minor bug fix
2023-05-17 11:16:37 -03:00
oobabooga
1a8151a2b6
Add AutoGPTQ support (basic) ( #2132 )
2023-05-17 11:12:12 -03:00
Alex "mcmonkey" Goodwin
1f50dbe352
Experimental jank multiGPU inference that's 2x faster than native somehow ( #2100 )
2023-05-17 10:41:09 -03:00
oobabooga
ce21804ec7
Allow extensions to define a new tab
2023-05-17 01:31:56 -03:00
oobabooga
a84f499718
Allow extensions to define custom CSS and JS
2023-05-17 00:30:54 -03:00
oobabooga
7584d46c29
Refactor models.py ( #2113 )
2023-05-16 19:52:22 -03:00
oobabooga
5cd6dd4287
Fix no-mmap bug
2023-05-16 17:35:49 -03:00
Forkoz
d205ec9706
Fix "Training fails when evaluation dataset is selected" ( #2099 )
...
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
atriantafy
26cf8c2545
add api port options ( #1990 )
2023-05-15 20:44:16 -03:00
Andrei
e657dd342d
Add in-memory cache support for llama.cpp ( #1936 )
2023-05-15 20:19:55 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fixed reloading of llama.cpp models ( #2087 )
2023-05-15 19:51:23 -03:00
oobabooga
c07215cc08
Improve the default Assistant character
2023-05-15 19:39:08 -03:00
oobabooga
4e66f68115
Create get_max_memory_dict() function
2023-05-15 19:38:27 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option ( #2060 )
2023-05-14 22:58:11 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode ( #2049 )
2023-05-14 10:43:55 -03:00
oobabooga
df37ba5256
Update impersonate_wrapper
2023-05-12 12:59:48 -03:00
oobabooga
e283ddc559
Change how spaces are handled in continue/generation attempts
2023-05-12 12:50:29 -03:00
oobabooga
2eeb27659d
Fix bug in --cpu-memory
2023-05-12 06:17:07 -03:00
oobabooga
5eaa914e1b
Fix settings.json being ignored because of config.yaml
2023-05-12 06:09:45 -03:00
oobabooga
71693161eb
Better handle spaces in LlamaTokenizer
2023-05-11 17:55:50 -03:00
oobabooga
7221d1389a
Fix a bug
2023-05-11 17:11:10 -03:00
oobabooga
0d36c18f5d
Always return only the new tokens in generation functions
2023-05-11 17:07:20 -03:00
oobabooga
394bb253db
Syntax improvement
2023-05-11 16:27:50 -03:00
oobabooga
f7dbddfff5
Add a variable for tts extensions to use
2023-05-11 16:12:46 -03:00
oobabooga
638c6a65a2
Refactor chat functions ( #2003 )
2023-05-11 15:37:04 -03:00
oobabooga
b7a589afc8
Improve the Metharme prompt
2023-05-10 16:09:32 -03:00
oobabooga
b01c4884cb
Better stopping strings for instruct mode
2023-05-10 14:22:38 -03:00
oobabooga
6a4783afc7
Add markdown table rendering
2023-05-10 13:41:23 -03:00
oobabooga
3316e33d14
Remove unused code
2023-05-10 11:59:59 -03:00
Alexander Dibrov
ec14d9b725
Fix custom_generate_chat_prompt ( #1965 )
2023-05-10 11:29:59 -03:00
oobabooga
32481ec4d6
Fix prompt order in the dropdown
2023-05-10 02:24:09 -03:00
oobabooga
dfd9ba3e90
Remove duplicate code
2023-05-10 02:07:22 -03:00
oobabooga
bdf1274b5d
Remove duplicate code
2023-05-10 01:34:04 -03:00
oobabooga
3913155c1f
Style improvements ( #1957 )
2023-05-09 22:49:39 -03:00
minipasila
334486f527
Added instruction-following template for Metharme ( #1679 )
2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following ( #1596 )
2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) ( #1741 )
2023-05-09 20:18:02 -03:00
Wesley Pyburn
a2b25322f0
Fix trust_remote_code in wrong location ( #1953 )
2023-05-09 19:22:10 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model ( #1762 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Maks
cf6caf1830
Make the RWKV model cache the RNN state between messages ( #1354 )
2023-05-09 11:12:53 -03:00
Kamil Szurant
641500dcb9
Use current input for Impersonate (continue impersonate feature) ( #1147 )
2023-05-09 02:37:42 -03:00
IJumpAround
020fe7b50b
Remove mutable defaults from function signature. ( #1663 )
2023-05-08 22:55:41 -03:00
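For context, the Python pitfall the commit above removes: a mutable default argument is created once at definition time and silently shared between calls.
```
# The bug pattern and its idiomatic fix.
def add_message(msg, history=[]):          # bug: one list shared by all calls
    history.append(msg)
    return history

def add_message_fixed(msg, history=None):  # fix: create a fresh list per call
    if history is None:
        history = []
    history.append(msg)
    return history
```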
Matthew McAllister
d78b04f0b4
Add error message when GPTQ-for-LLaMa import fails ( #1871 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-08 22:29:09 -03:00
oobabooga
68dcbc7ebd
Fix chat history handling in instruct mode
2023-05-08 16:41:21 -03:00
Clay Shoaf
79ac94cc2f
fixed LoRA loading issue ( #1865 )
2023-05-08 16:21:55 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles ( #1917 )
2023-05-08 12:35:03 -03:00
EgrorBs
d3ea70f453
More trust_remote_code=trust_remote_code ( #1899 )
2023-05-07 23:48:20 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes ( #1896 )
2023-05-07 23:47:02 -03:00
oobabooga
9754d6a811
Fix an error message
2023-05-07 17:44:05 -03:00
camenduru
ba65a48ec8
trust_remote_code=shared.args.trust_remote_code ( #1891 )
2023-05-07 17:42:44 -03:00
oobabooga
6b67cb6611
Generalize superbooga to chat mode
2023-05-07 15:05:26 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions ( #1817 )
2023-05-05 18:53:03 -03:00
oobabooga
c728f2b5f0
Better handle new line characters in code blocks
2023-05-05 11:22:36 -03:00
oobabooga
00e333d790
Add MOSS support
2023-05-04 23:20:34 -03:00
oobabooga
f673f4a4ca
Change --verbose behavior
2023-05-04 15:56:06 -03:00
oobabooga
97a6a50d98
Use oasst tokenizer instead of universal tokenizer
2023-05-04 15:55:39 -03:00
oobabooga
b6ff138084
Add --checkpoint argument for GPTQ
2023-05-04 15:17:20 -03:00
Mylo
bd531c2dc2
Make --trust-remote-code work for all models ( #1772 )
2023-05-04 02:01:28 -03:00
oobabooga
0e6d17304a
Clearer syntax for instruction-following characters
2023-05-03 22:50:39 -03:00
oobabooga
9c77ab4fc2
Improve some warnings
2023-05-03 22:06:46 -03:00
oobabooga
057b1b2978
Add credits
2023-05-03 21:49:55 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
oobabooga
f54256e348
Rename no_mmap to no-mmap
2023-05-03 09:50:31 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token ( #1678 )
...
Previously, padding used the character "0" rather than token id 0 (<unk> in the case of llama)
2023-05-02 23:16:08 -03:00
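The distinction above in two lines, with a hypothetical model path; the string '0' tokenizes to its own ids, while the integer 0 is llama's <unk> token id:
```
# Hedged illustration; the model path is hypothetical.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('models/llama-7b')
print(tok('0').input_ids)  # ids for the character "0", not token id 0
tok.pad_token_id = 0       # pad with token id 0, i.e. <unk> for llama
```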
Wojtab
80c2f25131
LLaVA: small fixes ( #1664 )
...
* change multimodal projector to the correct one
* remove reference to custom stopping strings from readme
* fix stopping strings if tokenizer extension adds/removes tokens
* add API example
* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
oobabooga
4e09df4034
Only show extension in UI if it has a ui() function
2023-05-02 19:20:02 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative ( #1649 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input ( #1707 )
2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing ( #1625 )
2023-05-02 01:21:54 -03:00
oobabooga
c83210c460
Move the rstrips
2023-04-26 17:17:22 -03:00
oobabooga
1d8b8222e9
Revert #1579 , apply the proper fix
...
Apparently models dislike trailing spaces.
2023-04-26 16:47:50 -03:00
oobabooga
9c2e7c0fab
Fix path on models.py
2023-04-26 03:29:09 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7
Fix bug in models.py
2023-04-26 01:55:40 -03:00
oobabooga
f642135517
Make universal tokenizer, xformers, sdp-attention apply to monkey patch
2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14
Load more than one LoRA with --lora, fix a bug
2023-04-25 22:58:48 -03:00
oobabooga
15940e762e
Fix missing initial space for LlamaTokenizer
2023-04-25 22:47:23 -03:00
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) ( #1535 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 ( #1546 )
...
* full dynamic model type support on modern peft
* remove shuffle option
2023-04-25 21:27:30 -03:00
oobabooga
9b272bc8e5
Monkey patch fixes
2023-04-25 21:20:26 -03:00
oobabooga
da812600f4
Apply settings regardless of setup() function
2023-04-25 01:16:23 -03:00
da3dsoul
ebca3f86d5
Apply the settings for extensions after import, but before setup() ( #1484 )
2023-04-25 00:23:11 -03:00
oobabooga
b0ce750d4e
Add spaces
2023-04-25 00:10:21 -03:00
oobabooga
1a0c12c6f2
Refactor text-generation.py a bit
2023-04-24 19:24:12 -03:00
oobabooga
2f4f124132
Remove obsolete function
2023-04-24 13:27:24 -03:00
oobabooga
b6af2e56a2
Add --character flag, add character to settings.json
2023-04-24 13:19:42 -03:00
oobabooga
0c32ae27cc
Only load the default history if it's empty
2023-04-24 11:50:51 -03:00
eiery
78d1977ebf
add n_batch support for llama.cpp ( #1115 )
2023-04-24 03:46:18 -03:00
oobabooga
b1ee674d75
Make interface state (mostly) persistent on page reload
2023-04-24 03:05:47 -03:00
oobabooga
435f8cc0e7
Simplify some chat functions
2023-04-24 00:47:40 -03:00
Wojtab
12212cf6be
LLaVA support ( #1487 )
2023-04-23 20:32:22 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints ( #990 )
...
Previous title: Add api_streaming extension and update api-example-stream to use it
* Merge with latest main
* Add parameter capturing encoder_repetition_penalty
* Change some defaults, minor fixes
* Add --api, --public-api flags
* remove unneeded/broken comment from blocking API startup. The comment is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.
* Update on_start message for blocking_api; it should say 'non-streaming' and not 'streaming'
* Update the API examples
* Change a comment
* Update README
* Remove the gradio API
* Remove unused import
* Minor change
* Remove unused import
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs ( #1493 )
2023-04-23 12:54:41 -03:00
oobabooga
c0b5c09860
Minor change
2023-04-22 15:15:31 -03:00
oobabooga
fcb594b90e
Don't require llama.cpp models to be placed in subfolders
2023-04-22 14:56:48 -03:00
oobabooga
7438f4f6ba
Change GPTQ triton default settings
2023-04-22 12:27:30 -03:00
USBhost
e1aa9d5173
Support upstream GPTQ once again. ( #1451 )
2023-04-21 12:43:56 -03:00
oobabooga
eddd016449
Minor deletion
2023-04-21 12:41:27 -03:00
oobabooga
d46b9b7c50
Fix evaluate comment saving
2023-04-21 12:34:08 -03:00
oobabooga
5e023ae64d
Change dropdown menu highlight color
2023-04-21 02:47:18 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models ( #1322 )
2023-04-21 00:20:33 -03:00
oobabooga
7bb9036ac9
Add universal LLaMA tokenizer support
2023-04-19 21:23:51 -03:00
Alex "mcmonkey" Goodwin
ee30625cd1
4-Bit LoRA training + several new training options and fixes
2023-04-19 19:39:03 -03:00
oobabooga
702fe92d42
Increase truncation_length_max value
2023-04-19 17:35:38 -03:00
oobabooga
9d9ae62938
Fix stopping strings in the gradio API
2023-04-19 13:52:21 -03:00
oobabooga
649e4017a5
Style improvements
2023-04-19 00:36:28 -03:00
oobabooga
000f65a2ef
Delete unused file
2023-04-18 04:01:14 -03:00
oobabooga
36f7c022f2
Rename a file
2023-04-18 01:38:33 -03:00
oobabooga
b069bb1f2e
Update monkey_patch_gradio.py
2023-04-18 01:32:42 -03:00
oobabooga
00186f76f4
Monkey patch gradio to prevent it from calling home
2023-04-18 01:13:16 -03:00
Tynan Burke
6a810b16b2
typo in training.py ( #1329 )
2023-04-17 21:40:46 -03:00
oobabooga
ac2973ffc6
Add a warning for --share
2023-04-17 19:34:28 -03:00
oobabooga
c544386824
Reset your name when choosing a character
2023-04-17 13:56:40 -03:00
oobabooga
c3dc348d1c
Don't show 'None' in the LoRA list
2023-04-17 13:52:23 -03:00
oobabooga
89bc540557
Update README
2023-04-17 10:55:35 -03:00
catalpaaa
07de7d0426
Load llamacpp before quantized model ( #1307 )
2023-04-17 10:47:26 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef ( #1229 )
2023-04-17 01:11:18 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support ( #1200 )
2023-04-16 23:26:52 -03:00
oobabooga
46a8aa8c09
Readability
2023-04-16 21:26:19 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support ( #1256 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
6a03ad0824
Remove fix_newlines() calls from chat.py
2023-04-16 18:25:44 -03:00
oobabooga
5342f72968
Properly handle blockquote blocks
2023-04-16 18:00:12 -03:00
oobabooga
27f3a78834
Better detect when no model is loaded
2023-04-16 17:35:54 -03:00
oobabooga
c8ad960018
Add defaults to the gradio API
2023-04-16 17:33:28 -03:00
oobabooga
beb95f5fe2
Add a style for the "chat" mode
2023-04-16 16:44:50 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model ( #1218 )
2023-04-16 14:24:49 -03:00
oobabooga
b705b4210c
Minor changes to training.py
2023-04-16 03:08:37 -03:00
oobabooga
5c513a5f5c
Make training.py more readable
2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin
a3eec62b50
Lora trainer improvements part 3 ( #1098 )
...
* add support for other model types
dependent on future-peft-changes but with fallback to function now
* use encoding=utf8 for training format
* make shuffling optional
and describe dropout a bit more
* add eval_steps to control evaluation
* make callbacks not depend on globals
* make save steps controllable
* placeholder of initial loading-existing-model support
and var name cleanup
* save/load parameters
* last bit of cleanup
* remove `gptq_bits` ref as main branch removed that setting
* add higher_rank_limit option
2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model.
Note that it's in the do_train input just to save it as a parameter
* fix math on save_steps
2023-04-16 02:35:13 -03:00
kernyan
ac19d5101f
revert incorrect eos_token_id change from #814 ( #1261 )
...
- fixes #1054
2023-04-16 01:47:01 -03:00
oobabooga
a2127239de
Fix a bug
2023-04-16 01:41:37 -03:00
oobabooga
9d3c6d2dc3
Fix a bug
2023-04-16 01:40:47 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
...
* Remove eval of API input
* Remove unnecessary eval/exec for security
* Use ast.literal_eval (see the sketch below)
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
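The security fix above in miniature: eval() executes arbitrary code, while ast.literal_eval() only accepts Python literals and raises on anything else.
```
import ast

payload = "{'max_new_tokens': 200, 'temperature': 0.7}"
params = ast.literal_eval(payload)  # safe: parses the dict literal only

try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except ValueError:
    print('rejected')  # eval() would have executed the payload instead
```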
oobabooga
d2ea925fa5
Bump llama-cpp-python to use LlamaCache
2023-04-16 00:53:40 -03:00
oobabooga
ac189011cb
Add "Save current settings for this model" button
2023-04-15 12:54:02 -03:00
oobabooga
abef355ed0
Remove deprecated flag
2023-04-15 01:21:19 -03:00
oobabooga
c3aa79118e
Minor generate_chat_prompt simplification
2023-04-14 23:02:08 -03:00
oobabooga
3a337cfded
Use argparse defaults
2023-04-14 15:35:06 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support ( #1103 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
1901d238e1
Minor change to API code
2023-04-14 12:11:47 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name ( #1167 )
2023-04-14 11:07:28 -03:00
v0xie
9d66957207
Add --listen-host launch option ( #1122 )
2023-04-13 21:35:08 -03:00
oobabooga
a75e02de4d
Simplify GPTQ_loader.py
2023-04-13 12:13:07 -03:00
oobabooga
ca293bb713
Show a warning if two quantized models are found
2023-04-13 12:04:27 -03:00
oobabooga
8b482b4127
Merge #1073 from sgsdxzy/triton
...
* Multi-GPU support for triton
* Better quantized model filename detection
2023-04-13 11:31:21 -03:00
oobabooga
fde6d06167
Prioritize names with the groupsize in them
2023-04-13 11:27:03 -03:00
oobabooga
f2bf1a2c9e
Add some comments, remove obsolete code
2023-04-13 11:17:32 -03:00
Light
da74cd7c44
Generalized weight search path.
2023-04-13 21:43:32 +08:00
oobabooga
04866dc4fc
Add a warning for when no model is loaded
2023-04-13 10:35:08 -03:00
Light
cf58058c33
Change warmup_autotune to a negative switch.
2023-04-13 20:59:49 +08:00
Light
15d5a043f2
Merge remote-tracking branch 'origin/main' into triton
2023-04-13 19:38:51 +08:00
oobabooga
7dfbe54f42
Add --model-menu option
2023-04-12 21:24:26 -03:00
oobabooga
388038fb8e
Update settings-template.json
2023-04-12 18:30:43 -03:00
oobabooga
10e939c9b4
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-04-12 17:21:59 -03:00
oobabooga
1566d8e344
Add model settings to the Models tab
2023-04-12 17:20:18 -03:00
Light
a405064ceb
Better dispatch.
2023-04-13 01:48:17 +08:00
Light
f3591ccfa1
Keep minimal change.
2023-04-12 23:26:06 +08:00
Lukas
5ad92c940e
lora training fixes: ( #970 )
...
Fix wrong input format being picked
Fix crash when an entry in the dataset has an attribute of value None
2023-04-12 11:38:01 -03:00
oobabooga
80f4eabb2a
Fix send_pictures extension
2023-04-12 10:27:06 -03:00
oobabooga
8265d45db8
Add send dummy message/reply buttons
...
Useful for starting a new reply.
2023-04-11 22:21:41 -03:00
oobabooga
37d52c96bc
Fix Continue in chat mode
2023-04-11 21:46:17 -03:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token
2023-04-11 18:46:06 -03:00
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes ( #903 )
2023-04-11 12:30:06 -03:00
oobabooga
0f212093a3
Refactor the UI
...
A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.
The goal is to make it easier to add new elements to the UI.
2023-04-11 11:46:30 -03:00
IggoOnCode
09d8119e3c
Add CPU LoRA training ( #938 )
...
(It's very slow)
2023-04-10 17:29:00 -03:00
Alex "mcmonkey" Goodwin
0caf718a21
add on-page documentation to parameters ( #1008 )
2023-04-10 17:19:12 -03:00
oobabooga
bd04ff27ad
Make the bos token optional
2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1
Don't treat Instruct mode histories as regular histories
...
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
oobabooga
769aa900ea
Print the used seed
2023-04-10 10:53:31 -03:00
Alex "mcmonkey" Goodwin
30befe492a
fix random seeds to actually randomize
...
Without this fix, manual seeds get locked in.
2023-04-10 06:29:10 -07:00
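A minimal sketch of the seeding pattern the fix above restores, assuming -1 means "random seed"; without re-randomizing, a previously set manual seed stays locked in:
```
# Sketch of the intended behavior; not the repository's exact code.
import random

def resolve_seed(seed):
    if seed == -1:
        seed = random.randint(1, 2**31)  # pick a fresh seed every generation
    # torch.manual_seed(seed) would follow here in the real code path
    return seed
```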
oobabooga
1911504f82
Minor bug fix
2023-04-09 23:45:41 -03:00
oobabooga
dba2000d2b
Do things that I am not proud of
2023-04-09 23:40:49 -03:00
oobabooga
65552d2157
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-04-09 23:19:53 -03:00
oobabooga
8c6155251a
More robust 4-bit model loading
2023-04-09 23:19:28 -03:00
MarkovInequality
992663fa20
Added xformers support to Llama ( #950 )
2023-04-09 23:08:40 -03:00
Brian O'Connor
625d81f495
Update character log logic ( #977 )
...
* When logs are cleared, save the cleared log over the old log files
* Generate a log file when a character is loaded the first time
2023-04-09 22:20:21 -03:00
oobabooga
a3085dba07
Fix LlamaTokenizer eos_token (attempt)
2023-04-09 21:19:39 -03:00
oobabooga
120f5662cf
Better handle spaces for Continue
2023-04-09 20:37:31 -03:00
oobabooga
b27d757fd1
Minor change
2023-04-09 20:06:20 -03:00
oobabooga
d29f4624e9
Add a Continue button to chat mode
2023-04-09 20:04:16 -03:00
oobabooga
cc693a7546
Remove obsolete code
2023-04-09 00:51:07 -03:00
oobabooga
cb169d0834
Minor formatting changes
2023-04-08 17:34:07 -03:00
oobabooga
0b458bf82d
Simplify a function
2023-04-07 21:37:41 -03:00
Φφ
ffd102e5c0
SD Api Pics extension, v.1.1 ( #596 )
2023-04-07 21:36:04 -03:00
oobabooga
1dc464dcb0
Sort imports
2023-04-07 14:42:03 -03:00
oobabooga
42ea6a3fc0
Change the timing for setup() calls
2023-04-07 12:20:57 -03:00
oobabooga
768354239b
Change training file encoding
2023-04-07 11:15:52 -03:00
oobabooga
6762e62a40
Simplifications
2023-04-07 11:14:32 -03:00
oobabooga
a453d4e9c4
Reorganize some chat functions
2023-04-07 11:07:03 -03:00
Maya
8fa182cfa7
Fix regeneration of first message in instruct mode ( #881 )
2023-04-07 10:45:42 -03:00
oobabooga
46c4654226
More PEP8 stuff
2023-04-07 00:52:02 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability ( #862 )
2023-04-07 00:15:45 -03:00
OWKenobi
310bf46a94
Instruction Character Vicuna, Instruction Mode Bugfix ( #838 )
2023-04-06 17:40:44 -03:00
oobabooga
113f94b61e
Bump transformers (16-bit llama must be reconverted/redownloaded)
2023-04-06 16:04:03 -03:00
oobabooga
03cb44fc8c
Add new llama.cpp library (2048 context, temperature, etc now work)
2023-04-06 13:12:14 -03:00
EyeDeck
39f3fec913
Broaden GPTQ-for-LLaMA branch support ( #820 )
2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
Lora trainer improvements ( #763 )
2023-04-06 02:04:11 -03:00
oobabooga
e94ab5dac1
Minor fixes
2023-04-06 01:43:10 -03:00
oobabooga
3f3e42e26c
Refactor several function calls and the API
2023-04-06 01:22:15 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset ( #767 )
2023-04-05 18:52:36 -03:00
Forkoz
8203ce0cac
Stop character pic from being cached when changing chars or clearing. ( #798 )
...
Tested on both FF and chromium
2023-04-05 14:25:01 -03:00
oobabooga
7f66421369
Fix loading characters
2023-04-05 14:22:32 -03:00
oobabooga
e722c240af
Add Instruct mode
2023-04-05 13:54:50 -03:00
oobabooga
3d6cb5ed63
Minor rewrite
2023-04-05 01:21:40 -03:00
oobabooga
f3a2e0b8a9
Disable pre_layer when the model type is not llama
2023-04-05 01:19:26 -03:00
catalpaaa
4ab679480e
allow quantized model to be loaded from model dir ( #760 )
2023-04-04 23:19:38 -03:00
oobabooga
ae1fe45bc0
One more cache reset
2023-04-04 23:15:57 -03:00
oobabooga
8ef89730a5
Try to better handle browser image cache
2023-04-04 23:09:28 -03:00
oobabooga
cc6c7a37f3
Add make_thumbnail function
2023-04-04 23:03:58 -03:00
oobabooga
80dfba05f3
Better crop/resize cached images
2023-04-04 22:52:15 -03:00
oobabooga
65d8a24a6d
Show profile pictures in the Character tab
2023-04-04 22:28:49 -03:00
OWKenobi
ee4547cd34
Detect "vicuna" as llama model type ( #772 )
2023-04-04 13:23:27 -03:00
oobabooga
b24147c7ca
Document --pre_layer
2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270
Update settings template
2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly ( #743 )
...
* Add greetings field
* add greeting field and make it interactive
* Minor changes
* Fix a bug
* Simplify clear_chat_log
* Change a label
* Minor change
* Simplifications
* Simplification
* Simplify loading the default character history
* Fix regression
---------
Co-authored-by: oobabooga
2023-04-03 12:16:15 -03:00
Alex "mcmonkey" Goodwin
8b1f20aa04
Fix some old JSON characters not loading ( #740 )
2023-04-03 10:49:28 -03:00
oobabooga
8b442305ac
Rename another variable
2023-04-03 01:15:20 -03:00
oobabooga
08448fb637
Rename a variable
2023-04-03 01:02:11 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
2023-04-03 00:56:14 -03:00
Alex "mcmonkey" Goodwin
ea97303509
Apply dialogue format in all character fields not just example dialogue ( #650 )
2023-04-02 21:54:29 -03:00
TheTerrasque
2157bb4319
New yaml character format ( #337 from TheTerrasque/feature/yaml-characters)
...
This doesn't break backward compatibility with JSON characters.
2023-04-02 20:34:25 -03:00
oobabooga
5f3f3faa96
Better handle CUDA out of memory errors in chat mode
2023-04-02 17:48:00 -03:00
oobabooga
b0890a7925
Add shared.is_chat() function
2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py
2023-04-01 13:56:47 -03:00
oobabooga
fcda3f8776
Add also_return_rows to generate_chat_prompt
2023-04-01 01:12:13 -03:00
oobabooga
2c52310642
Add --threads flag for llama.cpp
2023-03-31 21:18:05 -03:00
oobabooga
eeafd60713
Fix streaming
2023-03-31 19:05:38 -03:00
oobabooga
52065ae4cd
Add repetition_penalty
2023-03-31 19:01:34 -03:00
oobabooga
2259143fec
Fix llama.cpp with --no-stream
2023-03-31 18:43:45 -03:00
oobabooga
3a47a602a3
Detect ggml*.bin files automatically
2023-03-31 17:18:21 -03:00
oobabooga
0aee7341d8
Properly count tokens/s for llama.cpp in chat mode
2023-03-31 17:04:32 -03:00
oobabooga
ea3ba6fc73
Merge branch 'feature/llamacpp' of github.com:thomasantony/text-generation-webui into thomasantony-feature/llamacpp
2023-03-31 14:45:53 -03:00
oobabooga
09b0a3aafb
Add repetition_penalty
2023-03-31 14:45:17 -03:00
oobabooga
4d98623041
Merge branch 'main' into feature/llamacpp
2023-03-31 14:37:04 -03:00
oobabooga
4c27562157
Minor changes
2023-03-31 14:33:46 -03:00
oobabooga
9d1dcf880a
General improvements
2023-03-31 14:27:01 -03:00
oobabooga
770ff0efa9
Merge branch 'main' of github.com:oobabooga/text-generation-webui
2023-03-31 12:22:22 -03:00
oobabooga
1d1d9e40cd
Add seed to settings
2023-03-31 12:22:07 -03:00
Maya
b246d17513
Fix type object is not subscriptable
...
Fix `type object is not subscriptable` on Python 3.8
2023-03-31 14:20:31 +03:00
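For context, the Python 3.8 pitfall behind the fix above: built-in types are not subscriptable as generics until 3.9, so annotations must come from typing.
```
# Works on Python 3.8; the commented variant raises
# "TypeError: 'type' object is not subscriptable" there.
from typing import Dict, List

def count_rows(rows: List[str]) -> Dict[str, int]:
    return {row: len(row) for row in rows}

# def count_rows(rows: list[str]) -> dict[str, int]:  # 3.9+ only
```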
oobabooga
d4a9b5ea97
Remove redundant preset (see the plot in #587 )
2023-03-30 17:34:44 -03:00
Thomas Antony
7fa5d96c22
Update to use new llamacpp API
2023-03-30 11:23:05 +01:00
Thomas Antony
79fa2b6d7e
Add support for alpaca
2023-03-30 11:23:04 +01:00
Thomas Antony
a5f5736e74
Add to text_generation.py
2023-03-30 11:22:38 +01:00
Thomas Antony
7745faa7bb
Add llamacpp to models.py
2023-03-30 11:22:37 +01:00
Thomas Antony
7a562481fa
Initial version of llamacpp_model.py
2023-03-30 11:22:07 +01:00
oobabooga
a21e580782
Move an import
2023-03-29 22:50:58 -03:00
oobabooga
55755e27b9
Don't hardcode prompts in the settings dict/json
2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160
Adapt to the new model names
2023-03-29 21:47:36 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models
2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636
2023-03-29 11:48:17 -03:00
oobabooga
1edfb96778
Fix loading extensions from within the interface
2023-03-28 23:27:02 -03:00
oobabooga
304f812c63
Gracefully handle CUDA out of memory errors with streaming
2023-03-28 19:20:50 -03:00
oobabooga
010b259dde
Update documentation
2023-03-28 17:46:00 -03:00
oobabooga
0bec15ebcd
Reorder imports
2023-03-28 17:34:15 -03:00
Maya Eary
41ec682834
Disable kernel threshold for gpt-j
2023-03-28 22:45:38 +03:00
Maya
1ac003d41c
Merge branch 'oobabooga:main' into feature/gpt-j-4bit-v2
2023-03-28 22:30:39 +03:00
Maya Eary
1c075d8d21
Fix typo
2023-03-28 20:43:50 +03:00
Maya Eary
c8207d474f
Generalized load_quantized
2023-03-28 20:38:55 +03:00
oobabooga
8579fe51dd
Fix new lines in the HTML tab
2023-03-28 12:59:34 -03:00
Alex "mcmonkey" Goodwin
e817fac542
better defaults
2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf
implement initial Raw Text File Input
...
Also bump the default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin
b749952fe3
change number minimums to 0
...
Gradio calculates 'step' relative to the minimum, so at '1' the step values were all offset awkwardly. 0 isn't a valid value, so just don't slam the slider all the way to the left.
2023-03-27 21:22:43 -07:00
Alex "mcmonkey" Goodwin
ec6224f556
use new shared.args.lora_dir
2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
31f04dc615
Merge branch 'main' into add-train-lora-tab
2023-03-27 20:03:30 -07:00
oobabooga
53da672315
Fix FlexGen
2023-03-27 23:44:21 -03:00
oobabooga
ee95e55df6
Fix RWKV tokenizer
2023-03-27 23:42:29 -03:00
oobabooga
036163a751
Change description
2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3
Some simplifications
2023-03-27 23:29:52 -03:00
oobabooga
fde92048af
Merge branch 'main' into catalpaaa-lora-and-model-dir
2023-03-27 23:16:44 -03:00
Alex "mcmonkey" Goodwin
8a97f6ba29
corrections per the PR comments
2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6
couple missed camelCases
2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin
6368dad7db
Fix camelCase to snake_case to match repo format standard
2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4
Small style changes
2023-03-27 21:24:39 -03:00
oobabooga
c2cad30772
Merge branch 'main' into mcmonkey4eva-add-train-lora-tab
2023-03-27 21:05:44 -03:00
Alex "mcmonkey" Goodwin
9ced75746d
add total time estimate
2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d
interrupt button
2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95
initial progress tracker in UI
2023-03-27 10:25:08 -07:00
oobabooga
48a6c9513e
Merge pull request #572 from clusterfudge/issues/571
...
Potential fix for issues/571
2023-03-27 14:06:38 -03:00
Alex "mcmonkey" Goodwin
c07bcd0850
add some outputs to indicate progress updates (sorta)
...
Actual progressbar still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
oobabooga
af65c12900
Change Stop button behavior
2023-03-27 13:23:59 -03:00
Alex "mcmonkey" Goodwin
d911c22af9
use shared rows to make the LoRA Trainer interface a bit more compact / clean
2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8
Merge branch 'main' into add-train-lora-tab
2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5
Handle unloading LoRA from dropdown menu icon
2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c
Change notebook mode appearance
2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size ( #530 )
...
**Warning: old 4-bit weights will not work anymore!**
See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
Sean Fitzgerald
0bac80d9eb
Potential fix for issues/571
2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1
make 'model' variables less ambiguous
2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e
document options better
2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0
fix error from prepare call running twice in a row
2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c
automatically strip empty data entries (for better alpaca dataset compat)
2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a
initial lora training tab
2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559
2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
...
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir
2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps
2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
...
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf
lora-dir, model-dir and login auth
...
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
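A hedged sketch of parsing the credentials format described above; the file name is hypothetical, and Gradio's launch() accepts a list of (username, password) tuples for its auth parameter:
```
# Sketch: "user1:pass1,user2:pass2" -> [("user1", "pass1"), ("user2", "pass2")]
from pathlib import Path

def load_auth(path):
    raw = Path(path).read_text().strip()
    return [tuple(pair.split(':', 1)) for pair in raw.split(',') if pair]

# e.g. demo.launch(auth=load_auth('login.txt'))  # 'login.txt' is hypothetical
```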
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident)
2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident)
2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py
2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py
2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
...
This allows the bot to send messages by clicking on Generate with empty inputs.
2023-03-24 17:22:22 -03:00
oobabooga
6e1b16c2aa
Update html_generator.py
2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py
2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload
2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py
2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
...
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts
2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import
2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import
2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py
2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py
2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode
2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt)
2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names ( #518 )
...
* Update shared.py
* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode
2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode
2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming
2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply
2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab
2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function
2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments
2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add -gptq-preload for 4-bit offloading ( #460 )
...
This works on a 4GB card now:
```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6
Do not send empty user input as part of the prompt.
...
However, if extensions modify the empty prompt to be non-empty, it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload
2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
...
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling ( #454 )
...
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9
--no-cache and --gpu-memory in MiB for fine VRAM control
2023-03-19 19:21:41 -03:00
oobabooga
a78b6508fc
Make custom LoRAs work by default #385
2023-03-19 12:11:35 -03:00
Maya
acdbd6b708
Check if app should display extensions ui
2023-03-19 13:31:21 +00:00
Maya
81c9d130f2
Fix global
2023-03-19 13:25:49 +00:00
Maya
099d7a844b
Add setup method to extensions
2023-03-19 13:22:24 +00:00
oobabooga
c753261338
Disable stop_at_newline by default
2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e
Don't include PeftModel every time
2023-03-18 10:55:24 -03:00
oobabooga
e26763a510
Minor changes
2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5
clean up duplicated code
2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee
add mps support on apple silicon
2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6
Add some LoRA params
2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c
Use markdown in the "HTML" tab
2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4
Add a comment
2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
...
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075
Remove unused import
2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d
Sort the imports
2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74
Remove LoRA tab, move it into the Parameters menu
2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e
Several QoL changes related to LoRA
2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_Loader.py
...
Correcting decoder layer for renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c
Add files
2023-03-16 21:35:53 -03:00
oobabooga
104293f411
Add LoRA support
2023-03-16 21:31:39 -03:00
oobabooga
ee164d1821
Don't split the layers in 8-bit mode by default
2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333
Small changes
2023-03-16 13:34:23 -03:00
awoo
83cb20aad8
Add support for --gpu-memory with --load-in-8bit
2023-03-16 18:42:53 +03:00
oobabooga
1c378965e1
Remove unused imports
2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077
Keep GALACTICA special tokens ( #300 )
2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092
Add Interface mode tab
2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd
Make the "no GPU has been detected" message more descriptive
2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee
Show/hide the extensions block using javascript
2023-03-15 16:35:28 -03:00
oobabooga
348596f634
Fix broken extensions
2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8
Optimize the HTML generation speed
2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893
Minor fix
2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5
Rename a file
2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f
Further reorganize the UI
2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352
Prevent *Is typing* from disappearing instantly while streaming
2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1
Move all css/js into separate files
2023-03-15 12:35:11 -03:00
oobabooga
693b53d957
Merge branch 'main' into HideLord-main
2023-03-15 12:08:56 -03:00
oobabooga
1413931705
Add a header bar and redesign the interface ( #293 )
2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6
Add 'hallucinations' filter #326
...
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga
afc5339510
Remove "eval" statements from text generation functions
2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7
Rename a file, add deprecation warning for --load-in-4bit
2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f
Merge branch 'main' into Zerogoki00-opt4-bit
2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16
Implement "*Is recording a voice message...*" for TTS #303
2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244
Some minor fixes to the GPTQ loader
2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6
use updated load_quantized
2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a
determine model type from model name
2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e
remove default value from argument
2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main
2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c
make argument non case-sensitive
2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3
rename method
2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc
add argument --gptq-model-type and remove duplicate arguments
2023-03-13 20:01:34 +03:00
Ayanami Rei
edbc61139f
use new quant loader
2023-03-13 20:00:38 +03:00
Ayanami Rei
345b6dee8c
refactor quant models loader and add support of OPT
2023-03-13 19:59:57 +03:00
oobabooga
66b6971b61
Update README
2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f
Document --auto-launch
2023-03-13 12:43:33 -03:00
oobabooga
372363bc3d
Fix GPTQ load_quant call on Windows
2023-03-13 12:07:02 -03:00
oobabooga
0c224cf4f4
Fix GALACTICA ( #285 )
2023-03-13 10:32:28 -03:00
oobabooga
2c4699a7e9
Change a comment
2023-03-13 00:20:02 -03:00
oobabooga
0a7acb3bd9
Remove redundant comments
2023-03-13 00:12:21 -03:00
oobabooga
77294b27dd
Use str(Path) instead of os.path.abspath(Path)
2023-03-13 00:08:01 -03:00
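The simplification above in one snippet; note that str(Path) keeps the path as written, while os.path.abspath also resolves it against the current directory, so the swap is only safe where absolute resolution was incidental:
```
import os
from pathlib import Path

p = Path('models') / 'llama-7b'
print(os.path.abspath(p))  # old: also resolves against the cwd
print(str(p))              # new: plain string form of the path
```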
oobabooga
b9e0712b92
Fix Open Assistant
2023-03-12 23:58:25 -03:00
oobabooga
1ddcd4d0ba
Clean up silero_tts
...
This should only be used with --no-stream.
The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
HideLord
683556f411
Adding markdown support and slight refactoring.
2023-03-12 21:34:09 +02:00