Commit Graph

435 Commits

Author SHA1 Message Date
Light cf58058c33 Change warmup_autotune to a negative switch. 2023-04-13 20:59:49 +08:00
Light 15d5a043f2 Merge remote-tracking branch 'origin/main' into triton 2023-04-13 19:38:51 +08:00
oobabooga 7dfbe54f42 Add --model-menu option 2023-04-12 21:24:26 -03:00
oobabooga 388038fb8e Update settings-template.json 2023-04-12 18:30:43 -03:00
oobabooga 10e939c9b4 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-12 17:21:59 -03:00
oobabooga 1566d8e344 Add model settings to the Models tab 2023-04-12 17:20:18 -03:00
Light a405064ceb Better dispatch. 2023-04-13 01:48:17 +08:00
Light f3591ccfa1 Keep minimal change. 2023-04-12 23:26:06 +08:00
Lukas 5ad92c940e lora training fixes (#970) 2023-04-12 11:38:01 -03:00
    * Fix wrong input format being picked
    * Fix crash when an entry in the dataset has an attribute of value None
oobabooga 80f4eabb2a Fix send_pictures extension 2023-04-12 10:27:06 -03:00
oobabooga 8265d45db8 Add send dummy message/reply buttons 2023-04-11 22:21:41 -03:00
    Useful for starting a new reply.
oobabooga 37d52c96bc Fix Continue in chat mode 2023-04-11 21:46:17 -03:00
oobabooga cacbcda208 Two new options: truncation length and ban eos token 2023-04-11 18:46:06 -03:00
catalpaaa 78bbc66fc4 allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
oobabooga 0f212093a3 Refactor the UI 2023-04-11 11:46:30 -03:00
    A single dictionary called 'interface_state' is now passed as input to all functions. The values are updated only when necessary.
    The goal is to make it easier to add new elements to the UI.
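The refactor in 0f212093a3 describes a common UI pattern: collect every widget value into one dictionary and pass that dictionary to every handler, so adding a new element means adding one key rather than touching every function signature. A minimal sketch of the idea, assuming hypothetical handler names (only 'interface_state' comes from the commit; this is not the repository's actual code):

```python
# Sketch of the 'interface_state' pattern: one dict of UI values is
# threaded through every callback. All names except 'interface_state'
# are hypothetical stand-ins for the real UI handlers.

def gather_interface_values(temperature, top_p, max_new_tokens):
    """Collect the current values of all UI widgets into one dict."""
    return {
        'temperature': temperature,
        'top_p': top_p,
        'max_new_tokens': max_new_tokens,
    }

def generate_reply(prompt, interface_state):
    """Handlers take the whole state dict instead of many positional args."""
    settings = ', '.join(f"{k}={v}" for k, v in sorted(interface_state.items()))
    return f"[generating for {prompt!r} with {settings}]"

interface_state = gather_interface_values(0.7, 0.9, 200)
print(generate_reply("Hello", interface_state))
```

The payoff is that a new slider or checkbox only needs a new dictionary key, not a changed signature on every function in the call chain.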
IggoOnCode 09d8119e3c Add CPU LoRA training (#938) 2023-04-10 17:29:00 -03:00
    (It's very slow)
Alex "mcmonkey" Goodwin 0caf718a21 add on-page documentation to parameters (#1008) 2023-04-10 17:19:12 -03:00
oobabooga bd04ff27ad Make the bos token optional 2023-04-10 16:44:22 -03:00
oobabooga 0f1627eff1 Don't treat Instruct mode histories as regular histories 2023-04-10 15:48:07 -03:00
    * They must now be saved/loaded manually
    * Also improved browser caching of pfps
    * Also changed the global default preset
oobabooga 769aa900ea Print the used seed 2023-04-10 10:53:31 -03:00
Alex "mcmonkey" Goodwin 30befe492a fix random seeds to actually randomize 2023-04-10 06:29:10 -07:00
    Without this fix, manual seeds get locked in.
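The "locked in" bug described in 30befe492a is a classic sentinel-overwrite mistake: -1 means "pick a random seed", but if the drawn seed is written back over the -1, every later generation reuses it. A hedged sketch of the shape of such a fix (the function name is hypothetical, and `random.seed` stands in for the actual RNG seeding):

```python
import random

def set_manual_seed(seed):
    """Seed the RNG, drawing a fresh seed when the sentinel -1 is passed.

    Returning the drawn seed (instead of writing it back into the stored
    setting) keeps -1 in place, so the next call randomizes again rather
    than locking in the first drawn value.
    """
    if seed == -1:
        seed = random.randint(1, 2**31)
    random.seed(seed)  # stand-in for e.g. torch.manual_seed(seed)
    return seed

used = set_manual_seed(-1)   # draws a new seed on every call
fixed = set_manual_seed(42)  # honors an explicit seed
```

Returning the concrete seed also supports the neighboring 769aa900ea commit ("Print the used seed"): the caller can display what was actually used without mutating the user's -1 setting.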
oobabooga 1911504f82 Minor bug fix 2023-04-09 23:45:41 -03:00
oobabooga dba2000d2b Do things that I am not proud of 2023-04-09 23:40:49 -03:00
oobabooga 65552d2157 Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-04-09 23:19:53 -03:00
oobabooga 8c6155251a More robust 4-bit model loading 2023-04-09 23:19:28 -03:00
MarkovInequality 992663fa20 Added xformers support to Llama (#950) 2023-04-09 23:08:40 -03:00
Brian O'Connor 625d81f495 Update character log logic (#977) 2023-04-09 22:20:21 -03:00
    * When logs are cleared, save the cleared log over the old log files
    * Generate a log file when a character is loaded the first time
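The two rules in 625d81f495 — clearing overwrites the old log file, and loading a character creates its log on first use — can be sketched as follows (paths, file format, and all function names here are hypothetical illustrations, not the repository's actual persistence code):

```python
import json
from pathlib import Path

LOGS_DIR = Path("logs")  # hypothetical log location

def save_history(character, history):
    """Persist the chat history, overwriting any existing log file."""
    LOGS_DIR.mkdir(exist_ok=True)
    (LOGS_DIR / f"{character}.json").write_text(json.dumps(history))

def clear_history(character):
    """Clearing saves the now-empty log over the old file,
    so no stale history survives a clear."""
    save_history(character, [])

def load_character(character):
    """Create a log file the first time a character is loaded."""
    path = LOGS_DIR / f"{character}.json"
    if not path.exists():
        save_history(character, [])
    return json.loads(path.read_text())
```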
oobabooga a3085dba07 Fix LlamaTokenizer eos_token (attempt) 2023-04-09 21:19:39 -03:00
oobabooga 120f5662cf Better handle spaces for Continue 2023-04-09 20:37:31 -03:00
oobabooga b27d757fd1 Minor change 2023-04-09 20:06:20 -03:00
oobabooga d29f4624e9 Add a Continue button to chat mode 2023-04-09 20:04:16 -03:00
oobabooga cc693a7546 Remove obsolete code 2023-04-09 00:51:07 -03:00
oobabooga cb169d0834 Minor formatting changes 2023-04-08 17:34:07 -03:00
oobabooga 0b458bf82d Simplify a function 2023-04-07 21:37:41 -03:00
Φφ ffd102e5c0 SD Api Pics extension, v.1.1 (#596) 2023-04-07 21:36:04 -03:00
oobabooga 1dc464dcb0 Sort imports 2023-04-07 14:42:03 -03:00
oobabooga 42ea6a3fc0 Change the timing for setup() calls 2023-04-07 12:20:57 -03:00
oobabooga 768354239b Change training file encoding 2023-04-07 11:15:52 -03:00
oobabooga 6762e62a40 Simplifications 2023-04-07 11:14:32 -03:00
oobabooga a453d4e9c4 Reorganize some chat functions 2023-04-07 11:07:03 -03:00
Maya 8fa182cfa7 Fix regeneration of first message in instruct mode (#881) 2023-04-07 10:45:42 -03:00
oobabooga 46c4654226 More PEP8 stuff 2023-04-07 00:52:02 -03:00
oobabooga ea6e77df72 Make the code more like PEP8 for readability (#862) 2023-04-07 00:15:45 -03:00
OWKenobi 310bf46a94 Instruction Character Vicuna, Instruction Mode Bugfix (#838) 2023-04-06 17:40:44 -03:00
oobabooga 113f94b61e Bump transformers (16-bit llama must be reconverted/redownloaded) 2023-04-06 16:04:03 -03:00
oobabooga 03cb44fc8c Add new llama.cpp library (2048 context, temperature, etc now work) 2023-04-06 13:12:14 -03:00
EyeDeck 39f3fec913 Broaden GPTQ-for-LLaMA branch support (#820) 2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin 0c7ef26981 Lora trainer improvements (#763) 2023-04-06 02:04:11 -03:00
oobabooga e94ab5dac1 Minor fixes 2023-04-06 01:43:10 -03:00
oobabooga 3f3e42e26c Refactor several function calls and the API 2023-04-06 01:22:15 -03:00