Author | Commit | Message | Date
IggoOnCode | 09d8119e3c | Add CPU LoRA training (#938) (It's very slow) | 2023-04-10 17:29:00 -03:00
Alex "mcmonkey" Goodwin | 0caf718a21 | add on-page documentation to parameters (#1008) | 2023-04-10 17:19:12 -03:00
oobabooga | bd04ff27ad | Make the bos token optional | 2023-04-10 16:44:22 -03:00
oobabooga | 0f1627eff1 | Don't treat Instruct mode histories as regular histories (they must now be saved/loaded manually; also improved browser caching of profile pictures; also changed the global default preset) | 2023-04-10 15:48:07 -03:00
oobabooga | 769aa900ea | Print the used seed | 2023-04-10 10:53:31 -03:00
Alex "mcmonkey" Goodwin | 30befe492a | fix random seeds to actually randomize (without this fix, manual seeds get locked in) | 2023-04-10 06:29:10 -07:00
oobabooga | 1911504f82 | Minor bug fix | 2023-04-09 23:45:41 -03:00
oobabooga | dba2000d2b | Do things that I am not proud of | 2023-04-09 23:40:49 -03:00
oobabooga | 65552d2157 | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-04-09 23:19:53 -03:00
oobabooga | 8c6155251a | More robust 4-bit model loading | 2023-04-09 23:19:28 -03:00
MarkovInequality | 992663fa20 | Added xformers support to Llama (#950) | 2023-04-09 23:08:40 -03:00
Brian O'Connor | 625d81f495 | Update character log logic (#977): when logs are cleared, save the cleared log over the old log files; generate a log file when a character is loaded for the first time | 2023-04-09 22:20:21 -03:00
oobabooga | a3085dba07 | Fix LlamaTokenizer eos_token (attempt) | 2023-04-09 21:19:39 -03:00
oobabooga | 120f5662cf | Better handle spaces for Continue | 2023-04-09 20:37:31 -03:00
oobabooga | b27d757fd1 | Minor change | 2023-04-09 20:06:20 -03:00
oobabooga | d29f4624e9 | Add a Continue button to chat mode | 2023-04-09 20:04:16 -03:00
oobabooga | cc693a7546 | Remove obsolete code | 2023-04-09 00:51:07 -03:00
oobabooga | cb169d0834 | Minor formatting changes | 2023-04-08 17:34:07 -03:00
oobabooga | 0b458bf82d | Simplify a function | 2023-04-07 21:37:41 -03:00
Φφ | ffd102e5c0 | SD Api Pics extension, v.1.1 (#596) | 2023-04-07 21:36:04 -03:00
oobabooga | 1dc464dcb0 | Sort imports | 2023-04-07 14:42:03 -03:00
oobabooga | 42ea6a3fc0 | Change the timing for setup() calls | 2023-04-07 12:20:57 -03:00
oobabooga | 768354239b | Change training file encoding | 2023-04-07 11:15:52 -03:00
oobabooga | 6762e62a40 | Simplifications | 2023-04-07 11:14:32 -03:00
oobabooga | a453d4e9c4 | Reorganize some chat functions | 2023-04-07 11:07:03 -03:00
Maya | 8fa182cfa7 | Fix regeneration of first message in instruct mode (#881) | 2023-04-07 10:45:42 -03:00
oobabooga | 46c4654226 | More PEP8 stuff | 2023-04-07 00:52:02 -03:00
oobabooga | ea6e77df72 | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00
OWKenobi | 310bf46a94 | Instruction Character Vicuna, Instruction Mode Bugfix (#838) | 2023-04-06 17:40:44 -03:00
oobabooga | 113f94b61e | Bump transformers (16-bit llama must be reconverted/redownloaded) | 2023-04-06 16:04:03 -03:00
oobabooga | 03cb44fc8c | Add new llama.cpp library (2048 context, temperature, etc. now work) | 2023-04-06 13:12:14 -03:00
EyeDeck | 39f3fec913 | Broaden GPTQ-for-LLaMA branch support (#820) | 2023-04-06 12:16:48 -03:00
Alex "mcmonkey" Goodwin | 0c7ef26981 | Lora trainer improvements (#763) | 2023-04-06 02:04:11 -03:00
oobabooga | e94ab5dac1 | Minor fixes | 2023-04-06 01:43:10 -03:00
oobabooga | 3f3e42e26c | Refactor several function calls and the API | 2023-04-06 01:22:15 -03:00
SDS | 378d21e80c | Add LLaMA-Precise preset (#767) | 2023-04-05 18:52:36 -03:00
Forkoz | 8203ce0cac | Stop character pic from being cached when changing chars or clearing (#798) (tested on both Firefox and Chromium) | 2023-04-05 14:25:01 -03:00
oobabooga | 7f66421369 | Fix loading characters | 2023-04-05 14:22:32 -03:00
oobabooga | e722c240af | Add Instruct mode | 2023-04-05 13:54:50 -03:00
oobabooga | 3d6cb5ed63 | Minor rewrite | 2023-04-05 01:21:40 -03:00
oobabooga | f3a2e0b8a9 | Disable pre_layer when the model type is not llama | 2023-04-05 01:19:26 -03:00
catalpaaa | 4ab679480e | allow quantized model to be loaded from model dir (#760) | 2023-04-04 23:19:38 -03:00
oobabooga | ae1fe45bc0 | One more cache reset | 2023-04-04 23:15:57 -03:00
oobabooga | 8ef89730a5 | Try to better handle browser image cache | 2023-04-04 23:09:28 -03:00
oobabooga | cc6c7a37f3 | Add make_thumbnail function | 2023-04-04 23:03:58 -03:00
oobabooga | 80dfba05f3 | Better crop/resize cached images | 2023-04-04 22:52:15 -03:00
oobabooga | 65d8a24a6d | Show profile pictures in the Character tab | 2023-04-04 22:28:49 -03:00
OWKenobi | ee4547cd34 | Detect "vicuna" as llama model type (#772) | 2023-04-04 13:23:27 -03:00
oobabooga | b24147c7ca | Document --pre_layer | 2023-04-03 17:34:25 -03:00
oobabooga | 4c9ed09270 | Update settings template | 2023-04-03 14:59:26 -03:00