Sean Fitzgerald
0bac80d9eb
Potential fix for issues/571
2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1
make 'model' variables less ambiguous
2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e
document options better
2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
8134c4b334
add training/datasets to gitignore for #570
2023-03-25 12:41:18 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0
fix error from prepare call running twice in a row
2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c
automatically strip empty data entries (for better alpaca dataset compat)
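The empty-entry stripping described above can be sketched as follows; `clean_alpaca_entry` is a hypothetical helper name, not the repo's actual function:

```python
def clean_alpaca_entry(entry: dict) -> dict:
    """Drop fields whose values are empty or whitespace-only strings.

    Alpaca-format records often carry an empty "input" field; stripping
    it lets the prompt template skip that section cleanly.
    """
    return {
        key: value
        for key, value in entry.items()
        if not (isinstance(value, str) and value.strip() == "")
    }
```

For example, `{"instruction": "Summarize.", "input": "", "output": "..."}` would come back without the `"input"` key.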
2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a
initial lora training tab
2023-03-25 12:08:26 -07:00
Φφ
1a1e420e65
Silero_tts streaming fix
Temporarily suppress streaming during the audio response, as it would interfere with the audio (making it stutter and restart).
2023-03-25 21:33:30 +03:00
Alex "mcmonkey" Goodwin
9ccf505ccd
improve/simplify gitignore
- add repositories
- remove the redundant "/*" on folders
- remove the exclusions for files that already exist
2023-03-25 10:04:00 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559
2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir
2023-03-25 01:28:33 -07:00
oobabooga
70f9565f37
Update README.md
2023-03-25 02:35:30 -03:00
oobabooga
25be9698c7
Fix LoRA on mps
2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
d51cb8292b
Update server.py
yea i should go to bed
2023-03-24 17:36:31 -07:00
catalpaaa
9e2963e0c8
Update server.py
2023-03-24 17:35:45 -07:00
catalpaaa
ec2a1facee
Update server.py
2023-03-24 17:34:33 -07:00
catalpaaa
b37c54edcf
lora-dir, model-dir and login auth
Added lora-dir and model-dir arguments, plus a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
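The credentials format described above could be parsed roughly like this; `parse_auth` is a hypothetical name, and Gradio's `launch(auth=...)` accepts a list of such (username, password) pairs:

```python
def parse_auth(path: str) -> list[tuple[str, str]]:
    """Parse a credentials file in "u:pw,u:pw,..." format into
    (username, password) pairs, e.g. for Gradio's `auth=` argument."""
    with open(path) as f:
        text = f.read().strip()
    pairs = []
    for entry in text.split(","):
        if not entry:
            continue  # tolerate trailing commas
        user, _, password = entry.partition(":")
        pairs.append((user, password))
    return pairs
```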
2023-03-24 17:30:18 -07:00
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident)
2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident)
2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py
2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py
2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send a message when the user clicks Generate with an empty input.
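A minimal sketch of that behavior, assuming a simple turn list rather than the app's actual chat state:

```python
def append_user_turn(history: list, user_input: str) -> list:
    """Return history with the user turn appended only when the input
    is non-empty; an empty Generate click leaves the next turn to the bot."""
    turns = list(history)
    if user_input.strip():
        turns.append(("user", user_input))
    return turns
```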
2023-03-24 17:22:22 -03:00
oobabooga
6e1b16c2aa
Update html_generator.py
2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py
2023-03-24 17:17:29 -03:00
oobabooga
c14e598f14
Merge pull request #433 from mayaeary/fix/api-reload
Fix api extension duplicating
2023-03-24 16:56:10 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload
2023-03-24 16:54:41 -03:00
oobabooga
4a724ed22f
Reorder imports
2023-03-24 16:53:56 -03:00
oobabooga
8fad84abc2
Update extensions.py
2023-03-24 16:51:27 -03:00
oobabooga
d8e950d6bd
Don't load the model twice when using --lora
2023-03-24 16:30:32 -03:00
oobabooga
fd99995b01
Make the Stop button more consistent in chat mode
2023-03-24 15:59:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts
2023-03-24 02:03:30 -03:00
oobabooga
04417b658b
Update README.md
2023-03-24 01:40:43 -03:00
oobabooga
bb4cb22453
Download .pt files using download-model.py (for 4-bit models)
2023-03-24 00:49:04 -03:00
oobabooga
143b5b5edf
Mention one-click-bandaid in the README
2023-03-23 23:28:50 -03:00
EyeDeck
dcfd866402
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import
2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import
2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py
2023-03-23 22:12:24 -03:00
oobabooga
9bdb3c784d
Minor fix
2023-03-23 22:02:40 -03:00
oobabooga
b0abb327d8
Update LoRA.py
2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs
2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode
2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt)
2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py
* Update settings-template.json
2023-03-23 13:36:00 -03:00
Φφ
483d173d23
Code reuse + indication
Now shows a message in the console when unloading weights. Also, reload_model() calls unload_model() first to free memory, so multiple reloads won't overfill it.
2023-03-23 07:06:26 +03:00
Φφ
1917b15275
Unload and reload models on request
2023-03-23 07:06:26 +03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode
2023-03-23 01:05:13 -03:00