oobabooga
b705b4210c
Minor changes to training.py
2023-04-16 03:08:37 -03:00
oobabooga
5c513a5f5c
Make training.py more readable
2023-04-16 02:46:27 -03:00
Alex "mcmonkey" Goodwin
a3eec62b50
LoRA trainer improvements part 3 (#1098)
...
* add support for other model types
dependent on future PEFT changes, but with a fallback so it functions now
* use encoding=utf8 for training format
* make shuffling optional
and describe dropout a bit more
* add eval_steps to control evaluation
* make callbacks not depend on globals
* make save steps controllable
* placeholder for initial loading-existing-model support
and variable name cleanup
* save/load parameters
* last bit of cleanup
* remove `gptq_bits` ref as main branch removed that setting
* add higher_rank_limit option
2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model.
Note that the option sits in the do_train inputs just so it gets saved as a parameter (see the sketch after this list).
* fix math on save_steps
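A minimal sketch of how a rank-limit toggle like this could be wired up in Gradio, the UI framework the repo uses; the identifiers and limit values below are illustrative, not the actual training.py names:

    import gradio as gr

    RANK_MAX = 1024        # illustrative default cap
    RANK_MAX_HIGH = 2048   # mostly unreachable in practice due to VRAM

    with gr.Blocks() as demo:
        higher_rank_limit = gr.Checkbox(label='Enable higher ranks')
        lora_rank = gr.Slider(minimum=0, maximum=RANK_MAX, value=32, step=4, label='LoRA Rank')

        def change_rank_limit(use_higher_ranks: bool):
            # Raise or restore the slider's maximum without touching its current value
            return gr.update(maximum=RANK_MAX_HIGH if use_higher_ranks else RANK_MAX)

        higher_rank_limit.change(change_rank_limit, higher_rank_limit, lora_rank)

Keeping the checkbox among the do_train inputs means it rides along whenever the UI saves or reloads a parameter set, per the note above.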
2023-04-16 02:35:13 -03:00
oobabooga
abef355ed0
Remove deprecated flag
2023-04-15 01:21:19 -03:00
Lukas
5ad92c940e
LoRA training fixes (#970)
...
Fix the wrong input format being picked
Fix a crash when a dataset entry has an attribute whose value is None
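A hedged sketch of the None-guard idea from that second fix; the helper name is hypothetical, not the actual patch:

    def sanitize_entry(data_point: dict) -> dict:
        # Replace None attribute values with empty strings so prompt
        # formatting never crashes on a partially filled dataset entry.
        return {key: ('' if value is None else value) for key, value in data_point.items()}

    print(sanitize_entry({'instruction': 'Summarize', 'input': None, 'output': 'ok'}))
    # -> {'instruction': 'Summarize', 'input': '', 'output': 'ok'}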
2023-04-12 11:38:01 -03:00
IggoOnCode
09d8119e3c
Add CPU LoRA training (#938)
...
(It's very slow)
2023-04-10 17:29:00 -03:00
oobabooga
768354239b
Change training file encoding
2023-04-07 11:15:52 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability (#862)
2023-04-07 00:15:45 -03:00
Alex "mcmonkey" Goodwin
0c7ef26981
LoRA trainer improvements (#763)
2023-04-06 02:04:11 -03:00
oobabooga
2a267011dc
Use Path.stem for simplicity
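For reference, Path.stem from the standard library's pathlib returns the filename without its extension; the path below is made up:

    from pathlib import Path

    path = Path('training/datasets/alpaca_data.json')
    print(path.stem)  # -> 'alpaca_data'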
2023-04-03 00:56:14 -03:00
oobabooga
58349f44a0
Handle training exception for unsupported models
2023-03-29 11:55:34 -03:00
oobabooga
a6d0373063
Fix training dataset loading #636
2023-03-29 11:48:17 -03:00
Alex "mcmonkey" Goodwin
e817fac542
better defaults
2023-03-27 22:29:23 -07:00
Alex "mcmonkey" Goodwin
2e08af4edf
implement initial Raw Text File Input
...
also bump the default Rank & Alpha to values that will make sense in testing if you don't know what you're doing and just leave the defaults.
2023-03-27 22:15:32 -07:00
Alex "mcmonkey" Goodwin
b749952fe3
change number minimums to 0
...
Gradio calculates 'step' relative to the minimum, so with a minimum of 1 the step values were all offset awkwardly. 0 isn't actually a valid value, so just don't slam the slider all the way to the left.
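For illustration, a slider built the way the commit describes; the numbers here are made up:

    import gradio as gr

    # With minimum=1 and step=4 the selectable values land on 1, 5, 9, ...;
    # with minimum=0 they land on the intended 0, 4, 8, ... grid.
    lora_rank = gr.Slider(minimum=0, maximum=1024, value=8, step=4, label='LoRA Rank')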
2023-03-27 21:22:43 -07:00
Alex "mcmonkey" Goodwin
ec6224f556
use new shared.args.lora_dir
2023-03-27 20:04:16 -07:00
Alex "mcmonkey" Goodwin
8a97f6ba29
corrections per the PR comments
2023-03-27 18:39:06 -07:00
Alex "mcmonkey" Goodwin
7fab7ea1b6
fix a couple of missed camelCase names
2023-03-27 18:19:06 -07:00
Alex "mcmonkey" Goodwin
6368dad7db
Fix camelCase to snake_case to match repo format standard
2023-03-27 18:17:42 -07:00
oobabooga
2f0571bfa4
Small style changes
2023-03-27 21:24:39 -03:00
Alex "mcmonkey" Goodwin
9ced75746d
add total time estimate
2023-03-27 10:57:27 -07:00
Alex "mcmonkey" Goodwin
16ea4fc36d
interrupt button
2023-03-27 10:43:01 -07:00
Alex "mcmonkey" Goodwin
8fc723fc95
initial progress tracker in UI
2023-03-27 10:25:08 -07:00
Alex "mcmonkey" Goodwin
c07bcd0850
add some outputs to indicate progress updates (sorta)
...
An actual progress bar is still needed. Also minor formatting fixes.
2023-03-27 09:41:06 -07:00
Alex "mcmonkey" Goodwin
d911c22af9
use shared rows to make the LoRA Trainer interface a bit more compact and clean
2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1
make 'model' variables less ambiguous
2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e
document options better
2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0
fix error from prepare call running twice in a row
2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c
automatically strip empty data entries (for better Alpaca dataset compat)
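A minimal sketch of what stripping empty entries could look like, assuming Alpaca-style dict entries where optional fields such as 'input' may be empty strings; the helper name is hypothetical:

    def strip_empty_fields(data_point: dict) -> dict:
        # Drop fields whose values are empty or whitespace-only strings, so an
        # optional Alpaca column like 'input' doesn't leave a dangling section.
        return {key: value for key, value in data_point.items()
                if not (isinstance(value, str) and value.strip() == '')}

    print(strip_empty_fields({'instruction': 'Summarize', 'input': '', 'output': 'ok'}))
    # -> {'instruction': 'Summarize', 'output': 'ok'}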
2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a
initial LoRA training tab
2023-03-25 12:08:26 -07:00