* Initial commit
* Initial commit with new code
* Add comments
* Move GPTQ out of if
* Fix install on Arch Linux
* Fix case where install was aborted
If the install was aborted before a model was downloaded, the webui wouldn't run.
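Roughly, the fix amounts to checking for a downloaded model before launch. A minimal sketch of that idea (the models directory layout and the downloader hook are assumptions, not the actual script):

```python
from pathlib import Path

def has_downloaded_model(models_dir: Path = Path("models")) -> bool:
    """True if at least one model subfolder exists (directory layout assumed)."""
    return models_dir.is_dir() and any(p.is_dir() for p in models_dir.iterdir())

if not has_downloaded_model():
    # A previously aborted install can leave the models folder empty;
    # point the user back at the downloader instead of failing at startup.
    print("No models found; run the model downloader before starting the webui.")
```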
* Update start_windows.bat
Add necessary flags to Miniconda installer
Disable Start Menu shortcut creation
Disable SSL verification on Conda
Change Python version to latest 3.10; I've noticed that explicitly specifying 3.10.9 can break the included Python installation
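The change itself lives in start_windows.bat. As a Python-flavored sketch of the equivalent commands (install path, environment name, and installer filename are placeholders; the shortcut-suppression flag is omitted because its exact name depends on the installer version):

```python
import subprocess

INSTALL_DIR = r"C:\textgen\installer_files\conda"  # hypothetical install path

# Silent install: /S and /D= are standard NSIS installer switches.
subprocess.run(
    ["Miniconda3-latest-Windows-x86_64.exe", "/S", f"/D={INSTALL_DIR}"],
    check=True,
)

conda = rf"{INSTALL_DIR}\Scripts\conda.exe"

# Turn off SSL verification so downloads work behind broken certificate chains.
subprocess.run([conda, "config", "--set", "ssl_verify", "false"], check=True)

# Pin only the minor version ("3.10"), not a patch release like "3.10.9",
# so conda resolves the newest 3.10.x instead of a potentially broken pin.
subprocess.run([conda, "create", "-y", "-n", "textgen", "python=3.10"], check=True)
```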
* Update bitsandbytes wheel link to 0.38.1
Disable SSL verification on Conda
* Add check for spaces in path
Installation of Miniconda will fail in this case
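The real check sits in the launcher scripts; the same idea in Python:

```python
import os
import sys

install_path = os.getcwd()

# Miniconda's installer fails when the target path contains spaces,
# so bail out early with a readable message instead of a cryptic install error.
if " " in install_path:
    sys.exit(f"This install cannot run from a path containing spaces: {install_path}")
```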
* Mirror changes to mac and linux scripts
* Start with model-menu
* Add updaters
* Fix line endings
* Add check for path with spaces
* Fix one-click updating
* Fix one-click updating
* Clean up update scripts
* Add environment scripts
---------
Co-authored-by: jllllll <3887729+jllllll@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
* add support for other model types
dependent on future-peft-changes, but with a fallback so it works for now
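A sketch of the fallback idea, assuming PEFT's LoraConfig and a hand-maintained map of projection-layer names per architecture (the entries shown are illustrative and would need checking against each model's code):

```python
from peft import LoraConfig

# Projection layer names per model architecture; extend as support lands upstream.
TARGET_MODULES = {
    "llama": ["q_proj", "v_proj"],
    "gpt_neox": ["query_key_value"],
}

def make_lora_config(model_type: str, rank: int = 32, alpha: int = 64) -> LoraConfig:
    # Unknown architectures fall back to the llama layer names so training
    # still functions until PEFT handles them natively.
    modules = TARGET_MODULES.get(model_type, TARGET_MODULES["llama"])
    return LoraConfig(
        r=rank,
        lora_alpha=alpha,
        target_modules=modules,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
```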
* use encoding=utf8 for training format
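That is, open the format file with an explicit encoding (assuming JSON format templates):

```python
import json

def load_format(path: str) -> dict:
    # Explicit UTF-8 avoids the Windows locale default mangling non-ASCII templates.
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```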
* make shuffling optional
and describe dropout a bit more
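A sketch of the optional shuffle, assuming the training data is held in a Hugging Face Dataset:

```python
from datasets import Dataset

def prepare_dataset(samples: list[dict], shuffle: bool = True) -> Dataset:
    dataset = Dataset.from_list(samples)
    if shuffle:
        # Shuffling is the sensible default, but ordered data (e.g. a curriculum)
        # is a legitimate reason to turn it off.
        dataset = dataset.shuffle(seed=42)
    return dataset
```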
* add eval_steps to control evaluation
* make callbacks not depend on globals
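For example, a callback can take whatever it needs through its constructor rather than reading module-level globals (sketch using transformers' TrainerCallback):

```python
from transformers import TrainerCallback

class StatusCallback(TrainerCallback):
    """Reports progress through an injected handle instead of module globals."""

    def __init__(self, total_steps: int, report):
        self.total_steps = total_steps
        self.report = report  # callable supplied by the caller, e.g. print

    def on_step_end(self, args, state, control, **kwargs):
        self.report(f"step {state.global_step}/{self.total_steps}")
```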
* make save steps controllable
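Both step intervals above (eval_steps and the save interval) map onto fields of transformers' TrainingArguments; the option names exposed in the UI may differ, and the values here are placeholders:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lora-out",            # hypothetical output directory
    per_device_train_batch_size=1,
    num_train_epochs=3,
    evaluation_strategy="steps",      # run evaluation every eval_steps
    eval_steps=100,
    save_strategy="steps",            # write a checkpoint every save_steps
    save_steps=500,
)
```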
* placeholder for initial loading-existing-model support
and variable name cleanup
* save/load parameters
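In other words, dump the chosen hyperparameters alongside the LoRA output and read them back later. A minimal sketch (the file name is an assumption):

```python
import json
from pathlib import Path

def save_params(output_dir: str, params: dict) -> None:
    # Store the chosen hyperparameters next to the LoRA so a later run
    # can reload them instead of retyping everything.
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(out / "training_parameters.json", "w", encoding="utf-8") as f:
        json.dump(params, f, indent=2)

def load_params(output_dir: str) -> dict:
    with open(Path(output_dir) / "training_parameters.json", encoding="utf-8") as f:
        return json.load(f)
```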
* last bit of cleanup
* remove `gptq_bits` ref as main branch removed that setting
* add higher_rank_limit option
A rank of 2048 is basically unreachable due to VRAM, but I trained at 1536 with batch size = 1 on a 7B model.
Note that it's in the do_train input just so it gets saved as a parameter.
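With the higher limit, a config like the following becomes selectable (sketch; alpha = 2x rank is a common convention, not something this commit dictates):

```python
from peft import LoraConfig

# VRAM scales with the rank: 2048 is basically unreachable, but r=1536 with
# batch size 1 reportedly fit when training a 7B model.
high_rank_config = LoraConfig(
    r=1536,
    lora_alpha=3072,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```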
* fix math on save_steps