Updating Using-LoRAs.md doc to clarify resuming training (#1474)
Commit e03b873460 (parent fe02281477)
The Training tab in the interface can be used to train a LoRA. The parameters are self-documenting, and good defaults are included.

You can interrupt and resume LoRA training in this tab. As long as the name and rank are the same, training will resume from the `adapter_model.bin` in your LoRA folder. To resume from an earlier checkpoint instead, replace that file with the `adapter_model.bin` from one of the checkpoint folders, as sketched below. Note that the learning rate and step count are reset when you resume, so you may want to set the learning rate to the last rate reported in the console output.
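As a concrete illustration of the checkpoint-swap step, here is a minimal Python sketch. The folder layout is an assumption: it supposes a LoRA named `my-lora` under a `loras/` directory, with a `checkpoint-500` subfolder saved during training. Adjust the names and paths to match your own setup.

```python
from pathlib import Path
import shutil

# Hypothetical layout: loras/my-lora/ holds the latest adapter_model.bin,
# and checkpoint folders such as checkpoint-500/ were saved during training.
lora_dir = Path("loras/my-lora")
checkpoint_dir = lora_dir / "checkpoint-500"

# Keep a backup of the current adapter before overwriting it.
shutil.copy2(lora_dir / "adapter_model.bin", lora_dir / "adapter_model.bin.bak")

# Replace the adapter with the checkpoint's copy. Restarting training in the
# Training tab with the same name and rank will then resume from this state.
shutil.copy2(checkpoint_dir / "adapter_model.bin", lora_dir / "adapter_model.bin")
```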
LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570).
#### Using the original alpaca-lora code