oobabooga
262f8ae5bb
Use default gr.Dataframe for evaluation table
2023-10-27 06:49:14 -07:00
Abhilash Majumder
778a010df8
Intel GPU support initialization ( #4340 )
2023-10-26 23:39:51 -03:00
adrianfiedler
4bc411332f
Fix broken links ( #4367 )
...
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-10-23 14:09:57 -03:00
omo
4405513ca5
Option to select/target additional linear modules/layers in LoRA training ( #4178 )
2023-10-22 15:57:19 -03:00
oobabooga
f17f7a6913
Increase the evaluation table height
2023-10-16 12:55:35 -07:00
oobabooga
188d20e9e5
Reduce the evaluation table height
2023-10-16 10:53:42 -07:00
oobabooga
71cac7a1b2
Increase the height of the evaluation table
2023-10-15 21:56:40 -07:00
oobabooga
fae8062d39
Bump to latest gradio (3.47) ( #4258 )
2023-10-10 22:20:49 -03:00
oobabooga
abe99cddeb
Extend evaluation slider bounds
2023-09-29 13:06:26 -07:00
oobabooga
1ca54faaf0
Improve --multi-user mode
2023-09-26 06:42:33 -07:00
John Smith
cc7b7ba153
fix lora training with alpaca_lora_4bit ( #3853 )
2023-09-11 01:22:20 -03:00
oobabooga
8545052c9d
Add the option to use samplers in the logit viewer
2023-08-22 20:18:16 -07:00
oobabooga
25e5eaa6a6
Remove outdated training warning
2023-08-22 13:16:44 -07:00
oobabooga
335c49cc7e
Bump peft and transformers
2023-08-22 13:14:59 -07:00
oobabooga
b96fd22a81
Refactor the training tab ( #3619 )
2023-08-18 16:58:38 -03:00
oobabooga
65aa11890f
Refactor everything ( #3481 )
2023-08-06 21:49:27 -03:00
oobabooga
3e70bce576
Properly format exceptions in the UI
2023-08-03 06:57:21 -07:00
Foxtr0t1337
85b3a26e25
Ignore values which are not strings in training.py ( #3287 )
2023-07-25 19:00:25 -03:00
FartyPants
9b55d3a9f9
More robust and error-tolerant training ( #3058 )
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix: Fixed the tokenization process of a raw dataset and improved its efficiency ( #3035 )
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training ( #2624 )
2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training ( #1997 )
...
* Update utils.py
Returns individual .txt files and subdirectories to getdatasets, allowing training from a directory of text files
* Update training.py
Minor tweak to raw-dataset training: detect when a directory is selected and, if so, load all the .txt files in that directory for training
* Update put-trainer-datasets-here.txt
document
* Minor change
* Use pathlib, sort by natural keys
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
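The "Use pathlib, sort by natural keys" step in the commit above can be sketched as follows. This is an illustrative reimplementation, not the repo's actual helpers; `natural_keys` and `gather_txt_files` are hypothetical names.

```python
# Hedged sketch: gather every .txt file under a dataset folder and sort
# them by "natural keys", so file2.txt sorts before file10.txt.
import re
from pathlib import Path


def natural_keys(text: str):
    """Split a string into text and integer chunks for human-friendly sorting."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r'(\d+)', text)]


def gather_txt_files(folder: str):
    """Return all .txt files under `folder` (recursively), naturally sorted."""
    return sorted(Path(folder).rglob('*.txt'),
                  key=lambda p: natural_keys(p.name))
```

With a plain lexicographic sort, `file10.txt` would come before `file2.txt`; splitting on digit runs and comparing the numeric chunks as integers fixes that.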
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection ( #1711 )
2023-07-12 11:26:45 -03:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names ( #2988 )
2023-07-03 17:41:18 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info ( #2944 )
2023-07-03 17:38:36 -03:00
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it ( #2902 )
2023-06-27 18:24:04 -03:00
FartyPants
21c189112c
Several Training Enhancements ( #2868 )
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
...
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
Forkoz
9ab90d8b60
Fix warning for qlora ( #2438 )
2023-05-30 11:09:18 -03:00
oobabooga
3a6e194bc7
Change a warning message
2023-05-29 22:39:23 -03:00
oobabooga
acfd876f29
Some qol changes to "Perplexity evaluation"
2023-05-25 15:06:22 -03:00
oobabooga
63ce5f9c28
Add back a missing bos token
2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on ( #2315 )
2023-05-24 12:43:22 -03:00
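The idea behind a train-only-after option is to mask the label tokens that precede a marker so the loss only covers the response portion. This is a minimal sketch of that masking technique, not the trainer's actual code; the function name and token ids are illustrative, and -100 is the conventional ignore index used by cross-entropy loss in common training setups.

```python
# Hedged sketch: copy input ids to labels, then blank out everything up to
# and including the first occurrence of the marker with the ignore index,
# so the loss is computed only on tokens after the marker.
def mask_labels_before(input_ids, marker_ids, ignore_index=-100):
    labels = list(input_ids)
    for i in range(len(input_ids) - len(marker_ids) + 1):
        if input_ids[i:i + len(marker_ids)] == marker_ids:
            # Mask the prompt and the marker itself.
            for j in range(i + len(marker_ids)):
                labels[j] = ignore_index
            break
    return labels
```

If the marker never appears, the labels are left untouched and the whole sequence contributes to the loss.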
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
Alex "mcmonkey" Goodwin
50c70e28f0
Lora Trainer improvements, part 6 - slightly better raw text inputs ( #2108 )
2023-05-19 12:58:54 -03:00
Forkoz
d205ec9706
Fix Training fails when evaluation dataset is selected ( #2099 )
...
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token ( #1678 )
...
Previously padding used the character "0" rather than token id 0 (<unk> in the case of LLaMA)
2023-05-02 23:16:08 -03:00
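The distinction in the fix above is between the string "0" (whatever token that character maps to) and the integer token id 0. A minimal sketch of the corrected behavior, with a hypothetical helper name:

```python
# Hedged sketch: pad a token id sequence with the integer pad id, not the
# character "0". Padding with the string would insert whatever id "0"
# tokenizes to instead of token id 0 (<unk> for LLaMA vocabularies).
def pad_right(ids, length, pad_id=0):
    return ids + [pad_id] * (length - len(ids))
```

Every element of the result stays an integer token id, which is what the training loop expects.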
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 ( #1546 )
...
* full dynamic model type support on modern peft
* remove shuffle option
2023-04-25 21:27:30 -03:00
Alex "mcmonkey" Goodwin
459e725af9
Lora trainer docs ( #1493 )
2023-04-23 12:54:41 -03:00
oobabooga
d46b9b7c50
Fix evaluate comment saving
2023-04-21 12:34:08 -03:00
oobabooga
c4f4f41389
Add an "Evaluate" tab to calculate the perplexities of models ( #1322 )
2023-04-21 00:20:33 -03:00
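For context on what the Evaluate tab computes: perplexity is the exponential of the average negative log-likelihood per token. This is a toy illustration of the formula only, not the tab's implementation; the probabilities stand in for a model's per-token predictions.

```python
# Hedged sketch: perplexity = exp(mean(-log p)) over per-token
# probabilities. A perfect model (p = 1 everywhere) scores 1.0;
# uniform coin-flip predictions (p = 0.5) score 2.0.
import math


def perplexity(token_probs):
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))
```

Lower is better: the value can be read as the effective branching factor the model is choosing between at each token.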
Alex "mcmonkey" Goodwin
ee30625cd1
4-Bit LoRA training + several new training options and fixes
2023-04-19 19:39:03 -03:00