oobabooga | 7d97287e69 | Update settings-template.json | 2023-03-17 11:41:12 -03:00
oobabooga | 214dc6868e | Several QoL changes related to LoRA | 2023-03-17 11:24:52 -03:00
oobabooga | 0ac562bdba | Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 | 2023-03-12 10:46:16 -03:00
oobabooga | 169209805d | Model-aware prompts and presets | 2023-03-02 11:25:04 -03:00
oobabooga | 7c2babfe39 | Rename greed to "generation attempts" | 2023-02-25 01:42:19 -03:00
oobabooga | 7be372829d | Set chat prompt size in tokens | 2023-02-15 10:18:50 -03:00
oobabooga | d0ea6d5f86 | Make the maximum history size in prompt unlimited by default | 2023-01-22 17:17:35 -03:00
oobabooga | deacb96c34 | Change the pygmalion default context | 2023-01-22 00:49:59 -03:00
oobabooga | 185587a33e | Add a history size parameter to the chat | 2023-01-20 17:03:09 -03:00
    If too many messages are used in the prompt, the model gets really slow. It is useful to have the ability to limit this.
oobabooga | e61138bdad | Minor fixes | 2023-01-19 19:04:54 -03:00
oobabooga | c6083f3dca | Fix the template | 2023-01-15 15:57:00 -03:00
oobabooga | 88d67427e1 | Implement default settings customization using a json file | 2023-01-15 15:23:41 -03:00