oobabooga | c2cad30772 | Merge branch 'main' into mcmonkey4eva-add-train-lora-tab | 2023-03-27 21:05:44 -03:00

Alex "mcmonkey" Goodwin | 9ced75746d | add total time estimate | 2023-03-27 10:57:27 -07:00
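A total time estimate like this boils down to proportional arithmetic: measure the average time per completed step and project it over the steps that remain. A minimal sketch of that calculation (the function name is illustrative, not the commit's actual code):

```
import time

def eta_seconds(start_time: float, steps_done: int, total_steps: int) -> float:
    # average seconds per finished step, projected over the steps that remain
    elapsed = time.time() - start_time
    if steps_done == 0:
        return float("inf")  # no data yet
    return (elapsed / steps_done) * (total_steps - steps_done)
```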
Alex "mcmonkey" Goodwin
|
16ea4fc36d
|
interrupt button
|
2023-03-27 10:43:01 -07:00 |
|
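An interrupt button for a Hugging Face `Trainer` run is usually wired as a cooperative flag checked between steps. A sketch under that assumption (the flag and callback names here are illustrative, not necessarily the commit's):

```
import transformers

WANT_INTERRUPT = False  # flipped to True by the UI button's click handler

def request_interrupt():
    global WANT_INTERRUPT
    WANT_INTERRUPT = True

class InterruptCallback(transformers.TrainerCallback):
    def on_step_end(self, args, state, control, **kwargs):
        if WANT_INTERRUPT:
            # ask the Trainer to stop cleanly after the current step
            control.should_training_stop = True
```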
Alex "mcmonkey" Goodwin
|
8fc723fc95
|
initial progress tracker in UI
|
2023-03-27 10:25:08 -07:00 |
|
oobabooga
|
48a6c9513e
|
Merge pull request #572 from clusterfudge/issues/571
Potential fix for issues/571
|
2023-03-27 14:06:38 -03:00 |
|
Alex "mcmonkey" Goodwin
|
c07bcd0850
|
add some outputs to indicate progress updates (sorta)
Actual progressbar still needed. Also minor formatting fixes.
|
2023-03-27 09:41:06 -07:00 |
|
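In Gradio, incremental status text like this is typically produced by making the click handler a generator: each `yield` replaces the bound output component's value. A small sketch of the pattern (a hypothetical handler, not the commit's code):

```
import time

def do_train():
    # when bound via button.click(do_train, outputs=status_box),
    # each yield replaces the textbox contents with a new status line
    yield "Preparing dataset..."
    for step in range(1, 4):
        time.sleep(1)  # stand-in for real work
        yield f"Training... step {step}/3"
    yield "Done!"
```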
oobabooga | af65c12900 | Change Stop button behavior | 2023-03-27 13:23:59 -03:00

Alex "mcmonkey" Goodwin | d911c22af9 | use shared rows to make the LoRA Trainer interface a bit more compact / clean | 2023-03-27 08:31:49 -07:00

Alex "mcmonkey" Goodwin | e439228ed8 | Merge branch 'main' into add-train-lora-tab | 2023-03-27 08:21:19 -07:00

oobabooga | 3dc61284d5 | Handle unloading LoRA from dropdown menu icon | 2023-03-27 00:04:43 -03:00

oobabooga | 1c77fdca4c | Change notebook mode appearance | 2023-03-26 22:20:30 -03:00

oobabooga | 49c10c5570 | Add support for the latest GPTQ models with group-size (#530) | 2023-03-26 00:11:33 -03:00
    **Warning: old 4-bit weights will not work anymore!**
    See here for how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights

Sean Fitzgerald | 0bac80d9eb | Potential fix for issues/571 | 2023-03-25 13:08:45 -07:00

Alex "mcmonkey" Goodwin | f1ba2196b1 | make 'model' variables less ambiguous | 2023-03-25 12:57:36 -07:00

Alex "mcmonkey" Goodwin | 8da237223e | document options better | 2023-03-25 12:48:35 -07:00

Alex "mcmonkey" Goodwin | 5c49a0dcd0 | fix error from prepare call running twice in a row | 2023-03-25 12:37:32 -07:00

Alex "mcmonkey" Goodwin | 7bf601107c | automatically strip empty data entries (for better alpaca dataset compat) | 2023-03-25 12:28:46 -07:00
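Alpaca-style records carry an `input` field that is often an empty string; stripping such fields lets a prompt template branch on key presence instead of testing for blankness. A minimal sketch of the idea (the helper name is illustrative):

```
def strip_empty_fields(record: dict) -> dict:
    # drop keys whose values are empty or whitespace-only strings
    return {k: v for k, v in record.items()
            if not (isinstance(v, str) and v.strip() == "")}

sample = {"instruction": "Summarize the text.", "input": "", "output": "..."}
assert "input" not in strip_empty_fields(sample)
```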
Alex "mcmonkey" Goodwin
|
566898a79a
|
initial lora training tab
|
2023-03-25 12:08:26 -07:00 |
|
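Under the hood, a LoRA training tab like this one would drive the PEFT library: wrap the loaded base model with a `LoraConfig`, then train only the adapter weights. A sketch of that setup with typical hyperparameters (the exact values the tab exposes are assumptions here):

```
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
# base_model is assumed to be an already-loaded transformers causal LM
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```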
oobabooga | 8c8e8b4450 | Fix the early stopping callback #559 | 2023-03-25 12:35:52 -03:00

oobabooga | a1f12d607f | Merge pull request #538 from Ph0rk0z/display-input-context | 2023-03-25 11:56:18 -03:00
    Add display of context when input was generated

oobabooga | 25be9698c7 | Fix LoRA on mps | 2023-03-25 01:18:32 -03:00

oobabooga | 3da633a497 | Merge pull request #529 from EyeDeck/main | 2023-03-24 23:51:01 -03:00
    Allow loading of .safetensors through GPTQ-for-LLaMa

oobabooga | 9fa47c0eed | Revert GPTQ_loader.py (accident) | 2023-03-24 19:57:12 -03:00

oobabooga | a6bf54739c | Revert models.py (accident) | 2023-03-24 19:56:45 -03:00

oobabooga | 0a16224451 | Update GPTQ_loader.py | 2023-03-24 19:54:36 -03:00

oobabooga | a80aa65986 | Update models.py | 2023-03-24 19:53:20 -03:00

oobabooga | 507db0929d | Do not use empty user messages in chat mode | 2023-03-24 17:22:22 -03:00
    This lets the bot send a message on its own when the user clicks Generate with an empty input.
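The behavior described above amounts to skipping the user's turn in the prompt when the textbox is blank, so the model simply continues the conversation. A tiny sketch of the idea (a hypothetical helper, not the repo's code):

```
def render_user_turn(user_name: str, text: str) -> str:
    # an empty input contributes no "You:" line, so the bot speaks next unprompted
    if text.strip() == "":
        return ""
    return f"{user_name}: {text}\n"
```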
oobabooga | 6e1b16c2aa | Update html_generator.py | 2023-03-24 17:18:27 -03:00

oobabooga | ffb0187e83 | Update chat.py | 2023-03-24 17:17:29 -03:00

oobabooga | bfe960731f | Merge branch 'main' into fix/api-reload | 2023-03-24 16:54:41 -03:00

oobabooga | 8fad84abc2 | Update extensions.py | 2023-03-24 16:51:27 -03:00

Forkoz | b740c5b284 | Add display of context when input was generated | 2023-03-24 08:56:07 -05:00
    Not sure if I did this right, but it does move with the conversation and seems to match the value.

oobabooga | 4f5c2ce785 | Fix chat_generation_attempts | 2023-03-24 02:03:30 -03:00

EyeDeck | dcfd866402 | Allow loading of .safetensors through GPTQ-for-LLaMa | 2023-03-23 21:31:34 -04:00
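Supporting `.safetensors` checkpoints alongside `.pt` mostly means widening the file lookup and using the matching loader. A sketch of the lookup half, assuming quantized checkpoints sit in the model's directory (the helper name is illustrative):

```
from pathlib import Path

def find_quantized_checkpoint(model_dir: Path):
    # prefer the safer zero-copy format when both are present
    for ext in (".safetensors", ".pt"):
        matches = sorted(model_dir.glob(f"*{ext}"))
        if matches:
            return matches[0]
    return None
```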
oobabooga | 8747c74339 | Another missing import | 2023-03-23 22:19:01 -03:00

oobabooga | 7078d168c3 | Missing import | 2023-03-23 22:16:08 -03:00

oobabooga | d1327f99f9 | Fix broken callbacks.py | 2023-03-23 22:12:24 -03:00

oobabooga | b0abb327d8 | Update LoRA.py | 2023-03-23 22:02:09 -03:00

oobabooga | bf22d16ebc | Clear cache while switching LoRAs | 2023-03-23 21:56:26 -03:00
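Clearing the cache between LoRA switches is the usual Python-plus-CUDA two-step: drop references, run the garbage collector, then release PyTorch's cached CUDA blocks. A minimal sketch, assuming a CUDA build of PyTorch:

```
import gc
import torch

def clear_torch_cache():
    gc.collect()                  # free Python-side references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached blocks to the driver
```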
oobabooga | 4578e88ffd | Stop the bot from talking for you in chat mode | 2023-03-23 21:38:20 -03:00
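Keeping the bot from writing the user's next line is typically done with a `transformers` stopping criterion that halts generation as soon as a sentinel such as "\nYou:" is emitted. A sketch of that mechanism (simplified relative to whatever the webui actually does):

```
import torch
import transformers

class SentinelStoppingCriteria(transformers.StoppingCriteria):
    def __init__(self, sentinel_ids: torch.LongTensor, prompt_len: int):
        self.sentinel_ids = sentinel_ids  # token ids of e.g. "\nYou:"
        self.prompt_len = prompt_len      # only scan tokens generated after the prompt

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        generated = input_ids[0][self.prompt_len:]
        n = self.sentinel_ids.shape[-1]
        if generated.shape[-1] < n:
            return False
        # stop once the most recent tokens match the sentinel
        return torch.equal(generated[-n:], self.sentinel_ids)
```

An instance would be passed to `model.generate(...)` inside a `transformers.StoppingCriteriaList` via the `stopping_criteria` argument.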
oobabooga | 9bf6ecf9e2 | Fix LoRA device map (attempt) | 2023-03-23 16:49:41 -03:00

oobabooga | c5ebcc5f7e | Change the default names (#518) | 2023-03-23 13:36:00 -03:00
    * Update shared.py
    * Update settings-template.json

oobabooga | 29bd41d453 | Fix LoRA in CPU mode | 2023-03-23 01:05:13 -03:00

oobabooga | eac27f4f55 | Make LoRAs work in 16-bit mode | 2023-03-23 00:55:33 -03:00
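Attaching a LoRA with PEFT and running in 16-bit means casting the combined model to half precision once the adapter is loaded. A sketch under that assumption (the path and the cast placement are illustrative, not the repo's exact code):

```
import torch
from peft import PeftModel

# model is assumed to be an already-loaded transformers causal LM
model = PeftModel.from_pretrained(model, "loras/my-lora")
if torch.cuda.is_available():
    model = model.half().cuda()  # 16-bit weights on the GPU
```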
oobabooga | bfa81e105e | Fix FlexGen streaming | 2023-03-23 00:22:14 -03:00

oobabooga | de6a09dc7f | Properly separate the original prompt from the reply | 2023-03-23 00:12:40 -03:00
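The standard way to separate the echoed prompt from the reply is to slice the generated ids at the prompt length before decoding, rather than string-matching on decoded text. A minimal sketch (variables assume a typical `transformers` generate call; this may not be the commit's exact approach):

```
# input_ids: the encoded prompt, shape (1, prompt_len)
output_ids = model.generate(input_ids, max_new_tokens=200)
new_tokens = output_ids[0][input_ids.shape[1]:]  # drop the echoed prompt
reply = tokenizer.decode(new_tokens, skip_special_tokens=True)
```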
wywywywy | 61346b88ea | Add "seed" menu in the Parameters tab | 2023-03-22 15:40:20 -03:00
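A seed control usually treats -1 as "pick a random seed" and otherwise seeds every backend in play. A minimal sketch of that convention (the function name is illustrative):

```
import random
import torch

def set_manual_seed(seed: int) -> int:
    if seed == -1:
        seed = random.randint(1, 2**31)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    return seed  # report the seed actually used
```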
oobabooga | 45b7e53565 | Only catch proper Exceptions in the text generation function | 2023-03-20 20:36:02 -03:00
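The distinction here is `except Exception` versus a bare `except:`: the bare form also swallows `KeyboardInterrupt` and `SystemExit`, which makes a generation loop impossible to interrupt. A minimal illustration:

```
import traceback

def generate_safely(fn):
    try:
        return fn()
    except Exception:
        # real errors are logged; KeyboardInterrupt/SystemExit still propagate
        traceback.print_exc()
        return None
```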
oobabooga | db4219a340 | Update comments | 2023-03-20 16:40:08 -03:00

oobabooga | 7618f3fe8c | Add -gptq-preload for 4-bit offloading (#460) | 2023-03-20 16:30:56 -03:00
    This works on a 4GB card now:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```