Commit Graph

354 Commits

Author SHA1 Message Date
Alex "mcmonkey" Goodwin
d911c22af9 use shared rows to make the LoRA Trainer interface a bit more compact / clean 2023-03-27 08:31:49 -07:00
Alex "mcmonkey" Goodwin
e439228ed8 Merge branch 'main' into add-train-lora-tab 2023-03-27 08:21:19 -07:00
oobabooga
3dc61284d5 Handle unloading LoRA from dropdown menu icon 2023-03-27 00:04:43 -03:00
oobabooga
1c77fdca4c Change notebook mode appearance 2023-03-26 22:20:30 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here for how to get up-to-date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
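A hedged launch sketch for the re-converted weights: the checkpoint name below is a placeholder following the group-size naming used on the wiki page above, and only flags documented elsewhere in this log are shown.

```
# Placeholder model name; use weights obtained via the wiki page linked above.
python server.py --model llama-7b-4bit-128g --gptq-bits 4
```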
Sean Fitzgerald
0bac80d9eb Potential fix for issues/571 2023-03-25 13:08:45 -07:00
Alex "mcmonkey" Goodwin
f1ba2196b1 make 'model' variables less ambiguous 2023-03-25 12:57:36 -07:00
Alex "mcmonkey" Goodwin
8da237223e document options better 2023-03-25 12:48:35 -07:00
Alex "mcmonkey" Goodwin
5c49a0dcd0 fix error from prepare call running twice in a row 2023-03-25 12:37:32 -07:00
Alex "mcmonkey" Goodwin
7bf601107c automatically strip empty data entries (for better alpaca dataset compat) 2023-03-25 12:28:46 -07:00
Alex "mcmonkey" Goodwin
566898a79a initial lora training tab 2023-03-25 12:08:26 -07:00
oobabooga
8c8e8b4450
Fix the early stopping callback #559 2023-03-25 12:35:52 -03:00
oobabooga
a1f12d607f
Merge pull request #538 from Ph0rk0z/display-input-context
Add display of context when input was generated
2023-03-25 11:56:18 -03:00
catalpaaa
f740ee558c
Merge branch 'oobabooga:main' into lora-and-model-dir 2023-03-25 01:28:33 -07:00
oobabooga
25be9698c7
Fix LoRA on mps 2023-03-25 01:18:32 -03:00
oobabooga
3da633a497
Merge pull request #529 from EyeDeck/main
Allow loading of .safetensors through GPTQ-for-LLaMa
2023-03-24 23:51:01 -03:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
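A hedged usage sketch: the flag names --model-dir, --lora-dir, and --gradio-auth-path are assumed from the repository's CLI, and the paths and credentials are placeholders.

```
# Credentials file holds comma-separated "user:password" pairs.
echo "alice:secret1,bob:secret2" > gradio_auth.txt
python server.py --model-dir /data/models --lora-dir /data/loras --gradio-auth-path gradio_auth.txt
```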
oobabooga
9fa47c0eed
Revert GPTQ_loader.py (accident) 2023-03-24 19:57:12 -03:00
oobabooga
a6bf54739c
Revert models.py (accident) 2023-03-24 19:56:45 -03:00
oobabooga
0a16224451
Update GPTQ_loader.py 2023-03-24 19:54:36 -03:00
oobabooga
a80aa65986
Update models.py 2023-03-24 19:53:20 -03:00
oobabooga
507db0929d
Do not use empty user messages in chat mode
This allows the bot to send a message when you click Generate with an empty input.
2023-03-24 17:22:22 -03:00
oobabooga
6e1b16c2aa
Update html_generator.py 2023-03-24 17:18:27 -03:00
oobabooga
ffb0187e83
Update chat.py 2023-03-24 17:17:29 -03:00
oobabooga
bfe960731f
Merge branch 'main' into fix/api-reload 2023-03-24 16:54:41 -03:00
oobabooga
8fad84abc2
Update extensions.py 2023-03-24 16:51:27 -03:00
Forkoz
b740c5b284
Add display of context when input was generated
Not sure if I did this right, but it does move with the conversation and seems to match the value.
2023-03-24 08:56:07 -05:00
oobabooga
4f5c2ce785
Fix chat_generation_attempts 2023-03-24 02:03:30 -03:00
EyeDeck
dcfd866402 Allow loading of .safetensors through GPTQ-for-LLaMa 2023-03-23 21:31:34 -04:00
oobabooga
8747c74339
Another missing import 2023-03-23 22:19:01 -03:00
oobabooga
7078d168c3
Missing import 2023-03-23 22:16:08 -03:00
oobabooga
d1327f99f9
Fix broken callbacks.py 2023-03-23 22:12:24 -03:00
oobabooga
b0abb327d8
Update LoRA.py 2023-03-23 22:02:09 -03:00
oobabooga
bf22d16ebc
Clear cache while switching LoRAs 2023-03-23 21:56:26 -03:00
oobabooga
4578e88ffd
Stop the bot from talking for you in chat mode 2023-03-23 21:38:20 -03:00
oobabooga
9bf6ecf9e2
Fix LoRA device map (attempt) 2023-03-23 16:49:41 -03:00
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
29bd41d453
Fix LoRA in CPU mode 2023-03-23 01:05:13 -03:00
oobabooga
eac27f4f55
Make LoRAs work in 16-bit mode 2023-03-23 00:55:33 -03:00
oobabooga
bfa81e105e
Fix FlexGen streaming 2023-03-23 00:22:14 -03:00
oobabooga
de6a09dc7f
Properly separate the original prompt from the reply 2023-03-23 00:12:40 -03:00
wywywywy
61346b88ea
Add "seed" menu in the Parameters tab 2023-03-22 15:40:20 -03:00
oobabooga
45b7e53565
Only catch proper Exceptions in the text generation function 2023-03-20 20:36:02 -03:00
oobabooga
db4219a340
Update comments 2023-03-20 16:40:08 -03:00
oobabooga
7618f3fe8c
Add --gptq-pre-layer for 4-bit offloading (#460)
This now works on a 4 GB card:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
Vladimir Belitskiy
e96687b1d6 Do not send empty user input as part of the prompt.
However, if extensions modify the empty prompt to be non-empty,
it'll still work as before.
2023-03-20 14:27:39 -04:00
oobabooga
9a3bed50c3
Attempt at fixing 4-bit with CPU offload 2023-03-20 15:11:56 -03:00
Vladimir Belitskiy
ca47e016b4
Do not display empty user messages in chat mode.
There doesn't seem to be much value to them - they just take up space while also making it seem like there's still some sort of pseudo-dialogue going on, instead of a monologue by the bot.
2023-03-20 12:55:57 -04:00
oobabooga
75a7a84ef2
Exception handling (#454)
* Update text_generation.py
* Update extensions.py
2023-03-20 13:36:52 -03:00
oobabooga
ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
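A hedged usage sketch (the model name is a placeholder): the MiB suffix allows finer caps than whole-GiB values.

```
# Cap VRAM at 3500 MiB and disable the cache for additional savings.
python server.py --model llama-7b-hf --gpu-memory 3500MiB --no-cache
```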
oobabooga
a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
Maya
acdbd6b708 Check if app should display extensions ui 2023-03-19 13:31:21 +00:00
Maya
81c9d130f2 Fix global 2023-03-19 13:25:49 +00:00
Maya
099d7a844b Add setup method to extensions 2023-03-19 13:22:24 +00:00
oobabooga
c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga
7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga
e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk
7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
Wojtek Kowaluk
30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
oobabooga
9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga
9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga
f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga
3bda907727
Merge pull request #366 from oobabooga/lora
Add LoRA support
2023-03-17 11:48:48 -03:00
oobabooga
614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga
a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
askmyteapot
53b6a66beb
Update GPTQ_Loader.py
Correct the decoder layer for the renamed class.
2023-03-17 18:34:13 +10:00
oobabooga
0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
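A minimal hedged sketch of loading a LoRA at launch; the model and adapter names are placeholders.

```
# Apply a LoRA adapter on top of a base model at startup.
python server.py --model llama-7b-hf --lora alpaca-lora-7b
```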
oobabooga
ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga
e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
awoo
83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00
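A hedged sketch of combining the two flags (model name and memory value are placeholders):

```
# 8-bit loading with an explicit per-GPU memory cap (in GiB).
python server.py --model opt-6.7b --load-in-8bit --gpu-memory 10
```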
oobabooga
1c378965e1 Remove unused imports 2023-03-16 10:18:34 -03:00
oobabooga
a577fb1077 Keep GALACTICA special tokens (#300) 2023-03-16 00:46:59 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
66256ac1dd Make the "no GPU has been detected" message more descriptive 2023-03-15 19:31:27 -03:00
oobabooga
c1959c26ee Show/hide the extensions block using javascript 2023-03-15 16:35:28 -03:00
oobabooga
348596f634 Fix broken extensions 2023-03-15 15:11:16 -03:00
oobabooga
c5f14fb9b8 Optimize the HTML generation speed 2023-03-15 14:19:28 -03:00
oobabooga
bf812c4893 Minor fix 2023-03-15 14:05:35 -03:00
oobabooga
05ee323ce5 Rename a file 2023-03-15 13:26:32 -03:00
oobabooga
d30a14087f Further reorganize the UI 2023-03-15 13:24:54 -03:00
oobabooga
cf2da86352 Prevent *Is typing* from disappearing instantly while streaming 2023-03-15 12:51:13 -03:00
oobabooga
ec972b85d1 Move all css/js into separate files 2023-03-15 12:35:11 -03:00
oobabooga
693b53d957 Merge branch 'main' into HideLord-main 2023-03-15 12:08:56 -03:00
oobabooga
1413931705 Add a header bar and redesign the interface (#293) 2023-03-15 12:01:32 -03:00
oobabooga
9d6a625bd6 Add 'hallucinations' filter #326
This breaks the API since a new parameter has been added.
It should be a one-line fix. See api-example.py.
2023-03-15 11:10:35 -03:00
oobabooga
afc5339510
Remove "eval" statements from text generation functions 2023-03-14 16:04:17 -03:00
oobabooga
265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16 Implement "*Is recording a voice message...*" for TTS #303 2023-03-13 22:28:00 -03:00
oobabooga
518e5c4244 Some minor fixes to the GPTQ loader 2023-03-13 16:45:08 -03:00
Ayanami Rei
8778b756e6 use updated load_quantized 2023-03-13 22:11:40 +03:00
Ayanami Rei
a6a6522b6a determine model type from model name 2023-03-13 22:11:32 +03:00
Ayanami Rei
b6c5c57f2e remove default value from argument 2023-03-13 22:11:08 +03:00
Alexander Hristov Hristov
63c5a139a2
Merge branch 'main' into main 2023-03-13 19:50:08 +02:00
Ayanami Rei
e1c952c41c make argument case-insensitive 2023-03-13 20:22:38 +03:00
Ayanami Rei
3c9afd5ca3 rename method 2023-03-13 20:14:40 +03:00
Ayanami Rei
1b99ed61bc add argument --gptq-model-type and remove duplicate arguments 2023-03-13 20:01:34 +03:00
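A hedged sketch of the consolidated argument; the model name is a placeholder, and the accepted type strings (e.g. "llama", "opt") are assumptions based on the models supported at the time.

```
# --gptq-model-type selects the quantized architecture explicitly.
python server.py --model opt-13b-4bit --gptq-bits 4 --gptq-model-type opt
```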