Author | Commit | Message | Date
oobabooga | 4e188eeb80 | Lint | 2024-02-03 20:40:10 -08:00
oobabooga | cde000d478 | Remove non-HF ExLlamaV2 loader (#5431) | 2024-02-04 01:15:51 -03:00
oobabooga | 0e54a09bcb | Remove exllamav1 loaders (#5128) | 2023-12-31 01:57:06 -03:00
luna | 6efbe3009f | let exllama v1 models load safetensor loras (#4854) | 2023-12-20 13:29:19 -03:00
oobabooga | 9992f7d8c0 | Improve several log messages | 2023-12-19 20:54:32 -08:00
oobabooga | 9da7bb203d | Minor LoRA bug fix | 2023-11-19 07:59:29 -08:00
oobabooga | a6f1e1bcc5 | Fix PEFT LoRA unloading | 2023-11-19 07:55:25 -08:00
Abhilash Majumder | 778a010df8 | Intel Gpu support initialization (#4340) | 2023-10-26 23:39:51 -03:00
oobabooga | 280ae720d7 | Organize | 2023-10-23 13:07:17 -07:00
Googulator | d0c3b407b3 | transformers loader: multi-LoRAs support (#3120) | 2023-10-22 16:06:22 -03:00
Forkoz | 8cce1f1126 | Exllamav2 lora support (#4229) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2023-10-14 16:12:41 -03:00
oobabooga | 0789554f65 | Allow --lora to use an absolute path | 2023-08-10 10:03:12 -07:00
appe233 | 89e0d15cf5 | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00
oobabooga | e202190c4f | lint | 2023-07-12 11:33:25 -07:00
Forkoz | 74ea7522a0 | Lora fixes for AutoGPTQ (#2818) | 2023-07-09 01:03:43 -03:00
Fernando Tarin Morales | d7e14e1f78 | Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) | 2023-07-07 02:24:07 -03:00
FartyPants | c23c88ee4c | Update LoRA.py - avoid potential error (#2953) | 2023-07-03 17:40:22 -03:00
oobabooga | 22d455b072 | Add LoRA support to ExLlama_HF | 2023-06-26 00:10:33 -03:00
jllllll | bef67af23c | Use pre-compiled python module for ExLlama (#2770) | 2023-06-24 20:24:17 -03:00
oobabooga | eb30f4441f | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00
oobabooga | 7ef6a50e84 | Reorganize model loading UI completely (#2720) | 2023-06-16 19:00:37 -03:00
FartyPants | 56c19e623c | Add LORA name instead of "default" in PeftModel (#2689) | 2023-06-14 18:29:42 -03:00
oobabooga | f040073ef1 | Handle the case of older autogptq install | 2023-06-06 13:05:05 -03:00
oobabooga | 11f38b5c2b | Add AutoGPTQ LoRA support | 2023-06-05 23:32:57 -03:00
oobabooga | e116d31180 | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00
Clay Shoaf | 79ac94cc2f | fixed LoRA loading issue (#1865) | 2023-05-08 16:21:55 -03:00
oobabooga | 95d04d6a8d | Better warning messages | 2023-05-03 21:43:17 -03:00
oobabooga | f39c99fa14 | Load more than one LoRA with --lora, fix a bug | 2023-04-25 22:58:48 -03:00
oobabooga | 9b272bc8e5 | Monkey patch fixes | 2023-04-25 21:20:26 -03:00
oobabooga | 39099663a0 | Add 4-bit LoRA support (#1200) | 2023-04-16 23:26:52 -03:00
Alex "mcmonkey" Goodwin | 64e3b44e0f | initial multi-lora support (#1103) (Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>) | 2023-04-14 14:52:06 -03:00
Φφ | ffd102e5c0 | SD Api Pics extension, v.1.1 (#596) | 2023-04-07 21:36:04 -03:00
oobabooga | ea6e77df72 | Make the code more like PEP8 for readability (#862) | 2023-04-07 00:15:45 -03:00
oobabooga | a21e580782 | Move an import | 2023-03-29 22:50:58 -03:00
oobabooga | fde92048af | Merge branch 'main' into catalpaaa-lora-and-model-dir | 2023-03-27 23:16:44 -03:00
oobabooga | 3dc61284d5 | Handle unloading LoRA from dropdown menu icon | 2023-03-27 00:04:43 -03:00
catalpaaa | f740ee558c | Merge branch 'oobabooga:main' into lora-and-model-dir | 2023-03-25 01:28:33 -07:00
oobabooga | 25be9698c7 | Fix LoRA on mps | 2023-03-25 01:18:32 -03:00
catalpaaa | b37c54edcf | lora-dir, model-dir and login auth (adds lora-dir and model-dir arguments, plus a login-auth argument that points to a file containing usernames and passwords in the "u:pw,u:pw,..." format) | 2023-03-24 17:30:18 -07:00
oobabooga | b0abb327d8 | Update LoRA.py | 2023-03-23 22:02:09 -03:00
oobabooga | bf22d16ebc | Clear cache while switching LoRAs | 2023-03-23 21:56:26 -03:00
oobabooga | 9bf6ecf9e2 | Fix LoRA device map (attempt) | 2023-03-23 16:49:41 -03:00
oobabooga | 29bd41d453 | Fix LoRA in CPU mode | 2023-03-23 01:05:13 -03:00
oobabooga | eac27f4f55 | Make LoRAs work in 16-bit mode | 2023-03-23 00:55:33 -03:00
oobabooga | a78b6508fc | Make custom LoRAs work by default #385 | 2023-03-19 12:11:35 -03:00
oobabooga | 7c945cfe8e | Don't include PeftModel every time | 2023-03-18 10:55:24 -03:00
oobabooga | 9256e937d6 | Add some LoRA params | 2023-03-17 17:45:28 -03:00
oobabooga | f0b26451b4 | Add a comment | 2023-03-17 13:07:17 -03:00
oobabooga | 614dad0075 | Remove unused import | 2023-03-17 11:43:11 -03:00
oobabooga | 29fe7b1c74 | Remove LoRA tab, move it into the Parameters menu | 2023-03-17 11:39:48 -03:00
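Commit b37c54edcf above describes a login-auth file holding credentials in the "u:pw,u:pw,..." format. A minimal sketch of parsing that format in Python; the name `parse_credentials` is a hypothetical helper for illustration, not the function used in the repository:

```python
def parse_credentials(text: str) -> list[tuple[str, str]]:
    """Split a "u:pw,u:pw,..." credential string into (username, password) pairs.

    Only the first ":" in each entry separates the username from the
    password, so passwords may themselves contain ":".
    """
    pairs = []
    for entry in text.strip().split(","):
        if not entry:
            continue  # tolerate a trailing comma
        user, _, password = entry.partition(":")
        pairs.append((user, password))
    return pairs


print(parse_credentials("alice:secret,bob:hunter2"))
# [('alice', 'secret'), ('bob', 'hunter2')]
```

A list of (username, password) tuples is the shape Gradio's `launch(auth=...)` parameter accepts, which is presumably why the web UI stores credentials this way.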