Author | Commit | Message | Date
oobabooga | 257edf5f56 | Make the Default preset more reasonable (Credits: anonymous 4chan user who got it off "some twitter post or something someone linked, who even knows anymore") | 2023-03-19 12:30:51 -03:00
oobabooga | a78b6508fc | Make custom LoRAs work by default #385 | 2023-03-19 12:11:35 -03:00
oobabooga | 7073e96093 | Add back RWKV dependency #98 | 2023-03-19 12:05:28 -03:00
oobabooga | 217e1d9fdf | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-03-19 10:37:23 -03:00
oobabooga | c79fc69e95 | Fix the API example with streaming #417 | 2023-03-19 10:36:57 -03:00
oobabooga | 0cbe2dd7e9 | Update README.md | 2023-03-18 12:24:54 -03:00
oobabooga | 36ac7be76d | Merge pull request #407 from ThisIsPIRI/gitignore (Add loras to .gitignore) | 2023-03-18 11:57:10 -03:00
oobabooga | d2a7fac8ea | Use pip instead of conda for pytorch | 2023-03-18 11:56:04 -03:00
ThisIsPIRI | 705f513c4c | Add loras to .gitignore | 2023-03-18 23:33:24 +09:00
oobabooga | a0b1a30fd5 | Specify torchvision/torchaudio versions | 2023-03-18 11:23:56 -03:00
oobabooga | c753261338 | Disable stop_at_newline by default | 2023-03-18 10:55:57 -03:00
oobabooga | 7c945cfe8e | Don't include PeftModel every time | 2023-03-18 10:55:24 -03:00
oobabooga | 86b99006d9 | Remove rwkv dependency | 2023-03-18 10:27:52 -03:00
oobabooga | a163807f86 | Update README.md | 2023-03-18 03:07:27 -03:00
oobabooga | a7acfa4893 | Update README.md | 2023-03-17 22:57:46 -03:00
oobabooga | bcd8afd906 | Merge pull request #393 from WojtekKowaluk/mps_support (Fix for MPS support on Apple Silicon) | 2023-03-17 22:57:28 -03:00
oobabooga | e26763a510 | Minor changes | 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk | 7994b580d5 | clean up duplicated code | 2023-03-18 02:27:26 +01:00
oobabooga | dc35861184 | Update README.md | 2023-03-17 21:05:17 -03:00
Wojtek Kowaluk | 30939e2aee | add mps support on apple silicon | 2023-03-18 00:56:23 +01:00
Wojtek Kowaluk | 7d97da1dcb | add venv paths to gitignore | 2023-03-18 00:54:17 +01:00
oobabooga | f2a5ca7d49 | Update README.md | 2023-03-17 20:50:27 -03:00
oobabooga | 8c8286b0e6 | Update README.md | 2023-03-17 20:49:40 -03:00
oobabooga | 0c05e65e5c | Update README.md | 2023-03-17 20:25:42 -03:00
oobabooga | adc200318a | Merge branch 'main' of github.com:oobabooga/text-generation-webui | 2023-03-17 20:19:33 -03:00
oobabooga | 20f5b455bf | Add parameters reference #386 #331 | 2023-03-17 20:19:04 -03:00
oobabooga | 66e8d12354 | Update README.md | 2023-03-17 19:59:37 -03:00
oobabooga | 9a871117d7 | Update README.md | 2023-03-17 19:52:22 -03:00
oobabooga | d4f38b6a1f | Update README.md | 2023-03-17 18:57:48 -03:00
oobabooga | ad7c829953 | Update README.md | 2023-03-17 18:55:01 -03:00
oobabooga | 4426f941e0 | Update the installation instructions. Tldr use WSL | 2023-03-17 18:51:07 -03:00
oobabooga | 9256e937d6 | Add some LoRA params | 2023-03-17 17:45:28 -03:00
oobabooga | 9ed2c4501c | Use markdown in the "HTML" tab | 2023-03-17 16:06:11 -03:00
oobabooga | f0b26451b4 | Add a comment | 2023-03-17 13:07:17 -03:00
oobabooga | 7da742e149 | Merge pull request #207 from EliasVincent/stt-extension (Extension: Whisper Speech-To-Text Input) | 2023-03-17 12:37:23 -03:00
oobabooga | ebef4a510b | Update README | 2023-03-17 11:58:45 -03:00
oobabooga | cdfa787bcb | Update README | 2023-03-17 11:53:28 -03:00
oobabooga | 3bda907727 | Merge pull request #366 from oobabooga/lora (Add LoRA support) | 2023-03-17 11:48:48 -03:00
oobabooga | 614dad0075 | Remove unused import | 2023-03-17 11:43:11 -03:00
oobabooga | a717fd709d | Sort the imports | 2023-03-17 11:42:25 -03:00
oobabooga | 7d97287e69 | Update settings-template.json | 2023-03-17 11:41:12 -03:00
oobabooga | 29fe7b1c74 | Remove LoRA tab, move it into the Parameters menu | 2023-03-17 11:39:48 -03:00
oobabooga | 214dc6868e | Several QoL changes related to LoRA | 2023-03-17 11:24:52 -03:00
oobabooga | 4c130679c7 | Merge pull request #377 from askmyteapot/Fix-Multi-gpu-GPTQ-Llama-no-tokens (Update GPTQ_Loader.py) | 2023-03-17 09:47:57 -03:00
askmyteapot | 53b6a66beb | Update GPTQ_Loader.py (Correcting decoder layer for renamed class.) | 2023-03-17 18:34:13 +10:00
oobabooga | 0cecfc684c | Add files | 2023-03-16 21:35:53 -03:00
oobabooga | 104293f411 | Add LoRA support | 2023-03-16 21:31:39 -03:00
oobabooga | ee164d1821 | Don't split the layers in 8-bit mode by default | 2023-03-16 18:22:16 -03:00
oobabooga | 0a2aa79c4e | Merge pull request #358 from mayaeary/8bit-offload (Add support for memory maps with --load-in-8bit) | 2023-03-16 17:27:03 -03:00
oobabooga | e085cb4333 | Small changes | 2023-03-16 13:34:23 -03:00