Commit Graph

2760 Commits

Author SHA1 Message Date
oobabooga c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga 7c945cfe8e Don't include PeftModel every time 2023-03-18 10:55:24 -03:00
oobabooga 86b99006d9 Remove rwkv dependency 2023-03-18 10:27:52 -03:00
oobabooga a163807f86 Update README.md 2023-03-18 03:07:27 -03:00
oobabooga a7acfa4893 Update README.md 2023-03-17 22:57:46 -03:00
oobabooga bcd8afd906 Merge pull request #393 from WojtekKowaluk/mps_support (Fix for MPS support on Apple Silicon) 2023-03-17 22:57:28 -03:00
oobabooga e26763a510 Minor changes 2023-03-17 22:56:46 -03:00
Wojtek Kowaluk 7994b580d5 clean up duplicated code 2023-03-18 02:27:26 +01:00
oobabooga dc35861184 Update README.md 2023-03-17 21:05:17 -03:00
Wojtek Kowaluk 30939e2aee add mps support on apple silicon 2023-03-18 00:56:23 +01:00
Wojtek Kowaluk 7d97da1dcb add venv paths to gitignore 2023-03-18 00:54:17 +01:00
oobabooga f2a5ca7d49 Update README.md 2023-03-17 20:50:27 -03:00
oobabooga 8c8286b0e6 Update README.md 2023-03-17 20:49:40 -03:00
oobabooga 91371640f9 Use the official instructions (https://pytorch.org/get-started/locally/) 2023-03-17 20:37:25 -03:00
oobabooga 0c05e65e5c Update README.md 2023-03-17 20:25:42 -03:00
oobabooga adc200318a Merge branch 'main' of github.com:oobabooga/text-generation-webui 2023-03-17 20:19:33 -03:00
oobabooga 20f5b455bf Add parameters reference #386 #331 2023-03-17 20:19:04 -03:00
oobabooga 66e8d12354 Update README.md 2023-03-17 19:59:37 -03:00
oobabooga 9a871117d7 Update README.md 2023-03-17 19:52:22 -03:00
oobabooga d4f38b6a1f Update README.md 2023-03-17 18:57:48 -03:00
oobabooga ad7c829953 Update README.md 2023-03-17 18:55:01 -03:00
oobabooga 4426f941e0 Update the installation instructions. TL;DR: use WSL 2023-03-17 18:51:07 -03:00
oobabooga 9256e937d6 Add some LoRA params 2023-03-17 17:45:28 -03:00
oobabooga 9ed2c4501c Use markdown in the "HTML" tab 2023-03-17 16:06:11 -03:00
oobabooga f0b26451b4 Add a comment 2023-03-17 13:07:17 -03:00
oobabooga 7da742e149 Merge pull request #207 from EliasVincent/stt-extension (Extension: Whisper Speech-To-Text Input) 2023-03-17 12:37:23 -03:00
oobabooga ebef4a510b Update README 2023-03-17 11:58:45 -03:00
oobabooga cdfa787bcb Update README 2023-03-17 11:53:28 -03:00
oobabooga 3bda907727 Merge pull request #366 from oobabooga/lora (Add LoRA support) 2023-03-17 11:48:48 -03:00
oobabooga 614dad0075 Remove unused import 2023-03-17 11:43:11 -03:00
oobabooga a717fd709d Sort the imports 2023-03-17 11:42:25 -03:00
oobabooga 7d97287e69 Update settings-template.json 2023-03-17 11:41:12 -03:00
oobabooga 29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga 214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
oobabooga 4c130679c7 Merge pull request #377 from askmyteapot/Fix-Multi-gpu-GPTQ-Llama-no-tokens (Update GPTQ_Loader.py) 2023-03-17 09:47:57 -03:00
askmyteapot 53b6a66beb Update GPTQ_Loader.py (correcting decoder layer for renamed class) 2023-03-17 18:34:13 +10:00
oobabooga 0cecfc684c Add files 2023-03-16 21:35:53 -03:00
oobabooga 104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga ee164d1821 Don't split the layers in 8-bit mode by default 2023-03-16 18:22:16 -03:00
oobabooga 0a2aa79c4e Merge pull request #358 from mayaeary/8bit-offload (Add support for memory maps with --load-in-8bit) 2023-03-16 17:27:03 -03:00
oobabooga e085cb4333 Small changes 2023-03-16 13:34:23 -03:00
oobabooga dd1c5963da Update README 2023-03-16 12:45:27 -03:00
oobabooga 38d7017657 Add all command-line flags to "Interface mode" 2023-03-16 12:44:03 -03:00
awoo 83cb20aad8 Add support for --gpu-memory with --load-in-8bit 2023-03-16 18:42:53 +03:00
oobabooga 23a5e886e1 The LLaMA PR has been merged into transformers 2023-03-16 11:18:32 -03:00
    https://github.com/huggingface/transformers/pull/21955
    The tokenizer class has been renamed from "LLaMATokenizer" to "LlamaTokenizer".
    You need to apply this change to every tokenizer_config.json that you have for LLaMA so far.
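The rename described in commit 23a5e886e1 can be applied mechanically. The sketch below is illustrative only, not code from the repository: `fix_tokenizer_class` is a hypothetical helper, and it assumes the file is a tokenizer_config.json whose `tokenizer_class` key carries the pre-merge name.

```python
import json
from pathlib import Path

def fix_tokenizer_class(config_path: Path) -> bool:
    """Rename the pre-merge "LLaMATokenizer" class to the merged
    "LlamaTokenizer" name in a tokenizer_config.json file.
    Returns True if the file was modified."""
    config = json.loads(config_path.read_text())
    if config.get("tokenizer_class") == "LLaMATokenizer":
        config["tokenizer_class"] = "LlamaTokenizer"
        config_path.write_text(json.dumps(config, indent=2))
        return True
    return False
```

Running this once over each local LLaMA model directory would update the configs in place; files that already use the new name are left untouched.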
oobabooga d54f3f4a34 Add no-stream checkbox to the interface 2023-03-16 10:19:00 -03:00
oobabooga 1c378965e1 Remove unused imports 2023-03-16 10:18:34 -03:00
oobabooga a577fb1077 Keep GALACTICA special tokens (#300) 2023-03-16 00:46:59 -03:00
oobabooga 25a00eaf98 Add "Experimental" warning 2023-03-15 23:43:35 -03:00
oobabooga 599d3139fd Increase the reload timeout a bit 2023-03-15 23:34:08 -03:00