Commit Graph

174 Commits

oobabooga
c07215cc08 Improve the default Assistant character 2023-05-15 19:39:08 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option (#2060) 2023-05-14 22:58:11 -03:00
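A minimal invocation sketch for this option, assuming the flag is spelled `--n-gpu-layers` and using a placeholder model name:

```
# Offload 20 layers of a llama.cpp model to the GPU
# (flag spelling and model name are assumptions, not taken from the commit)
python server.py --model llama-7b.ggmlv3.q4_0.bin --n-gpu-layers 20
```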
oobabooga
3b886f9c9f
Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga
e283ddc559 Change how spaces are handled in continue/generation attempts 2023-05-12 12:50:29 -03:00
oobabooga
5eaa914e1b Fix settings.json being ignored because of config.yaml 2023-05-12 06:09:45 -03:00
oobabooga
f7dbddfff5 Add a variable for tts extensions to use 2023-05-11 16:12:46 -03:00
oobabooga
bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
minipasila
334486f527
Added instruction-following template for Metharme (#1679) 2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model (#1762)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles (#1917) 2023-05-08 12:35:03 -03:00
oobabooga
00e333d790 Add MOSS support 2023-05-04 23:20:34 -03:00
oobabooga
b6ff138084 Add --checkpoint argument for GPTQ 2023-05-04 15:17:20 -03:00
oobabooga
95d04d6a8d Better warning messages 2023-05-03 21:43:17 -03:00
oobabooga
f54256e348 Rename no_mmap to no-mmap 2023-05-03 09:50:31 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
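A hedged usage sketch for these parameters, using the `--no-mmap` spelling from the rename above; the `--mlock` spelling and the model name are assumptions:

```
# Disable memory-mapping and lock the model in RAM
# (--mlock spelling and model name are assumptions)
python server.py --model llama-7b.ggml.bin --no-mmap --mlock
```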
oobabooga
a777c058af
Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga
f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00
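Presumably multiple LoRAs are passed as a space-separated list; a sketch with hypothetical adapter names:

```
# Load two LoRAs on top of one base model (adapter names are hypothetical)
python server.py --model llama-7b-hf --lora alpaca-lora gpt4all-lora
```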
oobabooga
b6af2e56a2 Add --character flag, add character to settings.json 2023-04-24 13:19:42 -03:00
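A sketch of the new flag, assuming chat mode is selected with `--chat` and using a placeholder character name:

```
# Start directly in chat mode with a saved character (character name is a placeholder)
python server.py --chat --character Example
```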
eiery
78d1977ebf
add n_batch support for llama.cpp (#1115) 2023-04-24 03:46:18 -03:00
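A minimal sketch, assuming the batch size is exposed on the command line as `--n_batch`:

```
# Set the llama.cpp prompt-processing batch size (flag spelling is an assumption)
python server.py --model llama-7b.ggml.bin --n_batch 512
```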
oobabooga
b1ee674d75 Make interface state (mostly) persistent on page reload 2023-04-24 03:05:47 -03:00
Wojtab
12212cf6be
LLaVA support (#1487) 2023-04-23 20:32:22 -03:00
Andy Salerno
654933c634
New universal API with streaming/blocking endpoints (#990)
Previous title: Add api_streaming extension and update api-example-stream to use it

* Merge with latest main

* Add parameter capturing encoder_repetition_penalty

* Change some defaults, minor fixes

* Add --api, --public-api flags

* Remove unneeded/broken comment from the blocking API startup; the startup message is already correctly emitted in try_start_cloudflared by calling the lambda we pass in.

* Update the on_start message for blocking_api: it should say 'non-streaming', not 'streaming'

* Update the API examples

* Change a comment

* Update README

* Remove the gradio API

* Remove unused import

* Minor change

* Remove unused import

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-23 15:52:43 -03:00
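For orientation, a hedged sketch of exercising the blocking endpoint; the port and route are assumptions based on the API examples of that era, not confirmed by this commit:

```
# Start the server with the API enabled, then query the blocking endpoint
# (port 5000 and /api/v1/generate are assumptions)
python server.py --api
curl -H "Content-Type: application/json" \
     -d '{"prompt": "Hello,", "max_new_tokens": 20}' \
     http://localhost:5000/api/v1/generate
```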
oobabooga
fcb594b90e Don't require llama.cpp models to be placed in subfolders 2023-04-22 14:56:48 -03:00
oobabooga
7438f4f6ba Change GPTQ triton default settings 2023-04-22 12:27:30 -03:00
oobabooga
eddd016449 Minor deletion 2023-04-21 12:41:27 -03:00
oobabooga
d46b9b7c50 Fix evaluate comment saving 2023-04-21 12:34:08 -03:00
oobabooga
702fe92d42 Increase truncation_length_max value 2023-04-19 17:35:38 -03:00
oobabooga
ac2973ffc6 Add a warning for --share 2023-04-17 19:34:28 -03:00
oobabooga
89bc540557 Update README 2023-04-17 10:55:35 -03:00
sgsdxzy
b57ffc2ec9
Update to support GPTQ triton commit c90adef (#1229) 2023-04-17 01:11:18 -03:00
oobabooga
39099663a0
Add 4-bit LoRA support (#1200) 2023-04-16 23:26:52 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support (#1256)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
b937c9d8c2
Add skip_special_tokens checkbox for Dolly model (#1218) 2023-04-16 14:24:49 -03:00
Mikel Bober-Irizar
16a3a5b039
Merge pull request from GHSA-hv5m-3rp9-xcpf
* Remove eval of API input

* Remove unnecessary eval/exec for security

* Use ast.literal_eval

* Use ast.literal_eval

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 01:36:50 -03:00
oobabooga
3a337cfded Use argparse defaults 2023-04-14 15:35:06 -03:00
Alex "mcmonkey" Goodwin
64e3b44e0f
initial multi-lora support (#1103)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-14 14:52:06 -03:00
oobabooga
8e31f2bad4
Automatically set wbits/groupsize/instruct based on model name (#1167) 2023-04-14 11:07:28 -03:00
v0xie
9d66957207
Add --listen-host launch option (#1122) 2023-04-13 21:35:08 -03:00
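A sketch combining the new flag with the existing `--listen` option; the bind address is illustrative:

```
# Bind the web UI to an explicit host address
python server.py --listen --listen-host 0.0.0.0
```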
Light
cf58058c33 Change warmup_autotune to a negative switch. 2023-04-13 20:59:49 +08:00
Light
15d5a043f2 Merge remote-tracking branch 'origin/main' into triton 2023-04-13 19:38:51 +08:00
oobabooga
7dfbe54f42 Add --model-menu option 2023-04-12 21:24:26 -03:00
oobabooga
388038fb8e Update settings-template.json 2023-04-12 18:30:43 -03:00
oobabooga
1566d8e344 Add model settings to the Models tab 2023-04-12 17:20:18 -03:00
Light
f3591ccfa1 Keep minimal change. 2023-04-12 23:26:06 +08:00
oobabooga
cacbcda208
Two new options: truncation length and ban eos token 2023-04-11 18:46:06 -03:00
catalpaaa
78bbc66fc4
allow custom stopping strings in all modes (#903) 2023-04-11 12:30:06 -03:00
IggoOnCode
09d8119e3c
Add CPU LoRA training (#938)
(It's very slow)
2023-04-10 17:29:00 -03:00
oobabooga
bd04ff27ad Make the bos token optional 2023-04-10 16:44:22 -03:00
oobabooga
0f1627eff1 Don't treat Instruct mode histories as regular histories
* They must now be saved/loaded manually
* Also improved browser caching of pfps
* Also changed the global default preset
2023-04-10 15:48:07 -03:00
MarkovInequality
992663fa20
Added xformers support to Llama (#950) 2023-04-09 23:08:40 -03:00
oobabooga
ea6e77df72
Make the code more like PEP8 for readability (#862) 2023-04-07 00:15:45 -03:00
SDS
378d21e80c
Add LLaMA-Precise preset (#767) 2023-04-05 18:52:36 -03:00
oobabooga
e722c240af Add Instruct mode 2023-04-05 13:54:50 -03:00
oobabooga
65d8a24a6d Show profile pictures in the Character tab 2023-04-04 22:28:49 -03:00
oobabooga
b24147c7ca Document --pre_layer 2023-04-03 17:34:25 -03:00
oobabooga
4c9ed09270 Update settings template 2023-04-03 14:59:26 -03:00
OWKenobi
dcf61a8897
"character greeting" displayed and editable on the fly (#743)
* Add greetings field

* add greeting field and make it interactive

* Minor changes

* Fix a bug

* Simplify clear_chat_log

* Change a label

* Minor change

* Simplifications

* Simplification

* Simplify loading the default character history

* Fix regression

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-03 12:16:15 -03:00
oobabooga
b0890a7925 Add shared.is_chat() function 2023-04-01 20:15:00 -03:00
oobabooga
b857f4655b
Update shared.py 2023-04-01 13:56:47 -03:00
oobabooga
2c52310642 Add --threads flag for llama.cpp 2023-03-31 21:18:05 -03:00
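A minimal sketch; the thread count and model name are illustrative:

```
# Pin llama.cpp inference to 8 CPU threads
python server.py --model llama-7b.ggml.bin --threads 8
```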
oobabooga
1d1d9e40cd Add seed to settings 2023-03-31 12:22:07 -03:00
oobabooga
d4a9b5ea97 Remove redundant preset (see the plot in #587) 2023-03-30 17:34:44 -03:00
oobabooga
55755e27b9 Don't hardcode prompts in the settings dict/json 2023-03-29 22:47:01 -03:00
oobabooga
1cb9246160 Adapt to the new model names 2023-03-29 21:47:36 -03:00
oobabooga
010b259dde Update documentation 2023-03-28 17:46:00 -03:00
oobabooga
036163a751 Change description 2023-03-27 23:39:26 -03:00
oobabooga
005f552ea3 Some simplifications 2023-03-27 23:29:52 -03:00
oobabooga
fde92048af Merge branch 'main' into catalpaaa-lora-and-model-dir 2023-03-27 23:16:44 -03:00
oobabooga
49c10c5570
Add support for the latest GPTQ models with group-size (#530)
**Warning: old 4-bit weights will not work anymore!**

See here how to get up to date weights: https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#step-2-get-the-pre-converted-weights
2023-03-26 00:11:33 -03:00
catalpaaa
b37c54edcf lora-dir, model-dir and login auth
Added lora-dir, model-dir, and a login auth argument that points to a file containing usernames and passwords in the format "u:pw,u:pw,..."
2023-03-24 17:30:18 -07:00
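A hedged sketch of the credentials file described above. Only the `u:pw,u:pw,...` file format is stated in the commit; the flag name `--gradio-auth-path` and the file name are assumptions:

```
# Create a credentials file in the stated u:pw,u:pw,... format
echo "alice:secret1,bob:secret2" > login.txt
# Flag name is an assumption, not taken from the commit
python server.py --gradio-auth-path login.txt
```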
oobabooga
c5ebcc5f7e
Change the default names (#518)
* Update shared.py

* Update settings-template.json
2023-03-23 13:36:00 -03:00
oobabooga
7618f3fe8c
Add -gptq-preload for 4-bit offloading (#460)
This now works on a 4GB card:

```
python server.py --model llama-7b-hf --gptq-bits 4 --gptq-pre-layer 20
```
2023-03-20 16:30:56 -03:00
oobabooga
ddb62470e9 --no-cache and --gpu-memory in MiB for fine VRAM control 2023-03-19 19:21:41 -03:00
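A sketch of the finer-grained VRAM control this enables, assuming the limit is written with a `MiB` suffix; values are illustrative:

```
# Cap GPU memory at 3500 MiB and disable the cache (suffix syntax is an assumption)
python server.py --model llama-7b-hf --gpu-memory 3500MiB --no-cache
```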
oobabooga
a78b6508fc Make custom LoRAs work by default #385 2023-03-19 12:11:35 -03:00
oobabooga
c753261338 Disable stop_at_newline by default 2023-03-18 10:55:57 -03:00
oobabooga
29fe7b1c74 Remove LoRA tab, move it into the Parameters menu 2023-03-17 11:39:48 -03:00
oobabooga
214dc6868e Several QoL changes related to LoRA 2023-03-17 11:24:52 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
4d64a57092 Add Interface mode tab 2023-03-15 23:29:56 -03:00
oobabooga
265ba384b7 Rename a file, add deprecation warning for --load-in-4bit 2023-03-14 07:56:31 -03:00
oobabooga
3da73e409f Merge branch 'main' into Zerogoki00-opt4-bit 2023-03-14 07:50:36 -03:00
oobabooga
3fb8196e16 Implement "*Is recording a voice message...*" for TTS #303 2023-03-13 22:28:00 -03:00
Ayanami Rei
b6c5c57f2e remove default value from argument 2023-03-13 22:11:08 +03:00
Ayanami Rei
1b99ed61bc add argument --gptq-model-type and remove duplicate arguments 2023-03-13 20:01:34 +03:00
oobabooga
66b6971b61 Update README 2023-03-13 12:44:18 -03:00
oobabooga
ddea518e0f Document --auto-launch 2023-03-13 12:43:33 -03:00
oobabooga
1ddcd4d0ba Clean up silero_tts
This should only be used with --no-stream.

The shared.still_streaming implementation was faulty by design:
output_modifier should never be called when streaming is already over.
2023-03-12 23:42:49 -03:00
oobabooga
d8bea766d7
Merge pull request #192 from xanthousm/main
Add text generation stream status to shared module, use for better TTS with auto-play
2023-03-12 13:40:16 -03:00
oobabooga
f3b00dd165
Merge pull request #224 from ItsLogic/llama-bits
Allow users to load 2, 3 and 4 bit llama models
2023-03-12 11:23:50 -03:00
oobabooga
65dda28c9d Rename --llama-bits to --gptq-bits 2023-03-12 11:19:07 -03:00
oobabooga
0ac562bdba Add a default prompt for OpenAssistant oasst-sft-1-pythia-12b #253 2023-03-12 10:46:16 -03:00
Xan
b3e10e47c0 Fix merge conflict in text_generation
- `shared.still_streaming = False` needs to be set before the final `yield formatted_outputs`, so the position of some yields was shifted.
2023-03-12 18:56:35 +11:00
Xan
96c51973f9 --auto-launch and "Is typing..."
- Added `--auto-launch` arg to open web UI in the default browser when ready.
- Changed chat.py to display user input immediately and "*Is typing...*" as a temporary reply while generating text. Most noticeable when using `--no-stream`.
2023-03-11 22:50:59 +11:00
Xan
33df4bd91f Merge remote-tracking branch 'upstream/main' 2023-03-11 22:40:47 +11:00
draff
28fd4fc970 Change wording to be consistent with other args 2023-03-10 23:34:13 +00:00
draff
804486214b Re-implement --load-in-4bit and update --llama-bits arg description 2023-03-10 23:21:01 +00:00
draff
e6c631aea4 Replace --load-in-4bit with --llama-bits
Replaces --load-in-4bit with a more flexible --llama-bits arg to allow for 2- and 3-bit models as well. This commit also fixes a loading issue with .pt files that are not in the root of the models folder.
2023-03-10 21:36:45 +00:00
oobabooga
026d60bd34 Remove default preset that didn't do anything 2023-03-10 14:01:02 -03:00