Commit Graph

1058 Commits

Author SHA1 Message Date
oobabooga
3929971b66 Don't show oobabooga_llama-tokenizer in the model dropdown 2023-08-10 10:02:48 -07:00
oobabooga
c7f52bbdc1 Revert "Remove GPTQ-for-LLaMa monkey patch support"
This reverts commit e3d3565b2a.
2023-08-10 08:39:41 -07:00
jllllll
d6765bebc4
Update installation documentation 2023-08-10 00:53:48 -05:00
jllllll
d7ee4c2386
Remove unused import 2023-08-10 00:10:14 -05:00
jllllll
e3d3565b2a
Remove GPTQ-for-LLaMa monkey patch support
AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
2023-08-09 23:59:04 -05:00
jllllll
bee73cedbd
Streamline GPTQ-for-LLaMa support 2023-08-09 23:42:34 -05:00
oobabooga
6c6a52aaad Change the filenames for caches and histories 2023-08-09 07:47:19 -07:00
oobabooga
d8fb506aff Add RoPE scaling support for transformers (including dynamic NTK)
https://github.com/huggingface/transformers/pull/24653
2023-08-08 21:25:48 -07:00
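The Hugging Face PR linked above exposes RoPE scaling through a rope_scaling config dict. A minimal sketch of that API (the model name and factor are illustrative):

```python
# Sketch of the rope_scaling option added in huggingface/transformers#24653.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                       # illustrative model
    rope_scaling={"type": "dynamic", "factor": 2.0},  # "linear" also supported
)
```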
Friedemann Lipphardt
901b028d55
Add option for named cloudflare tunnels (#3364) 2023-08-08 22:20:27 -03:00
oobabooga
bf08b16b32 Fix disappearing profile picture bug 2023-08-08 14:09:01 -07:00
Gennadij
0e78f3b4d4
Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa (#3494) 2023-08-08 00:31:11 -03:00
oobabooga
37fb719452
Increase the Context/Greeting box sizes 2023-08-08 00:09:00 -03:00
oobabooga
584dd33424
Fix missing example_dialogue when uploading characters 2023-08-07 23:44:59 -03:00
oobabooga
412f6ff9d3 Change alpha_value maximum and step 2023-08-07 06:08:51 -07:00
oobabooga
a373c96d59 Fix a bug in modules/shared.py 2023-08-06 20:36:35 -07:00
oobabooga
3d48933f27 Remove ancient deprecation warnings 2023-08-06 18:58:59 -07:00
oobabooga
c237ce607e Move characters/instruction-following to instruction-templates 2023-08-06 17:50:32 -07:00
oobabooga
65aa11890f
Refactor everything (#3481) 2023-08-06 21:49:27 -03:00
oobabooga
d4b851bdc8 Credit turboderp 2023-08-06 13:43:15 -07:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) 2023-08-06 17:22:48 -03:00
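For reference, CFG for text generation blends a conditional pass with an unconditional (negative-prompt) pass at the logit level. A minimal sketch of the usual formula, not the exact code from the PR:

```python
import numpy as np

def cfg_logits(cond: np.ndarray, uncond: np.ndarray, scale: float) -> np.ndarray:
    # scale = 1.0 reduces to the ordinary conditional logits
    return uncond + scale * (cond - uncond)
```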
missionfloyd
5134878344
Fix chat message order (#3461) 2023-08-05 13:53:54 -03:00
jllllll
44f31731af
Create logs dir if missing when saving history (#3462) 2023-08-05 13:47:16 -03:00
Forkoz
9dcb37e8d4
Fix: Mirostat fails on models split across multiple GPUs 2023-08-05 13:45:47 -03:00
oobabooga
8df3cdfd51
Add SSL certificate support (#3453) 2023-08-04 13:57:31 -03:00
missionfloyd
2336b75d92
Remove unnecessary chat.js (#3445) 2023-08-04 01:58:37 -03:00
oobabooga
4b3384e353 Handle unfinished lists during markdown streaming 2023-08-03 17:15:18 -07:00
Pete
f4005164f4
Fix llama.cpp truncation (#3400)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-03 20:01:15 -03:00
oobabooga
87dab03dc0
Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) 2023-08-03 11:00:36 -03:00
oobabooga
3e70bce576 Properly format exceptions in the UI 2023-08-03 06:57:21 -07:00
oobabooga
32c564509e Fix loading session in chat mode 2023-08-02 21:13:16 -07:00
oobabooga
0e8f9354b5 Add direct download for session/chat history JSONs 2023-08-02 19:43:39 -07:00
oobabooga
32a2bbee4a Implement auto_max_new_tokens for ExLlama 2023-08-02 11:03:56 -07:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter (#3419) 2023-08-02 14:52:20 -03:00
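The idea behind auto_max_new_tokens, as a hedged sketch (function and parameter names are illustrative, not the project's actual code): when enabled, the reply may fill whatever context remains after the prompt.

```python
def effective_max_new_tokens(prompt_tokens: int, truncation_length: int,
                             max_new_tokens: int, auto: bool) -> int:
    # With auto on, generate into all of the context left after the prompt.
    return max(1, truncation_length - prompt_tokens) if auto else max_new_tokens
```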
Pete
6afc1a193b
Add a scrollbar to notebook/default, improve chat scrollbar style (#3403)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-02 12:02:36 -03:00
oobabooga
b53ed70a70 Make llamacpp_HF 6x faster 2023-08-01 13:18:20 -07:00
oobabooga
8d46a8c50a Change the default chat style and the default preset 2023-08-01 09:35:17 -07:00
oobabooga
959feba602 When saving model settings, only save the settings for the current loader 2023-08-01 06:10:09 -07:00
oobabooga
f094330df0 When saving a preset, only save params that differ from the defaults 2023-07-31 19:13:29 -07:00
oobabooga
84297d05c4 Add a "Filter by loader" menu to the Parameters tab 2023-07-31 19:09:02 -07:00
oobabooga
7de7b3d495 Fix newlines in exported character yamls 2023-07-31 10:46:02 -07:00
oobabooga
5ca37765d3 Only replace {{user}} and {{char}} at generation time 2023-07-30 11:42:30 -07:00
oobabooga
6e16af34fd Save uploaded characters as yaml
Also allow yaml characters to be uploaded directly
2023-07-30 11:25:38 -07:00
oobabooga
b31321c779 Define visible_text before applying chat_input extensions 2023-07-26 07:27:14 -07:00
oobabooga
b17893a58f Revert "Add tensor split support for llama.cpp (#3171)"
This reverts commit 031fe7225e.
2023-07-26 07:06:01 -07:00
oobabooga
28779cd959 Use dark theme by default 2023-07-25 20:11:57 -07:00
oobabooga
c2e0d46616 Add credits 2023-07-25 15:49:04 -07:00
oobabooga
77d2e9f060 Remove flexgen 2 2023-07-25 15:18:25 -07:00
oobabooga
75c2dd38cf Remove flexgen support 2023-07-25 15:15:29 -07:00
Foxtr0t1337
85b3a26e25
Ignore values which are not string in training.py (#3287) 2023-07-25 19:00:25 -03:00
Shouyi
031fe7225e
Add tensor split support for llama.cpp (#3171) 2023-07-25 18:59:26 -03:00
Eve
f653546484
README updates and improvements (#3198) 2023-07-25 18:58:13 -03:00
oobabooga
ef8637e32d
Add extension example, replace input_hijack with chat_input_modifier (#3307) 2023-07-25 18:49:56 -03:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support (#3285) 2023-07-24 16:37:03 -03:00
jllllll
1141987a0d
Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading (#3225) 2023-07-24 11:25:36 -03:00
Ikko Eltociear Ashimine
b2d5433409
Fix typo in deepspeed_parameters.py (#3222)
configration -> configuration
2023-07-24 11:17:28 -03:00
oobabooga
4b19b74e6c Add CUDA wheels for llama-cpp-python by jllllll 2023-07-19 19:33:43 -07:00
oobabooga
913e060348 Change the default preset to Divine Intellect
It seems to reduce hallucination while using instruction-tuned models.
2023-07-19 08:24:37 -07:00
randoentity
a69955377a
[GGML] Support for customizable RoPE (#3083)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-17 22:32:37 -03:00
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) 2023-07-17 21:27:18 -03:00
oobabooga
8c1c2e0fae Increase max_new_tokens upper limit 2023-07-17 17:08:22 -07:00
oobabooga
b1a6ea68dd Disable "autoload the model" by default 2023-07-17 07:40:56 -07:00
oobabooga
a199f21799 Optimize llamacpp_hf a bit 2023-07-16 20:49:48 -07:00
oobabooga
6a3edb0542 Clean up llamacpp_hf.py 2023-07-15 22:40:55 -07:00
oobabooga
27a84b4e04 Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader (#3062) 2023-07-16 02:21:13 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity (#3138) 2023-07-16 01:52:55 -03:00
oobabooga
b284f2407d Make ExLlama_HF the new default for GPTQ 2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions (#3029) 2023-07-13 17:22:41 -03:00
oobabooga
e202190c4f lint 2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training (#3058) 2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5 Add back .replace('\r', '') 2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix: Fixed the tokenization process of a raw dataset and improved its efficiency (#3035) 2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training (#2624) 2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training (#1997)
* Update utils.py

returns individual txt files and subdirectories to getdatasets, allowing training from a directory of text files

* Update training.py

minor tweak to raw-dataset training: detect whether a directory is selected and, if so, load all the txt files in that directory for training

* Update put-trainer-datasets-here.txt

document

* Minor change

* Use pathlib, sort by natural keys

* Space

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
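A minimal sketch of what the commit above describes, using pathlib and a natural-key sort (names are illustrative, not the project's actual code):

```python
import re
from pathlib import Path

def natural_key(path: Path):
    # "file10.txt" sorts after "file2.txt"
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", path.name)]

def load_txt_dir(folder: str) -> str:
    files = sorted(Path(folder).glob("*.txt"), key=natural_key)
    return "\n\n".join(f.read_text(encoding="utf-8") for f in files)
```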
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection (#1711) 2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334 Disable wandb during training 2023-07-12 07:19:12 -07:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp (#3076) 2023-07-12 11:05:13 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names (#3051) 2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown (#2977) 2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40 Style changes 2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent (#3017)
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ (#2818) 2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b Make --model work with argument like models/folder_name 2023-07-08 10:22:54 -07:00
oobabooga
b6643e5039 Add decode functions to llama.cpp/exllama 2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551 Add truncation to exllama 2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37 Minor change to ui.py 2023-07-07 09:09:14 -07:00
oobabooga
de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3 Block a Cloudflare request 2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) 2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py (#3020)
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830 Remove universal llama tokenizer support
Instead replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003 2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0 Fix start_with 2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb Fix loading some histories (thanks kaiokendev) 2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) 2023-07-04 01:13:16 -03:00
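Static NTK scaling raises the RoPE frequency base rather than compressing positions. A sketch of the commonly cited formula (head dimension 128 for LLaMA models is an assumption, not taken from the PR):

```python
def ntk_rope_base(alpha: float, dim: int = 128, base: float = 10000.0) -> float:
    # alpha > 1 stretches the rotary period so longer contexts stay coherent
    return base * alpha ** (dim / (dim - 2))
```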
oobabooga
7e8340b14d Make greetings appear in --multi-user mode 2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support (#2991) 2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names (#2988) 2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error (#2953) 2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload (#2951) 2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info (#2944) 2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694
Update html_generator.py (#2954)
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
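A version-agnostic sketch of the Pillow fix (filename and size are illustrative):

```python
from PIL import Image

# Image.LANCZOS exists in both old and new Pillow; Image.ANTIALIAS was an
# alias for it until Pillow 10.0.0 removed the alias.
img = Image.open("profile.png")
img.thumbnail((100, 100), Image.LANCZOS)
```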
ardfork
3c076c3c80
Disable half2 for ExLlama when using HIP (#2912) 2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785
Some more character import tweaks. (#2921) 2023-06-29 14:56:25 -03:00
oobabooga
79db629665 Minor bug fix 2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers (#2916) 2023-06-29 13:40:13 -03:00
oobabooga
20740ab16e Revert "Fix exllama_hf gibberish above 2048 context, and make >5000 context work. (#2913)"
This reverts commit 37a16d23a7.
2023-06-28 18:10:34 -03:00
Panchovix
37a16d23a7
Fix exllama_hf gibberish above 2048 context, and make >5000 context work. (#2913) 2023-06-28 12:36:07 -03:00
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it (#2902) 2023-06-27 18:24:04 -03:00
oobabooga
22d455b072 Add LoRA support to ExLlama_HF 2023-06-26 00:10:33 -03:00
oobabooga
c52290de50
ExLlama with long context (#2875) 2023-06-25 22:49:26 -03:00
oobabooga
9290c6236f Keep ExLlama_HF if already selected 2023-06-25 19:06:28 -03:00
oobabooga
75fd763f99 Fix chat saving issue (closes #2863) 2023-06-25 18:14:57 -03:00
FartyPants
21c189112c
Several Training Enhancements (#2868) 2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py 2023-06-25 12:13:15 -03:00
oobabooga
f31281a8de Fix loading instruction templates containing literal '\n' 2023-06-25 02:13:26 -03:00
oobabooga
f0fcd1f697 Sort some imports 2023-06-25 01:44:36 -03:00
oobabooga
365b672531 Minor change to prevent future bugs 2023-06-25 01:38:54 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama (#2770) 2023-06-24 20:24:17 -03:00
oobabooga
cec5fb0ef6 Failed attempt at evaluating exllama_hf perplexity 2023-06-24 12:02:25 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation (#2848) 2023-06-24 11:19:16 -03:00
oobabooga
ec482f3dae Apply input extensions after yielding *Is typing...* 2023-06-24 11:07:11 -03:00
oobabooga
3e80f2aceb Apply the output extensions only once
Relevant for google translate, silero
2023-06-24 10:59:07 -03:00
missionfloyd
51a388fa34
Organize chat history/character import menu (#2845)
* Organize character import menu

* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space (#2847) 2023-06-24 09:43:00 -03:00
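Searching in string space means decoding first and truncating the text, rather than matching token ids. A hedged sketch of the idea (not the project's exact implementation):

```python
def apply_stopping_strings(reply: str, stops: list[str]) -> tuple[str, bool]:
    stopped = False
    for stop in stops:
        idx = reply.find(stop)
        if idx != -1:
            reply = reply[:idx]  # cut at the earliest stop string found
            stopped = True
    return reply, stopped
```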
oobabooga
3ae9af01aa Add --no_use_cuda_fp16 param for AutoGPTQ 2023-06-23 12:22:56 -03:00
Panchovix
5646690769
Fix some models not loading on exllama_hf (#2835) 2023-06-23 11:31:02 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena (#2830) 2023-06-23 01:48:29 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf (#2803) 2023-06-22 16:05:25 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. (#2777) 2023-06-21 15:31:42 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod (#2781) 2023-06-20 16:18:42 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support (#2684)
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json (#2769) 2023-06-20 00:47:36 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor (#2765) 2023-06-19 21:31:19 -03:00
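llama-cpp-python accepts logits processors as plain callables of (input_ids, scores). A sketch of banning EOS that way (the token id lookup is left to the caller):

```python
import numpy as np

def make_ban_eos(eos_token_id: int):
    def processor(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
        scores[eos_token_id] = -np.inf  # EOS can never be sampled
        return scores
    return processor
```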
oobabooga
eb30f4441f
Add ExLlama+LoRA support (#2756) 2023-06-19 12:31:24 -03:00
oobabooga
5f418f6171 Fix a memory leak (credits for the fix: Ph0rk0z) 2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer (#2753) 2023-06-18 18:14:06 -03:00
oobabooga
09c781b16f Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter (#2744) 2023-06-18 13:26:30 -03:00
oobabooga
c5641b65d3 Handle leading spaces properly in ExLlama 2023-06-17 19:35:12 -03:00
oobabooga
05a743d6ad Make llama.cpp use tfs parameter 2023-06-17 19:08:25 -03:00
oobabooga
e19cbea719 Add a variable to modules/shared.py 2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff Fix repeated tokens with exllama 2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7 Use gen_begin_reuse in exllama 2023-06-17 18:00:10 -03:00
oobabooga
b27f83c0e9 Make exllama stoppable 2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3 Fix streaming callback 2023-06-16 21:44:56 -03:00
oobabooga
5f392122fd Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
9f40032d32
Add ExLlama support (#2444) 2023-06-16 20:35:38 -03:00
oobabooga
dea43685b0 Add some clarifications 2023-06-16 19:10:53 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely (#2720) 2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP (#2648) 2023-06-15 23:59:54 -03:00
oobabooga
2b9a6b9259 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-06-14 18:45:24 -03:00
oobabooga
4d508cbe58 Add some checks to AutoGPTQ loader 2023-06-14 18:44:43 -03:00
FartyPants
56c19e623c
Add LORA name instead of "default" in PeftModel (#2689) 2023-06-14 18:29:42 -03:00
oobabooga
474dc7355a Allow API requests to use parameter presets 2023-06-14 11:32:20 -03:00
oobabooga
e471919e6d Make llava/minigpt-4 work with AutoGPTQ 2023-06-11 17:56:01 -03:00
oobabooga
f4defde752 Add a menu for installing extensions 2023-06-11 17:11:06 -03:00
oobabooga
ac122832f7 Make dropdown menus more similar to automatic1111 2023-06-11 14:20:16 -03:00
oobabooga
6133675e0f
Add menus for saving presets/characters/instruction templates/prompts (#2621) 2023-06-11 12:19:18 -03:00
brandonj60
b04e18d10c
Add Mirostat v2 sampling to transformer models (#2571) 2023-06-09 21:26:31 -03:00
oobabooga
6015616338 Style changes 2023-06-06 13:06:05 -03:00
oobabooga
f040073ef1 Handle the case of older autogptq install 2023-06-06 13:05:05 -03:00
oobabooga
bc58dc40bd Fix a minor bug 2023-06-06 12:57:13 -03:00
oobabooga
00b94847da Remove softprompt support 2023-06-06 07:42:23 -03:00
oobabooga
0aebc838a0 Don't save the history for 'None' character 2023-06-06 07:21:07 -03:00
oobabooga
9f215523e2 Remove some unused imports 2023-06-06 07:05:46 -03:00
oobabooga
0f0108ce34 Never load the history for default character 2023-06-06 07:00:11 -03:00
oobabooga
11f38b5c2b Add AutoGPTQ LoRA support 2023-06-05 23:32:57 -03:00
oobabooga
3a5cfe96f0 Increase chat_prompt_size_max 2023-06-05 17:37:37 -03:00
oobabooga
f276d88546 Use AutoGPTQ by default for GPTQ models 2023-06-05 15:41:48 -03:00
oobabooga
9b0e95abeb Fix "regenerate" when "Start reply with" is set 2023-06-05 11:56:03 -03:00
oobabooga
19f78684e6 Add "Start reply with" feature to chat mode 2023-06-02 13:58:08 -03:00
GralchemOz
f7b07c4705
Fix the missing Chinese character bug (#2497) 2023-06-02 13:45:41 -03:00
oobabooga
2f6631195a Add desc_act checkbox to the UI 2023-06-02 01:45:46 -03:00
LaaZa
9c066601f5
Extend AutoGPTQ support for any GPTQ model (#1668) 2023-06-02 01:33:55 -03:00
oobabooga
a83f9aa65b
Update shared.py 2023-06-01 12:08:39 -03:00
oobabooga
b6c407f51d Don't stream at more than 24 fps
This is a performance optimization
2023-05-31 23:41:42 -03:00
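A minimal sketch of the 24 fps cap: only push a partial reply to the UI when enough time has passed since the last update (names are illustrative):

```python
import time

MIN_INTERVAL = 1 / 24  # seconds between UI updates

def throttle(stream):
    last = 0.0
    text = ""
    for text in stream:
        now = time.monotonic()
        if now - last >= MIN_INTERVAL:
            last = now
            yield text
    yield text  # always emit the final, complete reply
```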
Forkoz
9ab90d8b60
Fix warning for qlora (#2438) 2023-05-30 11:09:18 -03:00
oobabooga
3578dd3611
Change a warning message 2023-05-29 22:40:54 -03:00
oobabooga
3a6e194bc7
Change a warning message 2023-05-29 22:39:23 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling (#2357) 2023-05-29 21:40:01 -03:00
oobabooga
1394f44e14 Add triton checkbox for AutoGPTQ 2023-05-29 15:32:45 -03:00
oobabooga
f34d20922c Minor fix 2023-05-29 13:31:17 -03:00
oobabooga
983eef1e29 Attempt at evaluating falcon perplexity (failed) 2023-05-29 13:28:25 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) (#2367)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
Forkoz
60ae80cf28
Fix hang in tokenizer for AutoGPTQ llama models. (#2399) 2023-05-28 23:10:10 -03:00
oobabooga
2f811b1bdf Change a warning message 2023-05-28 22:48:20 -03:00
oobabooga
9ee1e37121 Fix return message when no model is loaded 2023-05-28 22:46:32 -03:00
oobabooga
00ebea0b2a Use YAML for presets and settings 2023-05-28 22:34:12 -03:00
oobabooga
acfd876f29 Some qol changes to "Perplexity evaluation" 2023-05-25 15:06:22 -03:00
oobabooga
8efdc01ffb Better default for compute_dtype 2023-05-25 15:05:53 -03:00
oobabooga
37d4ad012b Add a button for rendering markdown for any model 2023-05-25 11:59:27 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings (#2299) 2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter (#2320) 2023-05-25 01:14:13 -03:00
oobabooga
63ce5f9c28 Add back a missing bos token 2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on (#2315) 2023-05-24 12:43:22 -03:00
flurb18
d37a28730d
Beginning of multi-user support (#2262)
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag (#2283) 2023-05-23 20:39:26 -03:00
oobabooga
fb6a00f4e5 Small AutoGPTQ fix 2023-05-23 15:20:01 -03:00
oobabooga
cd3618d7fb Add support for RWKV in Hugging Face format 2023-05-23 02:07:28 -03:00
oobabooga
75adc110d4 Fix "perplexity evaluation" progress messages 2023-05-23 01:54:52 -03:00
oobabooga
4d94a111d4 memoize load_character to speed up the chat API 2023-05-23 00:50:58 -03:00
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json (#2288) 2023-05-22 19:45:11 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp (#2287) 2023-05-22 19:37:24 -03:00
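These map onto llama-cpp-python's sampling arguments. A sketch with an illustrative model path:

```python
from llama_cpp import Llama

llm = Llama(model_path="models/example.bin")  # path is illustrative
out = llm("Once upon a time",
          mirostat_mode=2,    # 0 = off, 1 = Mirostat, 2 = Mirostat v2
          mirostat_tau=5.0,   # target entropy
          mirostat_eta=0.1)   # learning rate
```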
oobabooga
d63ef59a0f Apply LLaMA-Precise preset to Vicuna by default 2023-05-21 23:00:42 -03:00
oobabooga
dcc3e54005 Various "impersonate" fixes 2023-05-21 22:54:28 -03:00
oobabooga
e116d31180 Prevent unwanted log messages from modules 2023-05-21 22:42:34 -03:00
oobabooga
fb91406e93 Fix generation_attempts continuing after an empty reply 2023-05-21 22:14:50 -03:00
oobabooga
e18534fe12 Fix "continue" in chat-instruct mode 2023-05-21 22:05:59 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters (#2258) 2023-05-21 15:11:57 -03:00
oobabooga
1e5821bd9e Fix silero tts autoplay (attempt #2) 2023-05-21 13:25:11 -03:00
oobabooga
a5d5bb9390 Fix silero tts autoplay 2023-05-21 12:11:59 -03:00
oobabooga
05593a7834 Minor bug fix 2023-05-20 23:22:36 -03:00
Matthew McAllister
ab6acddcc5
Add Save/Delete character buttons (#1870)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
oobabooga
c5af549d4b
Add chat API (#2233) 2023-05-20 18:42:17 -03:00
Konstantin Gukov
1b52bddfcc
Mitigate UnboundLocalError (#2136) 2023-05-19 14:46:18 -03:00
Alex "mcmonkey" Goodwin
50c70e28f0
Lora Trainer improvements, part 6 - slightly better raw text inputs (#2108) 2023-05-19 12:58:54 -03:00
oobabooga
9d5025f531 Improve error handling while loading GPTQ models 2023-05-19 11:20:08 -03:00
oobabooga
b667ffa51d Simplify GPTQ_loader.py 2023-05-17 16:22:56 -03:00
oobabooga
ef10ffc6b4 Add various checks to model loading functions 2023-05-17 16:14:54 -03:00
oobabooga
abd361b3a0 Minor change 2023-05-17 11:33:43 -03:00
oobabooga
21ecc3701e Avoid a name conflict 2023-05-17 11:23:13 -03:00
oobabooga
fb91c07191 Minor bug fix 2023-05-17 11:16:37 -03:00
oobabooga
1a8151a2b6
Add AutoGPTQ support (basic) (#2132) 2023-05-17 11:12:12 -03:00
Alex "mcmonkey" Goodwin
1f50dbe352
Experimental jank multiGPU inference that's 2x faster than native somehow (#2100) 2023-05-17 10:41:09 -03:00
oobabooga
ce21804ec7 Allow extensions to define a new tab 2023-05-17 01:31:56 -03:00
oobabooga
a84f499718 Allow extensions to define custom CSS and JS 2023-05-17 00:30:54 -03:00
oobabooga
7584d46c29
Refactor models.py (#2113) 2023-05-16 19:52:22 -03:00
oobabooga
5cd6dd4287 Fix no-mmap bug 2023-05-16 17:35:49 -03:00
Forkoz
d205ec9706
Fix Training fails when evaluation dataset is selected (#2099)
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
atriantafy
26cf8c2545
add api port options (#1990) 2023-05-15 20:44:16 -03:00
Andrei
e657dd342d
Add in-memory cache support for llama.cpp (#1936) 2023-05-15 20:19:55 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087) 2023-05-15 19:51:23 -03:00
oobabooga
c07215cc08 Improve the default Assistant character 2023-05-15 19:39:08 -03:00
oobabooga
4e66f68115 Create get_max_memory_dict() function 2023-05-15 19:38:27 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option (#2060) 2023-05-14 22:58:11 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode (#2049) 2023-05-14 10:43:55 -03:00
oobabooga
df37ba5256 Update impersonate_wrapper 2023-05-12 12:59:48 -03:00
oobabooga
e283ddc559 Change how spaces are handled in continue/generation attempts 2023-05-12 12:50:29 -03:00
oobabooga
2eeb27659d Fix bug in --cpu-memory 2023-05-12 06:17:07 -03:00
oobabooga
5eaa914e1b Fix settings.json being ignored because of config.yaml 2023-05-12 06:09:45 -03:00
oobabooga
71693161eb Better handle spaces in LlamaTokenizer 2023-05-11 17:55:50 -03:00
oobabooga
7221d1389a Fix a bug 2023-05-11 17:11:10 -03:00
oobabooga
0d36c18f5d Always return only the new tokens in generation functions 2023-05-11 17:07:20 -03:00
oobabooga
394bb253db Syntax improvement 2023-05-11 16:27:50 -03:00
oobabooga
f7dbddfff5 Add a variable for tts extensions to use 2023-05-11 16:12:46 -03:00
oobabooga
638c6a65a2
Refactor chat functions (#2003) 2023-05-11 15:37:04 -03:00
oobabooga
b7a589afc8 Improve the Metharme prompt 2023-05-10 16:09:32 -03:00
oobabooga
b01c4884cb Better stopping strings for instruct mode 2023-05-10 14:22:38 -03:00
oobabooga
6a4783afc7 Add markdown table rendering 2023-05-10 13:41:23 -03:00
oobabooga
3316e33d14 Remove unused code 2023-05-10 11:59:59 -03:00
Alexander Dibrov
ec14d9b725
Fix custom_generate_chat_prompt (#1965) 2023-05-10 11:29:59 -03:00
oobabooga
32481ec4d6 Fix prompt order in the dropdown 2023-05-10 02:24:09 -03:00
oobabooga
dfd9ba3e90 Remove duplicate code 2023-05-10 02:07:22 -03:00
oobabooga
bdf1274b5d Remove duplicate code 2023-05-10 01:34:04 -03:00
oobabooga
3913155c1f
Style improvements (#1957) 2023-05-09 22:49:39 -03:00
minipasila
334486f527
Added instruction-following template for Metharme (#1679) 2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following (#1596) 2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) (#1741) 2023-05-09 20:18:02 -03:00
Wesley Pyburn
a2b25322f0
Fix trust_remote_code in wrong location (#1953) 2023-05-09 19:22:10 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model (#1762)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Maks
cf6caf1830
Make the RWKV model cache the RNN state between messages (#1354) 2023-05-09 11:12:53 -03:00
Kamil Szurant
641500dcb9
Use current input for Impersonate (continue impersonate feature) (#1147) 2023-05-09 02:37:42 -03:00
IJumpAround
020fe7b50b
Remove mutable defaults from function signature. (#1663) 2023-05-08 22:55:41 -03:00
Matthew McAllister
d78b04f0b4
Add error message when GPTQ-for-LLaMa import fails (#1871)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-08 22:29:09 -03:00
oobabooga
68dcbc7ebd Fix chat history handling in instruct mode 2023-05-08 16:41:21 -03:00
Clay Shoaf
79ac94cc2f
fixed LoRA loading issue (#1865) 2023-05-08 16:21:55 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles (#1917) 2023-05-08 12:35:03 -03:00
EgrorBs
d3ea70f453
More trust_remote_code=trust_remote_code (#1899) 2023-05-07 23:48:20 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes (#1896) 2023-05-07 23:47:02 -03:00
oobabooga
9754d6a811 Fix an error message 2023-05-07 17:44:05 -03:00
camenduru
ba65a48ec8
trust_remote_code=shared.args.trust_remote_code (#1891) 2023-05-07 17:42:44 -03:00
oobabooga
6b67cb6611 Generalize superbooga to chat mode 2023-05-07 15:05:26 -03:00
oobabooga
56f6b7052a Sort dropdowns numerically 2023-05-05 23:14:56 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions (#1817) 2023-05-05 18:53:03 -03:00
oobabooga
c728f2b5f0 Better handle new line characters in code blocks 2023-05-05 11:22:36 -03:00
oobabooga
00e333d790 Add MOSS support 2023-05-04 23:20:34 -03:00
oobabooga
f673f4a4ca Change --verbose behavior 2023-05-04 15:56:06 -03:00
oobabooga
97a6a50d98 Use oasst tokenizer instead of universal tokenizer 2023-05-04 15:55:39 -03:00
oobabooga
b6ff138084 Add --checkpoint argument for GPTQ 2023-05-04 15:17:20 -03:00
Mylo
bd531c2dc2
Make --trust-remote-code work for all models (#1772) 2023-05-04 02:01:28 -03:00
oobabooga
0e6d17304a Clearer syntax for instruction-following characters 2023-05-03 22:50:39 -03:00
oobabooga
9c77ab4fc2 Improve some warnings 2023-05-03 22:06:46 -03:00
oobabooga
057b1b2978 Add credits 2023-05-03 21:49:55 -03:00
oobabooga
95d04d6a8d Better warning messages 2023-05-03 21:43:17 -03:00
oobabooga
f54256e348 Rename no_mmap to no-mmap 2023-05-03 09:50:31 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token (#1678)
Previously padded with the character "0" rather than token id 0 (<unk> in the case of LLaMA)
2023-05-02 23:16:08 -03:00
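A hedged sketch of the distinction the commit draws (the helper name is illustrative):

```python
def pad_batch(seqs: list[list[int]], pad_id: int = 0) -> list[list[int]]:
    # pad_id=0 is <unk> in the LLaMA vocabulary; the bug was padding with
    # the id of the character "0" rather than the integer id 0
    longest = max(len(s) for s in seqs)
    return [s + [pad_id] * (longest - len(s)) for s in seqs]
```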
Wojtab
80c2f25131
LLaVA: small fixes (#1664)
* change multimodal projector to the correct one

* remove reference to custom stopping strings from readme

* fix stopping strings if tokenizer extension adds/removes tokens

* add API example

* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
oobabooga
4e09df4034 Only show extension in UI if it has an ui() function 2023-05-02 19:20:02 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative (#1649)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input (#1707) 2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing (#1625) 2023-05-02 01:21:54 -03:00
oobabooga
c83210c460 Move the rstrips 2023-04-26 17:17:22 -03:00
oobabooga
1d8b8222e9 Revert #1579, apply the proper fix
Apparently models dislike trailing spaces.
2023-04-26 16:47:50 -03:00
oobabooga
9c2e7c0fab Fix path on models.py 2023-04-26 03:29:09 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode 2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7
Fix bug in models.py 2023-04-26 01:55:40 -03:00
oobabooga
f642135517 Make universal tokenizer, xformers, sdp-attention apply to monkey patch 2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14 Load more than one LoRA with --lora, fix a bug 2023-04-25 22:58:48 -03:00