oobabooga
c075969875
Add instructions
2023-09-22 13:10:03 -07:00
oobabooga
8ab3eca9ec
Add a warning for outdated installations
2023-09-22 09:35:19 -07:00
oobabooga
95976a9d4f
Fix a bug while deleting characters
2023-09-22 06:02:34 -07:00
oobabooga
d5330406fa
Add a rename menu for chat histories
2023-09-21 19:16:51 -07:00
oobabooga
00ab450c13
Multiple histories for each character ( #4022 )
2023-09-21 17:19:32 -03:00
oobabooga
029da9563f
Avoid redundant function call in llamacpp_hf
2023-09-19 14:14:40 -07:00
oobabooga
869f47fff9
Lint
2023-09-19 13:51:57 -07:00
oobabooga
13ac55fa18
Reorder some functions
2023-09-19 13:51:57 -07:00
oobabooga
03dc69edc5
ExLlama_HF (v1 and v2) prefix matching
2023-09-19 13:12:19 -07:00
oobabooga
5075087461
Fix command-line arguments being ignored
2023-09-19 13:11:46 -07:00
oobabooga
ff5d3d2d09
Add missing import
2023-09-18 16:26:54 -07:00
oobabooga
605ec3c9f2
Add a warning about ExLlamaV2 without flash-attn
2023-09-18 12:26:35 -07:00
oobabooga
f0ef971edb
Remove obsolete warning
2023-09-18 12:25:10 -07:00
oobabooga
745807dc03
Faster llamacpp_HF prefix matching
2023-09-18 11:02:45 -07:00
BadisG
893a72a1c5
Stop generation immediately when using "Maximum tokens/second" ( #3952 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-09-18 14:27:06 -03:00
Cebtenzzre
8466cf229a
llama.cpp: fix ban_eos_token ( #3987 )
2023-09-18 12:15:02 -03:00
oobabooga
0ede2965d5
Remove an error message
2023-09-17 18:46:08 -07:00
missionfloyd
cc8eda298a
Move hover menu shortcuts to right side ( #3951 )
2023-09-17 22:33:00 -03:00
oobabooga
280cca9f66
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-09-17 18:01:27 -07:00
oobabooga
b062d50c45
Remove exllama import that causes problems
2023-09-17 18:00:32 -07:00
James Braza
fee38e0601
Simplified ExLlama cloning instructions and failure message ( #3972 )
2023-09-17 19:26:05 -03:00
Lu Guanghua
9858acee7b
Fix unexpected extension loading after gradio restart ( #3965 )
2023-09-17 17:35:43 -03:00
oobabooga
d9b0f2c9c3
Fix llama.cpp double decoding
2023-09-17 13:07:48 -07:00
oobabooga
d71465708c
llamacpp_HF prefix matching
2023-09-17 11:51:01 -07:00
oobabooga
37e2980e05
Recommend mul_mat_q for llama.cpp
2023-09-17 08:27:11 -07:00
oobabooga
a069f3904c
Undo part of ad8ac545a5
2023-09-17 08:12:23 -07:00
oobabooga
ad8ac545a5
Tokenization improvements
2023-09-17 07:02:00 -07:00
saltacc
cd08eb0753
token probs for non-HF loaders ( #3957 )
2023-09-17 10:42:32 -03:00
kalomaze
7c9664ed35
Allow full model URL to be used for download ( #3919 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-09-16 10:06:13 -03:00
saltacc
ed6b6411fb
Fix exllama tokenizers ( #3954 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-09-16 09:42:38 -03:00
missionfloyd
2ad6ca8874
Add back chat buttons with --chat-buttons ( #3947 )
2023-09-16 00:39:37 -03:00
oobabooga
ef04138bc0
Improve the UI tokenizer
2023-09-15 19:30:44 -07:00
oobabooga
c3e4c9fdc2
Add a simple tokenizer to the UI
2023-09-15 19:09:03 -07:00
saltacc
f01b9aa71f
Add customizable ban tokens ( #3899 )
2023-09-15 18:27:27 -03:00
oobabooga
5b117590ad
Add some scrollbars to Parameters tab
2023-09-15 09:17:37 -07:00
Johan
fdcee0c215
Allow custom tokenizer for llamacpp_HF loader ( #3941 )
2023-09-15 12:38:38 -03:00
oobabooga
fd7257c7f8
Prevent code blocks from flickering while streaming
2023-09-15 07:46:26 -07:00
oobabooga
a3ecf3bb65
Add cai-chat-square chat style
2023-09-14 16:15:08 -07:00
oobabooga
3d1c0f173d
User config precedence over GGUF metadata
2023-09-14 12:15:52 -07:00
oobabooga
94dc64f870
Add a border
2023-09-14 07:20:36 -07:00
oobabooga
70aafa34dc
Fix blockquote markdown rendering
2023-09-14 05:57:04 -07:00
oobabooga
644a9b8765
Change the chat generate button
2023-09-14 05:16:44 -07:00
oobabooga
ecc90f9f62
Continue on Alt + Enter
2023-09-14 03:59:12 -07:00
oobabooga
1ce3c93600
Allow "Your name" field to be saved
2023-09-14 03:44:35 -07:00
oobabooga
27dbcc59f5
Make the chat input expand upwards ( #3920 )
2023-09-14 07:06:42 -03:00
oobabooga
6b6af74e14
Keyboard shortcuts without conflicts (hopefully)
2023-09-14 02:33:52 -07:00
oobabooga
fc11d1eff0
Add chat keyboard shortcuts
2023-09-13 19:22:40 -07:00
oobabooga
9f199c7a4c
Use Noto Sans font
Copied from 6c8bd06308/public/webfonts/NotoSans
2023-09-13 13:48:05 -07:00
oobabooga
8ce94b735c
Show progress on impersonate
2023-09-13 11:22:53 -07:00
oobabooga
7cd437e05c
Properly close the hover menu on mobile
2023-09-13 11:10:46 -07:00
oobabooga
1b47b5c676
Change the Generate/Stop buttons
2023-09-13 09:25:26 -07:00
oobabooga
8ea28cbfe0
Reorder chat buttons
2023-09-13 08:49:11 -07:00
oobabooga
5e3d2f7d44
Reorganize chat buttons ( #3892 )
2023-09-13 02:36:12 -03:00
Panchovix
34dc7306b8
Fix NTK (alpha) and RoPE scaling for exllamav2 and exllamav2_HF ( #3897 )
2023-09-13 02:35:09 -03:00
oobabooga
b7adf290fc
Fix ExLlama-v2 path issue
2023-09-12 17:42:22 -07:00
oobabooga
b190676893
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-09-12 15:06:33 -07:00
oobabooga
2f935547c8
Minor changes
2023-09-12 15:05:21 -07:00
oobabooga
18e6b275f3
Add alpha_value/compress_pos_emb to ExLlama-v2
2023-09-12 15:02:47 -07:00
Gennadij
460c40d8ab
Read more GGUF metadata (scale_linear and freq_base) ( #3877 )
2023-09-12 17:02:42 -03:00
oobabooga
16e1696071
Minor qol change
2023-09-12 10:44:26 -07:00
oobabooga
c2a309f56e
Add ExLlamaV2 and ExLlamav2_HF loaders ( #3881 )
2023-09-12 14:33:07 -03:00
oobabooga
df123a20fc
Prevent extra keys from being saved to settings.yaml
2023-09-11 20:13:10 -07:00
oobabooga
dae428a967
Revamp cai-chat theme, make it default
2023-09-11 19:30:40 -07:00
oobabooga
78811dd89a
Fix GGUF metadata reading for falcon
2023-09-11 15:49:50 -07:00
oobabooga
9331ab4798
Read GGUF metadata ( #3873 )
2023-09-11 18:49:30 -03:00
oobabooga
df52dab67b
Lint
2023-09-11 07:57:38 -07:00
oobabooga
ed86878f02
Remove GGML support
2023-09-11 07:44:00 -07:00
John Smith
cc7b7ba153
fix lora training with alpaca_lora_4bit ( #3853 )
2023-09-11 01:22:20 -03:00
Forkoz
15e9b8c915
Exllama new rope settings ( #3852 )
2023-09-11 01:14:36 -03:00
oobabooga
4affa08821
Do not impose instruct mode while loading models
2023-09-02 11:31:33 -07:00
oobabooga
47e490c7b4
Set use_cache=True by default for all models
2023-08-30 13:26:27 -07:00
missionfloyd
787219267c
Allow downloading single file from UI ( #3737 )
2023-08-29 23:32:36 -03:00
oobabooga
cec8db52e5
Add max_tokens_second param ( #3533 )
2023-08-29 17:44:31 -03:00
oobabooga
2b58a89f6a
Clear instruction template before loading new one
2023-08-29 13:11:32 -07:00
oobabooga
36864cb3e8
Use Alpaca as the default instruction template
2023-08-29 13:06:25 -07:00
oobabooga
9a202f7fb2
Prevent <ul> lists from flickering during streaming
2023-08-28 20:45:07 -07:00
oobabooga
439dd0faab
Fix stopping strings in the chat API
2023-08-28 19:40:11 -07:00
oobabooga
c75f98a6d6
Autoscroll Notebook/Default textareas during streaming
2023-08-28 18:22:03 -07:00
oobabooga
558e918fd6
Add a typing dots (...) animation to chat tab
2023-08-28 13:50:36 -07:00
oobabooga
57e9ded00c
Make it possible to scroll during streaming ( #3721 )
2023-08-28 16:03:20 -03:00
Cebtenzzre
2f5d769a8d
accept floating-point alpha value on the command line ( #3712 )
2023-08-27 18:54:43 -03:00
oobabooga
b2296dcda0
Ctrl+S to show/hide chat controls
2023-08-27 13:14:33 -07:00
Ravindra Marella
e4c3e1bdd2
Fix ctransformers model unload ( #3711 )
Add missing comma in model types list
Fixes marella/ctransformers#111
2023-08-27 10:53:48 -03:00
oobabooga
0c9e818bb8
Update truncation length based on max_seq_len/n_ctx
2023-08-26 23:10:45 -07:00
oobabooga
3361728da1
Change some comments
2023-08-26 22:24:44 -07:00
oobabooga
8aeae3b3f4
Fix llamacpp_HF loading
2023-08-26 22:15:06 -07:00
oobabooga
7f5370a272
Minor fixes/cosmetics
2023-08-26 22:11:07 -07:00
jllllll
4d61a7d9da
Account for deprecated GGML parameters
2023-08-26 14:07:46 -05:00
jllllll
4a999e3bcd
Use separate llama-cpp-python packages for GGML support
2023-08-26 10:40:08 -05:00
oobabooga
83640d6f43
Replace ggml occurrences with gguf
2023-08-26 01:06:59 -07:00
jllllll
db42b365c9
Fix ctransformers threads auto-detection ( #3688 )
2023-08-25 14:37:02 -03:00
cal066
960980247f
ctransformers: gguf support ( #3685 )
2023-08-25 11:33:04 -03:00
oobabooga
21058c37f7
Add missing file
2023-08-25 07:10:26 -07:00
oobabooga
f4f04c8c32
Fix a typo
2023-08-25 07:08:38 -07:00
oobabooga
5c7d8bfdfd
Detect CodeLlama settings
2023-08-25 07:06:57 -07:00
oobabooga
52ab2a6b9e
Add rope_freq_base parameter for CodeLlama
2023-08-25 06:55:15 -07:00
oobabooga
feecd8190f
Unescape inline code blocks
2023-08-24 21:01:09 -07:00
oobabooga
3320accfdc
Add CFG to llamacpp_HF (second attempt) ( #3678 )
2023-08-24 20:32:21 -03:00
oobabooga
d6934bc7bc
Implement CFG for ExLlama_HF ( #3666 )
2023-08-24 16:27:36 -03:00
oobabooga
87442c6d18
Fix Notebook Logits tab
2023-08-22 21:00:12 -07:00
oobabooga
c0b119c3a3
Improve logit viewer format
2023-08-22 20:35:12 -07:00
oobabooga
8545052c9d
Add the option to use samplers in the logit viewer
2023-08-22 20:18:16 -07:00
oobabooga
25e5eaa6a6
Remove outdated training warning
2023-08-22 13:16:44 -07:00
oobabooga
335c49cc7e
Bump peft and transformers
2023-08-22 13:14:59 -07:00
cal066
e042bf8624
ctransformers: add mlock and no-mmap options ( #3649 )
2023-08-22 16:51:34 -03:00
oobabooga
6cca8b8028
Only update notebook token counter on input
For performance during streaming
2023-08-21 05:39:55 -07:00
oobabooga
2cb07065ec
Fix an escaping bug
2023-08-20 21:50:42 -07:00
oobabooga
a74dd9003f
Fix HTML escaping for perplexity_colors extension
2023-08-20 21:40:22 -07:00
oobabooga
57036abc76
Add "send to default/notebook" buttons to chat tab
2023-08-20 19:54:59 -07:00
oobabooga
429cacd715
Add a token counter similar to automatic1111
It can now be found in the Default and Notebook tabs
2023-08-20 19:37:33 -07:00
oobabooga
120fb86c6a
Add a simple logit viewer ( #3636 )
2023-08-20 20:49:21 -03:00
oobabooga
ef17da70af
Fix ExLlama truncation
2023-08-20 08:53:26 -07:00
oobabooga
ee964bcce9
Update a comment about RoPE scaling
2023-08-20 07:01:43 -07:00
missionfloyd
1cae784761
Unescape last message ( #3623 )
2023-08-19 09:29:08 -03:00
Cebtenzzre
942ad6067d
llama.cpp: make Stop button work with streaming disabled ( #3620 )
2023-08-19 00:17:27 -03:00
oobabooga
f6724a1a01
Return the visible history with "Copy last reply"
2023-08-18 13:04:45 -07:00
oobabooga
b96fd22a81
Refactor the training tab ( #3619 )
2023-08-18 16:58:38 -03:00
oobabooga
c4733000d7
Return the visible history with "Remove last"
2023-08-18 09:25:51 -07:00
oobabooga
7cba000421
Bump llama-cpp-python, +tensor_split by @shouyiwang, +mul_mat_q ( #3610 )
2023-08-18 12:03:34 -03:00
oobabooga
bdb6eb5734
Restyle the chat input box + several CSS improvements
- Remove extra spacing below the last chat message
- Change the background color of code blocks in dark mode
- Remove border radius from selected header bar elements
- Make the chat scrollbar more discrete
2023-08-17 11:10:38 -07:00
oobabooga
cebe07f29c
Unescape HTML inside code blocks
2023-08-16 21:08:26 -07:00
oobabooga
a4e903e932
Escape HTML in chat messages
2023-08-16 09:25:52 -07:00
oobabooga
73d9befb65
Make "Show controls" customizable through settings.yaml
2023-08-16 07:04:18 -07:00
oobabooga
2a29208224
Add a "Show controls" button to chat UI ( #3590 )
2023-08-16 02:39:58 -03:00
cal066
991bb57e43
ctransformers: Fix up model_type name consistency ( #3567 )
2023-08-14 15:17:24 -03:00
oobabooga
ccfc02a28d
Add the --disable_exllama option for AutoGPTQ ( #3545 from clefever/disable-exllama)
2023-08-14 15:15:55 -03:00
oobabooga
7e57b35b5e
Clean up old code
2023-08-14 10:10:39 -07:00
oobabooga
4d067e9b52
Add back a variable to keep old extensions working
2023-08-14 09:39:06 -07:00
oobabooga
d8a82d34ed
Improve a warning
2023-08-14 08:46:05 -07:00
oobabooga
3e0a9f9cdb
Refresh the character dropdown when saving/deleting a character
2023-08-14 08:23:41 -07:00
oobabooga
890b4abdad
Fix session saving
2023-08-14 07:55:52 -07:00
oobabooga
619cb4e78b
Add "save defaults to settings.yaml" button ( #3574 )
2023-08-14 11:46:07 -03:00
oobabooga
a95e6f02cb
Add a placeholder for custom stopping strings
2023-08-13 21:17:20 -07:00
oobabooga
ff9b5861c8
Fix impersonate when some text is present ( closes #3564 )
2023-08-13 21:10:47 -07:00
oobabooga
cc7e6ef645
Fix a CSS conflict
2023-08-13 19:24:09 -07:00
Eve
66c04c304d
Various ctransformers fixes ( #3556 )
---------
Co-authored-by: cal066 <cal066@users.noreply.github.com>
2023-08-13 23:09:03 -03:00
oobabooga
4a05aa92cb
Add "send to" buttons for instruction templates
- Remove instruction templates from prompt dropdowns (default/notebook)
- Add 3 buttons to Parameters > Instruction template as a replacement
- Increase the number of lines of 'negative prompt' field to 3, and add a scrollbar
- When uploading a character, switch to the Character tab
- When uploading chat history, switch to the Chat tab
2023-08-13 18:35:45 -07:00
oobabooga
f6db2c78d1
Fix ctransformers seed
2023-08-13 05:48:53 -07:00
oobabooga
a1a9ec895d
Unify the 3 interface modes ( #3554 )
2023-08-13 01:12:15 -03:00
cal066
bf70c19603
ctransformers: move thread and seed parameters ( #3543 )
2023-08-13 00:04:03 -03:00
Chris Lefever
0230fa4e9c
Add the --disable_exllama option for AutoGPTQ
2023-08-12 02:26:58 -04:00
oobabooga
0e05818266
Style changes
2023-08-11 16:35:57 -07:00
oobabooga
2f918ccf7c
Remove unused parameter
2023-08-11 11:15:22 -07:00
oobabooga
28c8df337b
Add repetition_penalty_range to ctransformers
2023-08-11 11:04:19 -07:00
cal066
7a4fcee069
Add ctransformers support ( #3313 )
---------
Co-authored-by: cal066 <cal066@users.noreply.github.com>
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
2023-08-11 14:41:33 -03:00
oobabooga
8dbaa20ca8
Don't replace last reply with an empty message
2023-08-10 13:14:48 -07:00
oobabooga
0789554f65
Allow --lora to use an absolute path
2023-08-10 10:03:12 -07:00
oobabooga
3929971b66
Don't show oobabooga_llama-tokenizer in the model dropdown
2023-08-10 10:02:48 -07:00
oobabooga
c7f52bbdc1
Revert "Remove GPTQ-for-LLaMa monkey patch support"
This reverts commit e3d3565b2a.
2023-08-10 08:39:41 -07:00
jllllll
d6765bebc4
Update installation documentation
2023-08-10 00:53:48 -05:00
jllllll
d7ee4c2386
Remove unused import
2023-08-10 00:10:14 -05:00
jllllll
e3d3565b2a
Remove GPTQ-for-LLaMa monkey patch support
AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
2023-08-09 23:59:04 -05:00
jllllll
bee73cedbd
Streamline GPTQ-for-LLaMa support
2023-08-09 23:42:34 -05:00
oobabooga
6c6a52aaad
Change the filenames for caches and histories
2023-08-09 07:47:19 -07:00
oobabooga
d8fb506aff
Add RoPE scaling support for transformers (including dynamic NTK)
https://github.com/huggingface/transformers/pull/24653
2023-08-08 21:25:48 -07:00
Friedemann Lipphardt
901b028d55
Add option for named Cloudflare tunnels ( #3364 )
2023-08-08 22:20:27 -03:00
oobabooga
bf08b16b32
Fix disappearing profile picture bug
2023-08-08 14:09:01 -07:00
Gennadij
0e78f3b4d4
Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa ( #3494 )
2023-08-08 00:31:11 -03:00
oobabooga
37fb719452
Increase the Context/Greeting boxes sizes
2023-08-08 00:09:00 -03:00
oobabooga
584dd33424
Fix missing example_dialogue when uploading characters
2023-08-07 23:44:59 -03:00
oobabooga
412f6ff9d3
Change alpha_value maximum and step
2023-08-07 06:08:51 -07:00
oobabooga
a373c96d59
Fix a bug in modules/shared.py
2023-08-06 20:36:35 -07:00
oobabooga
3d48933f27
Remove ancient deprecation warnings
2023-08-06 18:58:59 -07:00
oobabooga
c237ce607e
Move characters/instruction-following to instruction-templates
2023-08-06 17:50:32 -07:00
oobabooga
65aa11890f
Refactor everything ( #3481 )
2023-08-06 21:49:27 -03:00
oobabooga
d4b851bdc8
Credit turboderp
2023-08-06 13:43:15 -07:00
oobabooga
0af10ab49b
Add Classifier Free Guidance (CFG) for Transformers/ExLlama ( #3325 )
2023-08-06 17:22:48 -03:00
missionfloyd
5134878344
Fix chat message order ( #3461 )
2023-08-05 13:53:54 -03:00
jllllll
44f31731af
Create logs dir if missing when saving history ( #3462 )
2023-08-05 13:47:16 -03:00
Forkoz
9dcb37e8d4
Fix: Mirostat fails on models split across multiple GPUs
2023-08-05 13:45:47 -03:00
oobabooga
8df3cdfd51
Add SSL certificate support ( #3453 )
2023-08-04 13:57:31 -03:00
missionfloyd
2336b75d92
Remove unnecessary chat.js ( #3445 )
2023-08-04 01:58:37 -03:00
oobabooga
4b3384e353
Handle unfinished lists during markdown streaming
2023-08-03 17:15:18 -07:00
Pete
f4005164f4
Fix llama.cpp truncation ( #3400 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-03 20:01:15 -03:00
oobabooga
87dab03dc0
Add the --cpu option for llama.cpp to prevent CUDA from being used ( #3432 )
2023-08-03 11:00:36 -03:00
oobabooga
3e70bce576
Properly format exceptions in the UI
2023-08-03 06:57:21 -07:00
oobabooga
32c564509e
Fix loading session in chat mode
2023-08-02 21:13:16 -07:00
oobabooga
0e8f9354b5
Add direct download for session/chat history JSONs
2023-08-02 19:43:39 -07:00
oobabooga
32a2bbee4a
Implement auto_max_new_tokens for ExLlama
2023-08-02 11:03:56 -07:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter ( #3419 )
2023-08-02 14:52:20 -03:00
Pete
6afc1a193b
Add a scrollbar to notebook/default, improve chat scrollbar style ( #3403 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-02 12:02:36 -03:00
oobabooga
b53ed70a70
Make llamacpp_HF 6x faster
2023-08-01 13:18:20 -07:00
oobabooga
8d46a8c50a
Change the default chat style and the default preset
2023-08-01 09:35:17 -07:00
oobabooga
959feba602
When saving model settings, only save the settings for the current loader
2023-08-01 06:10:09 -07:00
oobabooga
f094330df0
When saving a preset, only save params that differ from the defaults
2023-07-31 19:13:29 -07:00
oobabooga
84297d05c4
Add a "Filter by loader" menu to the Parameters tab
2023-07-31 19:09:02 -07:00
oobabooga
7de7b3d495
Fix newlines in exported character yamls
2023-07-31 10:46:02 -07:00
oobabooga
5ca37765d3
Only replace {{user}} and {{char}} at generation time
2023-07-30 11:42:30 -07:00
oobabooga
6e16af34fd
Save uploaded characters as yaml
Also allow yaml characters to be uploaded directly
2023-07-30 11:25:38 -07:00
oobabooga
b31321c779
Define visible_text before applying chat_input extensions
2023-07-26 07:27:14 -07:00
oobabooga
b17893a58f
Revert "Add tensor split support for llama.cpp ( #3171 )"
This reverts commit 031fe7225e.
2023-07-26 07:06:01 -07:00
oobabooga
28779cd959
Use dark theme by default
2023-07-25 20:11:57 -07:00
oobabooga
c2e0d46616
Add credits
2023-07-25 15:49:04 -07:00
oobabooga
77d2e9f060
Remove flexgen 2
2023-07-25 15:18:25 -07:00
oobabooga
75c2dd38cf
Remove flexgen support
2023-07-25 15:15:29 -07:00
Foxtr0t1337
85b3a26e25
Ignore values which are not strings in training.py ( #3287 )
2023-07-25 19:00:25 -03:00
Shouyi
031fe7225e
Add tensor split support for llama.cpp ( #3171 )
2023-07-25 18:59:26 -03:00
Eve
f653546484
README updates and improvements ( #3198 )
2023-07-25 18:58:13 -03:00
oobabooga
ef8637e32d
Add extension example, replace input_hijack with chat_input_modifier ( #3307 )
2023-07-25 18:49:56 -03:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support ( #3285 )
2023-07-24 16:37:03 -03:00
jllllll
1141987a0d
Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading ( #3225 )
2023-07-24 11:25:36 -03:00
Ikko Eltociear Ashimine
b2d5433409
Fix typo in deepspeed_parameters.py ( #3222 )
configration -> configuration
2023-07-24 11:17:28 -03:00
oobabooga
4b19b74e6c
Add CUDA wheels for llama-cpp-python by jllllll
2023-07-19 19:33:43 -07:00
oobabooga
913e060348
Change the default preset to Divine Intellect
It seems to reduce hallucination while using instruction-tuned models.
2023-07-19 08:24:37 -07:00
randoentity
a69955377a
[GGML] Support for customizable RoPE ( #3083 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-17 22:32:37 -03:00
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported ( #3164 )
2023-07-17 21:27:18 -03:00
oobabooga
8c1c2e0fae
Increase max_new_tokens upper limit
2023-07-17 17:08:22 -07:00
oobabooga
b1a6ea68dd
Disable "autoload the model" by default
2023-07-17 07:40:56 -07:00
oobabooga
a199f21799
Optimize llamacpp_hf a bit
2023-07-16 20:49:48 -07:00
oobabooga
6a3edb0542
Clean up llamacpp_hf.py
2023-07-15 22:40:55 -07:00
oobabooga
27a84b4e04
Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader ( #3062 )
2023-07-16 02:21:13 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity ( #3138 )
2023-07-16 01:52:55 -03:00
oobabooga
b284f2407d
Make ExLlama_HF the new default for GPTQ
2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions ( #3029 )
2023-07-13 17:22:41 -03:00
oobabooga
e202190c4f
lint
2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training ( #3058 )
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fixed the tokenization process of a raw dataset and improved its efficiency ( #3035 )
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training ( #2624 )
2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training ( #1997 )
* Update utils.py
returns individual txt files and subdirectories to getdatasets to allow for training from a directory of text files
* Update training.py
minor tweak to training on raw datasets to detect if a directory is selected, and if so, to load in all the txt files in that directory for training
* Update put-trainer-datasets-here.txt
document
* Minor change
* Use pathlib, sort by natural keys
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection ( #1711 )
2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp ( #3076 )
2023-07-12 11:05:13 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names ( #3051 )
2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown ( #2977 )
2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent ( #3017 )
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ ( #2818 )
2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b
Make --model work with argument like models/folder_name
2023-07-08 10:22:54 -07:00
oobabooga
b6643e5039
Add decode functions to llama.cpp/exllama
2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551
Add truncation to exllama
2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37
Minor change to ui.py
2023-07-07 09:09:14 -07:00
oobabooga
de994331a4
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3
Block a Cloudflare request
2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits ( #3036 )
2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py ( #3020 )
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830
Remove universal llama tokenizer support
Instead replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003
2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0
Fix start_with
2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb
Fix loading some histories (thanks kaiokendev)
2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf ( #2955 )
2023-07-04 01:13:16 -03:00
oobabooga
7e8340b14d
Make greetings appear in --multi-user mode
2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support ( #2991 )
2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names ( #2988 )
2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error ( #2953 )
2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload ( #2951 )
2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info ( #2944 )
2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694
Update html_generator.py ( #2954 )
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
ardfork
3c076c3c80
Disable half2 for ExLlama when using HIP ( #2912 )
2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785
Some more character import tweaks. ( #2921 )
2023-06-29 14:56:25 -03:00
oobabooga
79db629665
Minor bug fix
2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers ( #2916 )
2023-06-29 13:40:13 -03:00
oobabooga
20740ab16e
Revert "Fix exllama_hf gibbersh above 2048 context, and works >5000 context. ( #2913 )"
...
This reverts commit 37a16d23a7
.
2023-06-28 18:10:34 -03:00
Panchovix
37a16d23a7
Fix exllama_hf gibberish above 2048 context; now works at >5000 context. ( #2913 )
2023-06-28 12:36:07 -03:00
FartyPants
ab1998146b
Training update - backup the existing adapter before training on top of it ( #2902 )
2023-06-27 18:24:04 -03:00
oobabooga
22d455b072
Add LoRA support to ExLlama_HF
2023-06-26 00:10:33 -03:00
oobabooga
c52290de50
ExLlama with long context ( #2875 )
2023-06-25 22:49:26 -03:00
oobabooga
9290c6236f
Keep ExLlama_HF if already selected
2023-06-25 19:06:28 -03:00
oobabooga
75fd763f99
Fix chat saving issue ( closes #2863 )
2023-06-25 18:14:57 -03:00
FartyPants
21c189112c
Several Training Enhancements ( #2868 )
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f31281a8de
Fix loading instruction templates containing literal '\n'
2023-06-25 02:13:26 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
oobabooga
365b672531
Minor change to prevent future bugs
2023-06-25 01:38:54 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama ( #2770 )
2023-06-24 20:24:17 -03:00
oobabooga
cec5fb0ef6
Failed attempt at evaluating exllama_hf perplexity
2023-06-24 12:02:25 -03:00
快乐的我531
e356f69b36
Make stop_everything work with non-streamed generation ( #2848 )
2023-06-24 11:19:16 -03:00
oobabooga
ec482f3dae
Apply input extensions after yielding *Is typing...*
2023-06-24 11:07:11 -03:00
oobabooga
3e80f2aceb
Apply the output extensions only once
Relevant for google translate, silero
2023-06-24 10:59:07 -03:00
missionfloyd
51a388fa34
Organize chat history/character import menu ( #2845 )
* Organize character import menu
* Move Chat history upload/download labels
2023-06-24 09:55:02 -03:00
oobabooga
8bb3bb39b3
Implement stopping string search in string space ( #2847 )
2023-06-24 09:43:00 -03:00
oobabooga
3ae9af01aa
Add --no_use_cuda_fp16 param for AutoGPTQ
2023-06-23 12:22:56 -03:00
Panchovix
5646690769
Fix some models not loading on exllama_hf ( #2835 )
2023-06-23 11:31:02 -03:00
oobabooga
383c50f05b
Replace old presets with the results of Preset Arena ( #2830 )
2023-06-23 01:48:29 -03:00
Panchovix
b4a38c24b7
Fix Multi-GPU not working on exllama_hf ( #2803 )
2023-06-22 16:05:25 -03:00
LarryVRH
580c1ee748
Implement a demo HF wrapper for exllama to utilize existing HF transformers decoding. ( #2777 )
2023-06-21 15:31:42 -03:00
EugeoSynthesisThirtyTwo
7625c6de89
fix usage of self in classmethod ( #2781 )
2023-06-20 16:18:42 -03:00
MikoAL
c40932eb39
Added Falcon LoRA training support ( #2684 )
I am 50% sure this will work
2023-06-20 01:03:44 -03:00
FartyPants
ce86f726e9
Added saving of training logs to training_log.json ( #2769 )
2023-06-20 00:47:36 -03:00
Cebtenzzre
59e7ecb198
llama.cpp: implement ban_eos_token via logits_processor ( #2765 )
2023-06-19 21:31:19 -03:00
oobabooga
eb30f4441f
Add ExLlama+LoRA support ( #2756 )
2023-06-19 12:31:24 -03:00
oobabooga
5f418f6171
Fix a memory leak (credits for the fix: Ph0rk0z)
2023-06-19 01:19:28 -03:00
ThisIsPIRI
def3b69002
Fix loading condition for universal llama tokenizer ( #2753 )
2023-06-18 18:14:06 -03:00
oobabooga
09c781b16f
Add modules/block_requests.py
This has become unnecessary, but it could be useful in the future
for other libraries.
2023-06-18 16:31:14 -03:00
Forkoz
3cae1221d4
Update exllama.py - Respect model dir parameter ( #2744 )
2023-06-18 13:26:30 -03:00
oobabooga
c5641b65d3
Handle leading spaces properly in ExLlama
2023-06-17 19:35:12 -03:00
oobabooga
05a743d6ad
Make llama.cpp use tfs parameter
2023-06-17 19:08:25 -03:00
oobabooga
e19cbea719
Add a variable to modules/shared.py
2023-06-17 19:02:29 -03:00
oobabooga
cbd63eeeff
Fix repeated tokens with exllama
2023-06-17 19:02:08 -03:00
oobabooga
766c760cd7
Use gen_begin_reuse in exllama
2023-06-17 18:00:10 -03:00
oobabooga
b27f83c0e9
Make exllama stoppable
2023-06-16 22:03:23 -03:00
oobabooga
7f06d551a3
Fix streaming callback
2023-06-16 21:44:56 -03:00
oobabooga
5f392122fd
Add gpu_split param to ExLlama
Adapted from code created by Ph0rk0z. Thank you Ph0rk0z.
2023-06-16 20:49:36 -03:00
oobabooga
9f40032d32
Add ExLlama support ( #2444 )
2023-06-16 20:35:38 -03:00
oobabooga
dea43685b0
Add some clarifications
2023-06-16 19:10:53 -03:00
oobabooga
7ef6a50e84
Reorganize model loading UI completely ( #2720 )
2023-06-16 19:00:37 -03:00
Tom Jobbins
646b0c889f
AutoGPTQ: Add UI and command line support for disabling fused attention and fused MLP ( #2648 )
2023-06-15 23:59:54 -03:00
oobabooga
2b9a6b9259
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-06-14 18:45:24 -03:00
oobabooga
4d508cbe58
Add some checks to AutoGPTQ loader
2023-06-14 18:44:43 -03:00
FartyPants
56c19e623c
Add LORA name instead of "default" in PeftModel ( #2689 )
2023-06-14 18:29:42 -03:00
oobabooga
474dc7355a
Allow API requests to use parameter presets
2023-06-14 11:32:20 -03:00
oobabooga
e471919e6d
Make llava/minigpt-4 work with AutoGPTQ
2023-06-11 17:56:01 -03:00
oobabooga
f4defde752
Add a menu for installing extensions
2023-06-11 17:11:06 -03:00
oobabooga
ac122832f7
Make dropdown menus more similar to automatic1111
2023-06-11 14:20:16 -03:00
oobabooga
6133675e0f
Add menus for saving presets/characters/instruction templates/prompts ( #2621 )
2023-06-11 12:19:18 -03:00
brandonj60
b04e18d10c
Add Mirostat v2 sampling to transformer models ( #2571 )
2023-06-09 21:26:31 -03:00
oobabooga
6015616338
Style changes
2023-06-06 13:06:05 -03:00
oobabooga
f040073ef1
Handle the case of older autogptq install
2023-06-06 13:05:05 -03:00
oobabooga
bc58dc40bd
Fix a minor bug
2023-06-06 12:57:13 -03:00
oobabooga
00b94847da
Remove softprompt support
2023-06-06 07:42:23 -03:00
oobabooga
0aebc838a0
Don't save the history for 'None' character
2023-06-06 07:21:07 -03:00
oobabooga
9f215523e2
Remove some unused imports
2023-06-06 07:05:46 -03:00
oobabooga
0f0108ce34
Never load the history for default character
2023-06-06 07:00:11 -03:00
oobabooga
11f38b5c2b
Add AutoGPTQ LoRA support
2023-06-05 23:32:57 -03:00
oobabooga
3a5cfe96f0
Increase chat_prompt_size_max
2023-06-05 17:37:37 -03:00
oobabooga
f276d88546
Use AutoGPTQ by default for GPTQ models
2023-06-05 15:41:48 -03:00
oobabooga
9b0e95abeb
Fix "regenerate" when "Start reply with" is set
2023-06-05 11:56:03 -03:00
oobabooga
19f78684e6
Add "Start reply with" feature to chat mode
2023-06-02 13:58:08 -03:00
GralchemOz
f7b07c4705
Fix the missing Chinese character bug ( #2497 )
2023-06-02 13:45:41 -03:00
oobabooga
2f6631195a
Add desc_act checkbox to the UI
2023-06-02 01:45:46 -03:00
LaaZa
9c066601f5
Extend AutoGPTQ support for any GPTQ model ( #1668 )
2023-06-02 01:33:55 -03:00
oobabooga
a83f9aa65b
Update shared.py
2023-06-01 12:08:39 -03:00
oobabooga
b6c407f51d
Don't stream at more than 24 fps
This is a performance optimization
2023-05-31 23:41:42 -03:00
Forkoz
9ab90d8b60
Fix warning for qlora ( #2438 )
2023-05-30 11:09:18 -03:00
oobabooga
3578dd3611
Change a warning message
2023-05-29 22:40:54 -03:00
oobabooga
3a6e194bc7
Change a warning message
2023-05-29 22:39:23 -03:00
Luis Lopez
9e7204bef4
Add tail-free and top-a sampling ( #2357 )
2023-05-29 21:40:01 -03:00
oobabooga
1394f44e14
Add triton checkbox for AutoGPTQ
2023-05-29 15:32:45 -03:00
oobabooga
f34d20922c
Minor fix
2023-05-29 13:31:17 -03:00
oobabooga
983eef1e29
Attempt at evaluating falcon perplexity (failed)
2023-05-29 13:28:25 -03:00
Honkware
204731952a
Falcon support (trust-remote-code and autogptq checkboxes) ( #2367 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-29 10:20:18 -03:00
Forkoz
60ae80cf28
Fix hang in tokenizer for AutoGPTQ llama models. ( #2399 )
2023-05-28 23:10:10 -03:00
oobabooga
2f811b1bdf
Change a warning message
2023-05-28 22:48:20 -03:00
oobabooga
9ee1e37121
Fix return message when no model is loaded
2023-05-28 22:46:32 -03:00
oobabooga
00ebea0b2a
Use YAML for presets and settings
2023-05-28 22:34:12 -03:00
oobabooga
acfd876f29
Some qol changes to "Perplexity evaluation"
2023-05-25 15:06:22 -03:00
oobabooga
8efdc01ffb
Better default for compute_dtype
2023-05-25 15:05:53 -03:00
oobabooga
37d4ad012b
Add a button for rendering markdown for any model
2023-05-25 11:59:27 -03:00
DGdev91
cf088566f8
Make llama.cpp read prompt size and seed from settings ( #2299 )
2023-05-25 10:29:31 -03:00
oobabooga
361451ba60
Add --load-in-4bit parameter ( #2320 )
2023-05-25 01:14:13 -03:00
oobabooga
63ce5f9c28
Add back a missing bos token
2023-05-24 13:54:36 -03:00
Alex "mcmonkey" Goodwin
3cd7c5bdd0
LoRA Trainer: train_only_after option to control which part of your input to train on ( #2315 )
2023-05-24 12:43:22 -03:00
flurb18
d37a28730d
Beginning of multi-user support ( #2262 )
Adds a lock to generate_reply
2023-05-24 09:38:20 -03:00
Gabriel Terrien
7aed53559a
Support of the --gradio-auth flag ( #2283 )
2023-05-23 20:39:26 -03:00
oobabooga
fb6a00f4e5
Small AutoGPTQ fix
2023-05-23 15:20:01 -03:00
oobabooga
cd3618d7fb
Add support for RWKV in Hugging Face format
2023-05-23 02:07:28 -03:00
oobabooga
75adc110d4
Fix "perplexity evaluation" progress messages
2023-05-23 01:54:52 -03:00
oobabooga
4d94a111d4
memoize load_character to speed up the chat API
2023-05-23 00:50:58 -03:00
Gabriel Terrien
0f51b64bb3
Add a "dark_theme" option to settings.json ( #2288 )
2023-05-22 19:45:11 -03:00
oobabooga
c0fd7f3257
Add mirostat parameters for llama.cpp ( #2287 )
2023-05-22 19:37:24 -03:00
oobabooga
d63ef59a0f
Apply LLaMA-Precise preset to Vicuna by default
2023-05-21 23:00:42 -03:00
oobabooga
dcc3e54005
Various "impersonate" fixes
2023-05-21 22:54:28 -03:00
oobabooga
e116d31180
Prevent unwanted log messages from modules
2023-05-21 22:42:34 -03:00
oobabooga
fb91406e93
Fix generation_attempts continuing after an empty reply
2023-05-21 22:14:50 -03:00
oobabooga
e18534fe12
Fix "continue" in chat-instruct mode
2023-05-21 22:05:59 -03:00
oobabooga
8ac3636966
Add epsilon_cutoff/eta_cutoff parameters ( #2258 )
2023-05-21 15:11:57 -03:00
oobabooga
1e5821bd9e
Fix silero tts autoplay (attempt #2)
2023-05-21 13:25:11 -03:00
oobabooga
a5d5bb9390
Fix silero tts autoplay
2023-05-21 12:11:59 -03:00
oobabooga
05593a7834
Minor bug fix
2023-05-20 23:22:36 -03:00
Matthew McAllister
ab6acddcc5
Add Save/Delete character buttons ( #1870 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-20 21:48:45 -03:00
oobabooga
c5af549d4b
Add chat API ( #2233 )
2023-05-20 18:42:17 -03:00
Konstantin Gukov
1b52bddfcc
Mitigate UnboundLocalError ( #2136 )
2023-05-19 14:46:18 -03:00
Alex "mcmonkey" Goodwin
50c70e28f0
Lora Trainer improvements, part 6 - slightly better raw text inputs ( #2108 )
2023-05-19 12:58:54 -03:00
oobabooga
9d5025f531
Improve error handling while loading GPTQ models
2023-05-19 11:20:08 -03:00
oobabooga
b667ffa51d
Simplify GPTQ_loader.py
2023-05-17 16:22:56 -03:00
oobabooga
ef10ffc6b4
Add various checks to model loading functions
2023-05-17 16:14:54 -03:00
oobabooga
abd361b3a0
Minor change
2023-05-17 11:33:43 -03:00
oobabooga
21ecc3701e
Avoid a name conflict
2023-05-17 11:23:13 -03:00
oobabooga
fb91c07191
Minor bug fix
2023-05-17 11:16:37 -03:00
oobabooga
1a8151a2b6
Add AutoGPTQ support (basic) ( #2132 )
2023-05-17 11:12:12 -03:00
Alex "mcmonkey" Goodwin
1f50dbe352
Experimental jank multiGPU inference that's 2x faster than native somehow ( #2100 )
2023-05-17 10:41:09 -03:00
oobabooga
ce21804ec7
Allow extensions to define a new tab
2023-05-17 01:31:56 -03:00
oobabooga
a84f499718
Allow extensions to define custom CSS and JS
2023-05-17 00:30:54 -03:00
oobabooga
7584d46c29
Refactor models.py ( #2113 )
2023-05-16 19:52:22 -03:00
oobabooga
5cd6dd4287
Fix no-mmap bug
2023-05-16 17:35:49 -03:00
Forkoz
d205ec9706
Fix "Training fails when evaluation dataset is selected" ( #2099 )
Fixes https://github.com/oobabooga/text-generation-webui/issues/2078 from Googulator
2023-05-16 13:40:19 -03:00
atriantafy
26cf8c2545
add api port options ( #1990 )
2023-05-15 20:44:16 -03:00
Andrei
e657dd342d
Add in-memory cache support for llama.cpp ( #1936 )
2023-05-15 20:19:55 -03:00
Jakub Strnad
0227e738ed
Add settings UI for llama.cpp and fix reloading of llama.cpp models ( #2087 )
2023-05-15 19:51:23 -03:00
oobabooga
c07215cc08
Improve the default Assistant character
2023-05-15 19:39:08 -03:00
oobabooga
4e66f68115
Create get_max_memory_dict() function
2023-05-15 19:38:27 -03:00
AlphaAtlas
071f0776ad
Add llama.cpp GPU offload option ( #2060 )
2023-05-14 22:58:11 -03:00
oobabooga
3b886f9c9f
Add chat-instruct mode ( #2049 )
2023-05-14 10:43:55 -03:00
oobabooga
df37ba5256
Update impersonate_wrapper
2023-05-12 12:59:48 -03:00
oobabooga
e283ddc559
Change how spaces are handled in continue/generation attempts
2023-05-12 12:50:29 -03:00
oobabooga
2eeb27659d
Fix bug in --cpu-memory
2023-05-12 06:17:07 -03:00
oobabooga
5eaa914e1b
Fix settings.json being ignored because of config.yaml
2023-05-12 06:09:45 -03:00
oobabooga
71693161eb
Better handle spaces in LlamaTokenizer
2023-05-11 17:55:50 -03:00
oobabooga
7221d1389a
Fix a bug
2023-05-11 17:11:10 -03:00
oobabooga
0d36c18f5d
Always return only the new tokens in generation functions
2023-05-11 17:07:20 -03:00
oobabooga
394bb253db
Syntax improvement
2023-05-11 16:27:50 -03:00
oobabooga
f7dbddfff5
Add a variable for tts extensions to use
2023-05-11 16:12:46 -03:00
oobabooga
638c6a65a2
Refactor chat functions ( #2003 )
2023-05-11 15:37:04 -03:00
oobabooga
b7a589afc8
Improve the Metharme prompt
2023-05-10 16:09:32 -03:00
oobabooga
b01c4884cb
Better stopping strings for instruct mode
2023-05-10 14:22:38 -03:00
oobabooga
6a4783afc7
Add markdown table rendering
2023-05-10 13:41:23 -03:00
oobabooga
3316e33d14
Remove unused code
2023-05-10 11:59:59 -03:00
Alexander Dibrov
ec14d9b725
Fix custom_generate_chat_prompt ( #1965 )
2023-05-10 11:29:59 -03:00
oobabooga
32481ec4d6
Fix prompt order in the dropdown
2023-05-10 02:24:09 -03:00
oobabooga
dfd9ba3e90
Remove duplicate code
2023-05-10 02:07:22 -03:00
oobabooga
bdf1274b5d
Remove duplicate code
2023-05-10 01:34:04 -03:00
oobabooga
3913155c1f
Style improvements ( #1957 )
2023-05-09 22:49:39 -03:00
minipasila
334486f527
Added instruct-following template for Metharme ( #1679 )
2023-05-09 22:29:22 -03:00
Carl Kenner
814f754451
Support for MPT, INCITE, WizardLM, StableLM, Galactica, Vicuna, Guanaco, and Baize instruction following ( #1596 )
2023-05-09 20:37:31 -03:00
Wojtab
e9e75a9ec7
Generalize multimodality (llava/minigpt4 7b and 13b now supported) ( #1741 )
2023-05-09 20:18:02 -03:00
Wesley Pyburn
a2b25322f0
Fix trust_remote_code in wrong location ( #1953 )
2023-05-09 19:22:10 -03:00
LaaZa
218bd64bd1
Add the option to not automatically load the selected model ( #1762 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-09 15:52:35 -03:00
Maks
cf6caf1830
Make the RWKV model cache the RNN state between messages ( #1354 )
2023-05-09 11:12:53 -03:00
Kamil Szurant
641500dcb9
Use current input for Impersonate (continue impersonate feature) ( #1147 )
2023-05-09 02:37:42 -03:00
IJumpAround
020fe7b50b
Remove mutable defaults from function signature. ( #1663 )
2023-05-08 22:55:41 -03:00
Matthew McAllister
d78b04f0b4
Add error message when GPTQ-for-LLaMa import fails ( #1871 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-08 22:29:09 -03:00
oobabooga
68dcbc7ebd
Fix chat history handling in instruct mode
2023-05-08 16:41:21 -03:00
Clay Shoaf
79ac94cc2f
fixed LoRA loading issue ( #1865 )
2023-05-08 16:21:55 -03:00
oobabooga
b5260b24f1
Add support for custom chat styles ( #1917 )
2023-05-08 12:35:03 -03:00
EgrorBs
d3ea70f453
More trust_remote_code=trust_remote_code ( #1899 )
2023-05-07 23:48:20 -03:00
oobabooga
56a5969658
Improve the separation between instruct/chat modes ( #1896 )
2023-05-07 23:47:02 -03:00
oobabooga
9754d6a811
Fix an error message
2023-05-07 17:44:05 -03:00
camenduru
ba65a48ec8
trust_remote_code=shared.args.trust_remote_code ( #1891 )
2023-05-07 17:42:44 -03:00
oobabooga
6b67cb6611
Generalize superbooga to chat mode
2023-05-07 15:05:26 -03:00
oobabooga
56f6b7052a
Sort dropdowns numerically
2023-05-05 23:14:56 -03:00
oobabooga
8aafb1f796
Refactor text_generation.py, add support for custom generation functions ( #1817 )
2023-05-05 18:53:03 -03:00
oobabooga
c728f2b5f0
Better handle new line characters in code blocks
2023-05-05 11:22:36 -03:00
oobabooga
00e333d790
Add MOSS support
2023-05-04 23:20:34 -03:00
oobabooga
f673f4a4ca
Change --verbose behavior
2023-05-04 15:56:06 -03:00
oobabooga
97a6a50d98
Use oasst tokenizer instead of universal tokenizer
2023-05-04 15:55:39 -03:00
oobabooga
b6ff138084
Add --checkpoint argument for GPTQ
2023-05-04 15:17:20 -03:00
Mylo
bd531c2dc2
Make --trust-remote-code work for all models ( #1772 )
2023-05-04 02:01:28 -03:00
oobabooga
0e6d17304a
Clearer syntax for instruction-following characters
2023-05-03 22:50:39 -03:00
oobabooga
9c77ab4fc2
Improve some warnings
2023-05-03 22:06:46 -03:00
oobabooga
057b1b2978
Add credits
2023-05-03 21:49:55 -03:00
oobabooga
95d04d6a8d
Better warning messages
2023-05-03 21:43:17 -03:00
oobabooga
f54256e348
Rename no_mmap to no-mmap
2023-05-03 09:50:31 -03:00
practicaldreamer
e3968f7dd0
Fix Training Pad Token ( #1678 )
Was padding with the character '0' instead of token id 0 (<unk> in the case of llama)
2023-05-02 23:16:08 -03:00
Wojtab
80c2f25131
LLaVA: small fixes ( #1664 )
* change multimodal projector to the correct one
* remove reference to custom stopping strings from readme
* fix stopping strings if tokenizer extension adds/removes tokens
* add API example
* LLaVA 7B just dropped, add to readme that there is no support for it currently
2023-05-02 23:12:22 -03:00
oobabooga
4e09df4034
Only show extension in UI if it has a ui() function
2023-05-02 19:20:02 -03:00
Ahmed Said
fbcd32988e
added no_mmap & mlock parameters to llama.cpp and removed llamacpp_model_alternative ( #1649 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-05-02 18:25:28 -03:00
Carl Kenner
2f1a2846d1
Verbose should always print special tokens in input ( #1707 )
2023-05-02 01:24:56 -03:00
Alex "mcmonkey" Goodwin
0df0b2d0f9
optimize stopping strings processing ( #1625 )
2023-05-02 01:21:54 -03:00
oobabooga
c83210c460
Move the rstrips
2023-04-26 17:17:22 -03:00
oobabooga
1d8b8222e9
Revert #1579, apply the proper fix
Apparently models dislike trailing spaces.
2023-04-26 16:47:50 -03:00
oobabooga
9c2e7c0fab
Fix path on models.py
2023-04-26 03:29:09 -03:00
oobabooga
a777c058af
Precise prompts for instruct mode
2023-04-26 03:21:53 -03:00
oobabooga
a8409426d7
Fix bug in models.py
2023-04-26 01:55:40 -03:00
oobabooga
f642135517
Make universal tokenizer, xformers, sdp-attention apply to monkey patch
2023-04-25 23:18:11 -03:00
oobabooga
f39c99fa14
Load more than one LoRA with --lora, fix a bug
2023-04-25 22:58:48 -03:00
oobabooga
15940e762e
Fix missing initial space for LlamaTokenizer
2023-04-25 22:47:23 -03:00
Vincent Brouwers
92cdb4f22b
Seq2Seq support (including FLAN-T5) ( #1535 )
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-25 22:39:04 -03:00
Alex "mcmonkey" Goodwin
312cb7dda6
LoRA trainer improvements part 5 ( #1546 )
* full dynamic model type support on modern peft
* remove shuffle option
2023-04-25 21:27:30 -03:00