cal066 | bf70c19603 | ctransformers: move thread and seed parameters (#3543) | 2023-08-13 00:04:03 -03:00
oobabooga | 0e05818266 | Style changes | 2023-08-11 16:35:57 -07:00
oobabooga | 2f918ccf7c | Remove unused parameter | 2023-08-11 11:15:22 -07:00
oobabooga | 28c8df337b | Add repetition_penalty_range to ctransformers | 2023-08-11 11:04:19 -07:00
cal066 | 7a4fcee069 | Add ctransformers support (#3313) | 2023-08-11 14:41:33 -03:00
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
    Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
oobabooga | 8dbaa20ca8 | Don't replace last reply with an empty message | 2023-08-10 13:14:48 -07:00
oobabooga | 0789554f65 | Allow --lora to use an absolute path | 2023-08-10 10:03:12 -07:00
oobabooga | 3929971b66 | Don't show oobabooga_llama-tokenizer in the model dropdown | 2023-08-10 10:02:48 -07:00
oobabooga | c7f52bbdc1 | Revert "Remove GPTQ-for-LLaMa monkey patch support" | 2023-08-10 08:39:41 -07:00
    This reverts commit e3d3565b2a.
jllllll | d6765bebc4 | Update installation documentation | 2023-08-10 00:53:48 -05:00
jllllll | d7ee4c2386 | Remove unused import | 2023-08-10 00:10:14 -05:00
jllllll | e3d3565b2a | Remove GPTQ-for-LLaMa monkey patch support | 2023-08-09 23:59:04 -05:00
    AutoGPTQ will be the preferred GPTQ LoRA loader in the future.
jllllll | bee73cedbd | Streamline GPTQ-for-LLaMa support | 2023-08-09 23:42:34 -05:00
oobabooga | 6c6a52aaad | Change the filenames for caches and histories | 2023-08-09 07:47:19 -07:00
oobabooga | d8fb506aff | Add RoPE scaling support for transformers (including dynamic NTK) | 2023-08-08 21:25:48 -07:00
    https://github.com/huggingface/transformers/pull/24653
Friedemann Lipphardt | 901b028d55 | Add option for named cloudflare tunnels (#3364) | 2023-08-08 22:20:27 -03:00
oobabooga | bf08b16b32 | Fix disappearing profile picture bug | 2023-08-08 14:09:01 -07:00
Gennadij | 0e78f3b4d4 | Fixed a typo in "rms_norm_eps", incorrectly set as n_gqa (#3494) | 2023-08-08 00:31:11 -03:00
oobabooga | 37fb719452 | Increase the Context/Greeting boxes sizes | 2023-08-08 00:09:00 -03:00
oobabooga | 584dd33424 | Fix missing example_dialogue when uploading characters | 2023-08-07 23:44:59 -03:00
oobabooga | 412f6ff9d3 | Change alpha_value maximum and step | 2023-08-07 06:08:51 -07:00
oobabooga | a373c96d59 | Fix a bug in modules/shared.py | 2023-08-06 20:36:35 -07:00
oobabooga | 3d48933f27 | Remove ancient deprecation warnings | 2023-08-06 18:58:59 -07:00
oobabooga | c237ce607e | Move characters/instruction-following to instruction-templates | 2023-08-06 17:50:32 -07:00
oobabooga | 65aa11890f | Refactor everything (#3481) | 2023-08-06 21:49:27 -03:00
oobabooga | d4b851bdc8 | Credit turboderp | 2023-08-06 13:43:15 -07:00
oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00
missionfloyd | 5134878344 | Fix chat message order (#3461) | 2023-08-05 13:53:54 -03:00
jllllll | 44f31731af | Create logs dir if missing when saving history (#3462) | 2023-08-05 13:47:16 -03:00
Forkoz | 9dcb37e8d4 | Fix: Mirostat fails on models split across multiple GPUs | 2023-08-05 13:45:47 -03:00
oobabooga | 8df3cdfd51 | Add SSL certificate support (#3453) | 2023-08-04 13:57:31 -03:00
missionfloyd | 2336b75d92 | Remove unnecessary chat.js (#3445) | 2023-08-04 01:58:37 -03:00
oobabooga | 4b3384e353 | Handle unfinished lists during markdown streaming | 2023-08-03 17:15:18 -07:00
Pete | f4005164f4 | Fix llama.cpp truncation (#3400) | 2023-08-03 20:01:15 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | 87dab03dc0 | Add the --cpu option for llama.cpp to prevent CUDA from being used (#3432) | 2023-08-03 11:00:36 -03:00
oobabooga | 3e70bce576 | Properly format exceptions in the UI | 2023-08-03 06:57:21 -07:00
oobabooga | 32c564509e | Fix loading session in chat mode | 2023-08-02 21:13:16 -07:00
oobabooga | 0e8f9354b5 | Add direct download for session/chat history JSONs | 2023-08-02 19:43:39 -07:00
oobabooga | 32a2bbee4a | Implement auto_max_new_tokens for ExLlama | 2023-08-02 11:03:56 -07:00
oobabooga | e931844fe2 | Add auto_max_new_tokens parameter (#3419) | 2023-08-02 14:52:20 -03:00
Pete | 6afc1a193b | Add a scrollbar to notebook/default, improve chat scrollbar style (#3403) | 2023-08-02 12:02:36 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga | b53ed70a70 | Make llamacpp_HF 6x faster | 2023-08-01 13:18:20 -07:00
oobabooga | 8d46a8c50a | Change the default chat style and the default preset | 2023-08-01 09:35:17 -07:00
oobabooga | 959feba602 | When saving model settings, only save the settings for the current loader | 2023-08-01 06:10:09 -07:00
oobabooga | f094330df0 | When saving a preset, only save params that differ from the defaults | 2023-07-31 19:13:29 -07:00
oobabooga | 84297d05c4 | Add a "Filter by loader" menu to the Parameters tab | 2023-07-31 19:09:02 -07:00
oobabooga | 7de7b3d495 | Fix newlines in exported character yamls | 2023-07-31 10:46:02 -07:00
oobabooga | 5ca37765d3 | Only replace {{user}} and {{char}} at generation time | 2023-07-30 11:42:30 -07:00
oobabooga | 6e16af34fd | Save uploaded characters as yaml | 2023-07-30 11:25:38 -07:00
    Also allow yaml characters to be uploaded directly
oobabooga | b31321c779 | Define visible_text before applying chat_input extensions | 2023-07-26 07:27:14 -07:00