jllllll
44f31731af
Create logs dir if missing when saving history ( #3462 )
2023-08-05 13:47:16 -03:00
Forkoz
9dcb37e8d4
Fix: Mirostat fails on models split across multiple GPUs
2023-08-05 13:45:47 -03:00
oobabooga
8df3cdfd51
Add SSL certificate support ( #3453 )
2023-08-04 13:57:31 -03:00
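A minimal sketch of what SSL support looks like at the Gradio layer; the file paths are placeholders and the flag mapping is an assumption, but `launch()` does accept these keyword arguments:

```python
# Sketch, assuming the webui forwards --ssl-certfile/--ssl-keyfile style
# flags straight to Gradio's launch(); paths are placeholders.
import gradio as gr

demo = gr.Interface(fn=lambda s: s, inputs="text", outputs="text")
demo.launch(
    ssl_certfile="cert.pem",  # PEM certificate
    ssl_keyfile="key.pem",    # matching private key
    ssl_verify=False,         # allow self-signed certificates
)
```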
missionfloyd
2336b75d92
Remove unnecessary chat.js ( #3445 )
2023-08-04 01:58:37 -03:00
oobabooga
4b3384e353
Handle unfinished lists during markdown streaming
2023-08-03 17:15:18 -07:00
Pete
f4005164f4
Fix llama.cpp truncation ( #3400 )
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-03 20:01:15 -03:00
oobabooga
87dab03dc0
Add the --cpu option for llama.cpp to prevent CUDA from being used ( #3432 )
2023-08-03 11:00:36 -03:00
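For context, a sketch of how a CUDA-enabled llama-cpp-python build can still be kept on the CPU; the model path is a placeholder:

```python
# Sketch: with llama-cpp-python, n_gpu_layers=0 keeps every layer on the
# CPU even when the wheel was compiled with CUDA support.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b.ggmlv3.q4_0.bin",  # placeholder path
    n_gpu_layers=0,  # what a --cpu style flag would map to
)
```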
oobabooga
3e70bce576
Properly format exceptions in the UI
2023-08-03 06:57:21 -07:00
oobabooga
32c564509e
Fix loading session in chat mode
2023-08-02 21:13:16 -07:00
oobabooga
0e8f9354b5
Add direct download for session/chat history JSONs
2023-08-02 19:43:39 -07:00
oobabooga
32a2bbee4a
Implement auto_max_new_tokens for ExLlama
2023-08-02 11:03:56 -07:00
oobabooga
e931844fe2
Add auto_max_new_tokens parameter ( #3419 )
2023-08-02 14:52:20 -03:00
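A sketch of the idea behind the parameter (names are illustrative, not the webui's internals): when enabled, the reply is allowed to fill whatever room remains in the context window.

```python
def effective_max_new_tokens(prompt_tokens: int, truncation_length: int,
                             max_new_tokens: int, auto: bool) -> int:
    # With auto enabled, generate until the context window is full;
    # otherwise honor the fixed max_new_tokens value.
    if auto:
        return max(truncation_length - prompt_tokens, 0)
    return max_new_tokens
```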
Pete
6afc1a193b
Add a scrollbar to notebook/default, improve chat scrollbar style ( #3403 )
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-08-02 12:02:36 -03:00
oobabooga
b53ed70a70
Make llamacpp_HF 6x faster
2023-08-01 13:18:20 -07:00
oobabooga
8d46a8c50a
Change the default chat style and the default preset
2023-08-01 09:35:17 -07:00
oobabooga
959feba602
When saving model settings, only save the settings for the current loader
2023-08-01 06:10:09 -07:00
oobabooga
f094330df0
When saving a preset, only save params that differ from the defaults
2023-07-31 19:13:29 -07:00
oobabooga
84297d05c4
Add a "Filter by loader" menu to the Parameters tab
2023-07-31 19:09:02 -07:00
oobabooga
7de7b3d495
Fix newlines in exported character yamls
2023-07-31 10:46:02 -07:00
oobabooga
5ca37765d3
Only replace {{user}} and {{char}} at generation time
2023-07-30 11:42:30 -07:00
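The change this entry describes amounts to deferring placeholder substitution until generation; a minimal illustration (the helper name is hypothetical):

```python
def substitute_placeholders(template: str, user: str, char: str) -> str:
    # Done at generation time, so the stored template keeps the literal
    # {{user}} and {{char}} markers intact.
    return template.replace("{{user}}", user).replace("{{char}}", char)

prompt = substitute_placeholders("{{char}}: Hello, {{user}}!", "Anon", "Assistant")
```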
oobabooga
6e16af34fd
Save uploaded characters as yaml
Also allow yaml characters to be uploaded directly
2023-07-30 11:25:38 -07:00
oobabooga
b31321c779
Define visible_text before applying chat_input extensions
2023-07-26 07:27:14 -07:00
oobabooga
b17893a58f
Revert "Add tensor split support for llama.cpp ( #3171 )"
This reverts commit 031fe7225e.
2023-07-26 07:06:01 -07:00
oobabooga
28779cd959
Use dark theme by default
2023-07-25 20:11:57 -07:00
oobabooga
c2e0d46616
Add credits
2023-07-25 15:49:04 -07:00
oobabooga
77d2e9f060
Remove flexgen 2
2023-07-25 15:18:25 -07:00
oobabooga
75c2dd38cf
Remove flexgen support
2023-07-25 15:15:29 -07:00
Foxtr0t1337
85b3a26e25
Ignore values which are not strings in training.py ( #3287 )
2023-07-25 19:00:25 -03:00
Shouyi
031fe7225e
Add tensor split support for llama.cpp ( #3171 )
2023-07-25 18:59:26 -03:00
Eve
f653546484
README updates and improvements ( #3198 )
2023-07-25 18:58:13 -03:00
oobabooga
ef8637e32d
Add extension example, replace input_hijack with chat_input_modifier ( #3307 )
2023-07-25 18:49:56 -03:00
oobabooga
a07d070b6c
Add llama-2-70b GGML support ( #3285 )
2023-07-24 16:37:03 -03:00
jllllll
1141987a0d
Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading ( #3225 )
2023-07-24 11:25:36 -03:00
Ikko Eltociear Ashimine
b2d5433409
Fix typo in deepspeed_parameters.py ( #3222 )
configration -> configuration
2023-07-24 11:17:28 -03:00
oobabooga
4b19b74e6c
Add CUDA wheels for llama-cpp-python by jllllll
2023-07-19 19:33:43 -07:00
oobabooga
913e060348
Change the default preset to Divine Intellect
It seems to reduce hallucination while using instruction-tuned models.
2023-07-19 08:24:37 -07:00
randoentity
a69955377a
[GGML] Support for customizable RoPE ( #3083 )
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-17 22:32:37 -03:00
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported ( #3164 )
2023-07-17 21:27:18 -03:00
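The documented check referenced in the commit, shown in a typical device-selection context for illustration:

```python
import torch

# torch.backends.mps.is_available() is the documented way to detect
# Apple's Metal Performance Shaders backend.
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
```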
oobabooga
8c1c2e0fae
Increase max_new_tokens upper limit
2023-07-17 17:08:22 -07:00
oobabooga
b1a6ea68dd
Disable "autoload the model" by default
2023-07-17 07:40:56 -07:00
oobabooga
a199f21799
Optimize llamacpp_hf a bit
2023-07-16 20:49:48 -07:00
oobabooga
6a3edb0542
Clean up llamacpp_hf.py
2023-07-15 22:40:55 -07:00
oobabooga
27a84b4e04
Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader ( #3062 )
2023-07-16 02:21:13 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity ( #3138 )
2023-07-16 01:52:55 -03:00
oobabooga
b284f2407d
Make ExLlama_HF the new default for GPTQ
2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions ( #3029 )
2023-07-13 17:22:41 -03:00
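A hedged sketch of what such an extension hook enables; the hook name follows the webui's extension convention, but treat the exact signature as an assumption:

```python
from transformers import LogitsProcessor

class BanTokenProcessor(LogitsProcessor):
    """Illustrative processor: forbid a single token id."""
    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids, scores):
        scores[:, self.token_id] = float("-inf")
        return scores

def logits_processor_modifier(processor_list, input_ids):
    # An extension appends its processors to the list the webui passes in.
    processor_list.append(BanTokenProcessor(token_id=0))
    return processor_list
```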
oobabooga
e202190c4f
lint
2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training ( #3058 )
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix the tokenization process of a raw dataset and improve its efficiency ( #3035 )
2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training ( #2624 )
2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training ( #1997 )
* Update utils.py: return individual txt files and subdirectories to getdatasets, allowing training from a directory of text files
* Update training.py: minor tweak to raw-dataset training to detect whether a directory is selected and, if so, load all the txt files in that directory
* Update put-trainer-datasets-here.txt: document the feature
* Minor change
* Use pathlib, sort by natural keys
* Space
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
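A self-contained sketch of the loading scheme the bullets above describe (pathlib glob plus natural-key sort; function names are illustrative):

```python
import re
from pathlib import Path

def natural_keys(name: str):
    # "file10.txt" sorts after "file2.txt" by comparing digit runs as ints
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

def load_raw_dataset(path: str) -> str:
    p = Path(path)
    if p.is_dir():
        files = sorted(p.glob("*.txt"), key=lambda f: natural_keys(f.name))
        return "\n\n".join(f.read_text(encoding="utf-8") for f in files)
    return p.read_text(encoding="utf-8")
```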
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection ( #1711 )
2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp ( #3076 )
2023-07-12 11:05:13 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names ( #3051 )
2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown ( #2977 )
2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40
Style changes
2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent ( #3017 )
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ ( #2818 )
2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b
Make --model work with argument like models/folder_name
2023-07-08 10:22:54 -07:00
oobabooga
b6643e5039
Add decode functions to llama.cpp/exllama
2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551
Add truncation to exllama
2023-07-07 09:09:23 -07:00
oobabooga
c21b73ff37
Minor change to ui.py
2023-07-07 09:09:14 -07:00
oobabooga
de994331a4
Merge remote-tracking branch 'refs/remotes/origin/main'
2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3
Block a Cloudflare request
2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits ( #3036 )
2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py ( #3020 )
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830
Remove universal llama tokenizer support
Instead replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003
2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0
Fix start_with
2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb
Fix loading some histories (thanks kaiokendev)
2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf ( #2955 )
2023-07-04 01:13:16 -03:00
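For reference, the commonly cited static NTK formula rescales the RoPE frequency base by an alpha factor; the PR's exact code may differ, so read this as a sketch:

```python
def ntk_rope_base(alpha: float, head_dim: int = 128,
                  base: float = 10000.0) -> float:
    # Static NTK-aware scaling: enlarge the RoPE frequency base so rotary
    # positions stay distinguishable beyond the trained context length.
    return base * alpha ** (head_dim / (head_dim - 2))

print(round(ntk_rope_base(2.0)))  # ~20221 with LLaMA's 128-dim heads
```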
oobabooga
7e8340b14d
Make greetings appear in --multi-user mode
2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support ( #2991 )
2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names ( #2988 )
2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error ( #2953 )
2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload ( #2951 )
2023-07-03 17:39:06 -03:00
FartyPants
48b11f9c5b
Training: added trainable parameters info ( #2944 )
2023-07-03 17:38:36 -03:00
Turamarth14
847f70b694
Update html_generator.py ( #2954 )
With version 10.0.0 of Pillow, the constant Image.ANTIALIAS has been removed; Image.LANCZOS should be used instead.
2023-07-02 01:43:58 -03:00
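A minimal sketch of the substitution, runnable against any recent Pillow; the image path and size are placeholders:

```python
from PIL import Image

img = Image.open("character.png")  # placeholder path
# Image.ANTIALIAS was removed in Pillow 10.0.0; LANCZOS is the
# equivalent high-quality downsampling filter.
thumb = img.resize((200, 200), Image.LANCZOS)
```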
ardfork
3c076c3c80
Disable half2 for ExLlama when using HIP ( #2912 )
2023-06-29 15:03:16 -03:00
missionfloyd
ac0f96e785
Some more character import tweaks. ( #2921 )
2023-06-29 14:56:25 -03:00
oobabooga
79db629665
Minor bug fix
2023-06-29 13:53:06 -03:00
oobabooga
3443219cbc
Add repetition penalty range parameter to transformers ( #2916 )
2023-06-29 13:40:13 -03:00
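A sketch of how such a parameter can be realized as a LogitsProcessor that penalizes only the most recent tokens (class name illustrative; the repo's actual implementation may differ):

```python
import torch
from transformers import LogitsProcessor

class RepetitionPenaltyRangeProcessor(LogitsProcessor):
    def __init__(self, penalty: float, rep_range: int):
        self.penalty = penalty
        self.rep_range = rep_range  # 0 means "whole context"

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        ids = input_ids if self.rep_range == 0 else input_ids[:, -self.rep_range:]
        picked = torch.gather(scores, 1, ids)
        # Standard CTRL-style penalty, restricted to the chosen window.
        picked = torch.where(picked < 0, picked * self.penalty,
                             picked / self.penalty)
        scores.scatter_(1, ids, picked)
        return scores
```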
oobabooga
20740ab16e
Revert "Fix exllama_hf gibbersh above 2048 context, and works >5000 context. ( #2913 )"
This reverts commit 37a16d23a7.
2023-06-28 18:10:34 -03:00
Panchovix
37a16d23a7
Fix exllama_hf gibberish above 2048 context; works at >5000 context. ( #2913 )
2023-06-28 12:36:07 -03:00
FartyPants
ab1998146b
Training update - back up the existing adapter before training on top of it ( #2902 )
2023-06-27 18:24:04 -03:00
oobabooga
22d455b072
Add LoRA support to ExLlama_HF
2023-06-26 00:10:33 -03:00
oobabooga
c52290de50
ExLlama with long context ( #2875 )
2023-06-25 22:49:26 -03:00
oobabooga
9290c6236f
Keep ExLlama_HF if already selected
2023-06-25 19:06:28 -03:00
oobabooga
75fd763f99
Fix chat saving issue ( closes #2863 )
2023-06-25 18:14:57 -03:00
FartyPants
21c189112c
Several Training Enhancements ( #2868 )
2023-06-25 15:34:46 -03:00
oobabooga
95212edf1f
Update training.py
2023-06-25 12:13:15 -03:00
oobabooga
f31281a8de
Fix loading instruction templates containing literal '\n'
2023-06-25 02:13:26 -03:00
oobabooga
f0fcd1f697
Sort some imports
2023-06-25 01:44:36 -03:00
oobabooga
365b672531
Minor change to prevent future bugs
2023-06-25 01:38:54 -03:00
jllllll
bef67af23c
Use pre-compiled python module for ExLlama ( #2770 )
2023-06-24 20:24:17 -03:00
oobabooga
cec5fb0ef6
Failed attempt at evaluating exllama_hf perplexity
2023-06-24 12:02:25 -03:00