Commit Graph

757 Commits

oobabooga
ef8637e32d
Add extension example, replace input_hijack with chat_input_modifier (#3307) 2023-07-25 18:49:56 -03:00
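
For context, a minimal sketch of the new hook, assuming the extension loader calls chat_input_modifier(text, visible_text, state) and expects the modified pair back (the commit's own extension example is the authoritative version):

```python
# script.py — hedged sketch of a minimal extension using the new hook.
def chat_input_modifier(text, visible_text, state):
    """Modify the user's chat input before generation.

    `text` is what the model receives; `visible_text` is what appears
    in the chat log; `state` carries the current generation parameters.
    """
    text = text.strip()        # example transformation on the model-facing text
    return text, visible_text  # displayed text left unchanged
```
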
oobabooga
a07d070b6c
Add llama-2-70b GGML support (#3285) 2023-07-24 16:37:03 -03:00
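
For background, the 70B models use grouped-query attention, which GGML loaders at the time could not infer from the file; a hedged sketch of the extra parameter llama-cpp-python required around this date, with a hypothetical model path:

```python
from llama_cpp import Llama

# Hedged sketch: n_gqa existed in llama-cpp-python at the time of this
# commit; the model filename below is hypothetical.
llm = Llama(
    model_path="models/llama-2-70b.ggmlv3.q4_0.bin",
    n_gqa=8,  # required for llama-2-70b; smaller LLaMA models omit it
)
```
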
jllllll
1141987a0d
Add checks for ROCm and unsupported architectures to llama_cpp_cuda loading (#3225) 2023-07-24 11:25:36 -03:00
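
A hedged sketch of the guarded-import pattern the title describes; the exact conditions checked are assumptions, not taken from the PR:

```python
import platform

import torch

def load_llama_cpp_lib():
    # Prefer the CUDA build only where it can work: not on ROCm builds
    # of PyTorch (torch.version.hip is set there), and not on
    # architectures without prebuilt CUDA wheels.
    if torch.version.hip is None and platform.machine() == "x86_64":
        try:
            import llama_cpp_cuda
            return llama_cpp_cuda
        except ImportError:
            pass
    import llama_cpp  # CPU/Metal build as the fallback
    return llama_cpp
```
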
Ikko Eltociear Ashimine
b2d5433409
Fix typo in deepspeed_parameters.py (#3222)
configration -> configuration
2023-07-24 11:17:28 -03:00
oobabooga
4b19b74e6c Add CUDA wheels for llama-cpp-python by jllllll 2023-07-19 19:33:43 -07:00
oobabooga
913e060348 Change the default preset to Divine Intellect
It seems to reduce hallucination while using instruction-tuned models.
2023-07-19 08:24:37 -07:00
randoentity
a69955377a
[GGML] Support for customizable RoPE (#3083)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-17 22:32:37 -03:00
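
A hedged sketch of what customizable RoPE exposes when loading a GGML model; the rope_freq_base/rope_freq_scale parameter names existed in llama-cpp-python around this date but should be treated as an assumption here, and the model path is hypothetical:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-13b.ggmlv3.q4_0.bin",
    rope_freq_base=20000,  # raise the rotary base (NTK-style stretching)
    rope_freq_scale=0.5,   # linearly compress position indices
)
```
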
appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) 2023-07-17 21:27:18 -03:00
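
The check named in this commit is a standard PyTorch API (available since PyTorch 1.12); a small sketch of the device-selection pattern it enables:

```python
import torch

def get_device():
    if torch.cuda.is_available():
        return torch.device("cuda")
    # The supported way to detect Apple Silicon (Metal) backends:
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```
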
oobabooga
8c1c2e0fae Increase max_new_tokens upper limit 2023-07-17 17:08:22 -07:00
oobabooga
b1a6ea68dd Disable "autoload the model" by default 2023-07-17 07:40:56 -07:00
oobabooga
a199f21799 Optimize llamacpp_hf a bit 2023-07-16 20:49:48 -07:00
oobabooga
6a3edb0542 Clean up llamacpp_hf.py 2023-07-15 22:40:55 -07:00
oobabooga
27a84b4e04 Make AutoGPTQ the default again
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader (#3062) 2023-07-16 02:21:13 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity (#3138) 2023-07-16 01:52:55 -03:00
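
For reference, perplexity is the exponential of the mean negative log-likelihood per token; a minimal sketch of the definition (not the PR's code):

```python
import math

def perplexity(nll_per_token: list[float]) -> float:
    # Perplexity = exp(mean negative log-likelihood over the evaluated tokens).
    return math.exp(sum(nll_per_token) / len(nll_per_token))
```
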
oobabooga
b284f2407d Make ExLlama_HF the new default for GPTQ 2023-07-14 14:03:56 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions (#3029) 2023-07-13 17:22:41 -03:00
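
A sketch of the kind of processor an extension can now supply, built on the Hugging Face transformers LogitsProcessor interface; the logits_processor_modifier hook name follows the PR's extension point but is an assumption here:

```python
import torch
from transformers import LogitsProcessor

class BanTokensProcessor(LogitsProcessor):
    """Zero out the probability of specific token ids."""

    def __init__(self, banned_ids):
        self.banned_ids = list(banned_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.banned_ids] = float("-inf")
        return scores

def logits_processor_modifier(processor_list, input_ids):
    # The extension appends its processor to the list the pipeline applies.
    processor_list.append(BanTokensProcessor(banned_ids=[0]))
    return processor_list
```
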
oobabooga
e202190c4f lint 2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training (#3058) 2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5 Add back .replace('\r', '') 2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix: Fixed the tokenization process of a raw dataset and improved its efficiency (#3035) 2023-07-12 12:05:37 -03:00
kabachuha
3f19e94c93
Add Tensorboard/Weights and biases integration for training (#2624) 2023-07-12 11:53:31 -03:00
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training (#1997)
* Update utils.py

Returns individual txt files and subdirectories to getdatasets, allowing training from a directory of text files

* Update training.py

Minor tweak to training on raw datasets: detect whether a directory is selected and, if so, load all the txt files in that directory for training

* Update put-trainer-datasets-here.txt

document

* Minor change

* Use pathlib, sort by natural keys

* Space

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
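
A sketch of the file-gathering this PR describes, using pathlib and natural-key sorting as its notes mention; the function names are illustrative, not the PR's:

```python
import re
from pathlib import Path

def natural_key(path: Path):
    # Split the filename into digit/non-digit runs so "file2" sorts before "file10".
    return [int(s) if s.isdigit() else s.lower()
            for s in re.split(r"(\d+)", path.name)]

def collect_txt_files(dataset_dir: str) -> list[Path]:
    # Recursively gather every .txt file under the chosen directory.
    return sorted(Path(dataset_dir).rglob("*.txt"), key=natural_key)
```
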
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection (#1711) 2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9 Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334 Disable wandb during training 2023-07-12 07:19:12 -07:00
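
For reference, the usual way to keep wandb quiet during training is an environment variable set before wandb is imported; whether the commit uses exactly this mechanism is an assumption:

```python
import os

# "offline" buffers runs to disk; "disabled" turns logging off entirely.
os.environ.setdefault("WANDB_MODE", "offline")
```
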
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp (#3076) 2023-07-12 11:05:13 -03:00
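
A hedged sketch of passing the flag through; llama-cpp-python exposed a low_vram keyword around this time, though the exact call site and model path here are assumed:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-13b.ggmlv3.q4_0.bin",  # hypothetical filename
    n_gpu_layers=20,
    low_vram=True,  # smaller scratch buffers, trading some speed for memory
)
```
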
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names (#3051) 2023-07-12 00:10:36 -03:00
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown (#2977) 2023-07-11 23:27:38 -03:00
oobabooga
e3810dff40 Style changes 2023-07-11 18:49:06 -07:00
Ricardo Pinto
3e9da5a27c
Changed FormComponent to IOComponent (#3017)
Co-authored-by: Ricardo Pinto <1-ricardo.pinto@users.noreply.gitlab.cognitage.com>
2023-07-11 18:52:16 -03:00
Forkoz
74ea7522a0
Lora fixes for AutoGPTQ (#2818) 2023-07-09 01:03:43 -03:00
oobabooga
5ac4e4da8b Make --model work with argument like models/folder_name 2023-07-08 10:22:54 -07:00
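
A hypothetical sketch of the normalization the message implies; the actual implementation may differ:

```python
from pathlib import Path

def normalize_model_name(name: str) -> str:
    # Accept "folder_name", "models/folder_name", or a trailing slash,
    # and reduce all of them to the bare folder name.
    return Path(name.rstrip("/")).name
```
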
oobabooga
b6643e5039 Add decode functions to llama.cpp/exllama 2023-07-07 09:11:30 -07:00
oobabooga
1ba2e88551 Add truncation to exllama 2023-07-07 09:09:23 -07:00
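
A sketch of what prompt truncation typically means here: keep only the most recent tokens so prompt plus generation fits in the context window (names are illustrative):

```python
def truncate_prompt(input_ids, max_seq_len: int, max_new_tokens: int):
    # Reserve room for the tokens to be generated, dropping the oldest
    # prompt tokens if the input exceeds the remaining budget.
    budget = max_seq_len - max_new_tokens
    if input_ids.shape[-1] > budget:
        input_ids = input_ids[:, -budget:]
    return input_ids
```
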
oobabooga
c21b73ff37 Minor change to ui.py 2023-07-07 09:09:14 -07:00
oobabooga
de994331a4 Merge remote-tracking branch 'refs/remotes/origin/main' 2023-07-06 22:25:43 -07:00
oobabooga
9aee1064a3 Block a Cloudflare request 2023-07-06 22:24:52 -07:00
Fernando Tarin Morales
d7e14e1f78
Fixed the param name when loading a LoRA using a model loaded in 4 or 8 bits (#3036) 2023-07-07 02:24:07 -03:00
Xiaojian "JJ" Deng
ff45317032
Update models.py (#3020)
Hopefully fixed error with "ValueError: Tokenizer class GPTNeoXTokenizer does not exist or is not currently imported."
2023-07-05 21:40:43 -03:00
oobabooga
8705eba830 Remove universal llama tokenizer support
Instead, replace it with a warning if the tokenizer files look off
2023-07-04 19:43:19 -07:00
oobabooga
333075e726
Fix #3003 2023-07-04 11:38:35 -03:00
oobabooga
463ddfffd0 Fix start_with 2023-07-03 23:32:02 -07:00
oobabooga
373555c4fb Fix loading some histories (thanks kaiokendev) 2023-07-03 22:19:28 -07:00
Panchovix
10c8c197bf
Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) 2023-07-04 01:13:16 -03:00
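
For background, static NTK RoPE scaling raises the rotary base rather than compressing position indices; a sketch of the commonly cited formula (the loader's exact math is not reproduced here):

```python
def ntk_scaled_base(base: float = 10000.0, dim: int = 128, alpha: float = 2.0) -> float:
    # NTK-aware scaling: stretch the embedding's wavelengths by raising
    # the base with the exponent dim / (dim - 2) from the NTK derivation.
    return base * alpha ** (dim / (dim - 2))

# The per-dimension inverse frequencies then follow standard RoPE:
# inv_freq[i] = ntk_scaled_base(...) ** (-2 * i / dim)
```
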
oobabooga
7e8340b14d Make greetings appear in --multi-user mode 2023-07-03 20:08:14 -07:00
oobabooga
4b1804a438
Implement sessions + add basic multi-user support (#2991) 2023-07-04 00:03:30 -03:00
FartyPants
1f8cae14f9
Update training.py - correct use of lora_names (#2988) 2023-07-03 17:41:18 -03:00
FartyPants
c23c88ee4c
Update LoRA.py - avoid potential error (#2953) 2023-07-03 17:40:22 -03:00
FartyPants
33f56fd41d
Update models.py to clear LORA names after unload (#2951) 2023-07-03 17:39:06 -03:00