appe233
89e0d15cf5
Use 'torch.backends.mps.is_available' to check if mps is supported (#3164)
2023-07-17 21:27:18 -03:00
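For reference, the check this commit adopts is PyTorch's documented MPS probe; a minimal sketch of device selection around it (the fallback order is illustrative, not this commit's code):

    import torch

    # Prefer Apple's Metal backend (MPS), then CUDA, then CPU.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")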
dependabot[bot]
234c58ccd1
Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178)
2023-07-17 21:24:51 -03:00
oobabooga
49a5389bd3
Bump accelerate from 0.20.3 to 0.21.0
2023-07-17 21:23:59 -03:00
oobabooga
8c1c2e0fae
Increase max_new_tokens upper limit
2023-07-17 17:08:22 -07:00
oobabooga
5e5d926d2b
Prevent lists from flickering in chat mode while streaming
2023-07-17 17:00:49 -07:00
dependabot[bot]
02a5fe6aa2
Bump accelerate from 0.20.3 to 0.21.0
...
Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.20.3 to 0.21.0.
- [Release notes](https://github.com/huggingface/accelerate/releases)
- [Commits](https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0)
---
updated-dependencies:
- dependency-name: accelerate
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 20:18:31 +00:00
oobabooga
60a3e70242
Update LLaMA links and info
2023-07-17 12:51:01 -07:00
oobabooga
f83fdb9270
Don't reset LoRA menu when loading a model
2023-07-17 12:50:25 -07:00
oobabooga
4ce766414b
Bump AutoGPTQ version
2023-07-17 10:02:12 -07:00
oobabooga
b1a6ea68dd
Disable "autoload the model" by default
2023-07-17 07:40:56 -07:00
oobabooga
656b457795
Add Airoboros-v1.2 template
2023-07-17 07:27:42 -07:00
oobabooga
a199f21799
Optimize llamacpp_hf a bit
2023-07-16 20:49:48 -07:00
oobabooga
9f08038864
Merge pull request #3163 from oobabooga/dev
...
v1.2
2023-07-16 02:43:18 -03:00
oobabooga
6a3edb0542
Clean up llamacpp_hf.py
2023-07-15 22:40:55 -07:00
oobabooga
2de0cedce3
Fix reload screen color
2023-07-15 22:39:39 -07:00
oobabooga
13449aa44d
Decrease download timeout
2023-07-15 22:30:08 -07:00
oobabooga
27a84b4e04
Make AutoGPTQ the default again
...
Purely for compatibility with more models.
You should still use ExLlama_HF for LLaMA models.
2023-07-15 22:29:23 -07:00
oobabooga
5e3f7e00a9
Create llamacpp_HF loader (#3062)
2023-07-16 02:21:13 -03:00
Panchovix
7c4d4fc7d3
Increase alpha value limit for NTK RoPE scaling for exllama/exllama_HF (#3149)
2023-07-16 01:56:04 -03:00
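For context on the alpha value: NTK RoPE scaling stretches the rotary base rather than the positions. A common formulation (an assumption for illustration, not code from this commit) raises the base by alpha^(dim/(dim-2)):

    # Hypothetical sketch of NTK-aware RoPE base scaling.
    def ntk_scaled_base(base: float, alpha: float, head_dim: int) -> float:
        # A larger base slows the rotation frequencies, stretching
        # the usable context window without retraining.
        return base * alpha ** (head_dim / (head_dim - 2))

    ntk_scaled_base(10000.0, 2.0, 128)  # ~20222, about 2x the LLaMA default base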
ofirkris
780a2f2e16
Bump llama cpp version (#3160)
...
Bump llama cpp version to support better 8K RoPE scaling
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-16 01:54:56 -03:00
jllllll
ed3ffd212d
Bump bitsandbytes to 0.40.1.post1 (#3156)
...
817bdf6325...6ec4f0c374
2023-07-16 01:53:32 -03:00
oobabooga
94dfcec237
Make it possible to evaluate exllama perplexity (#3138)
2023-07-16 01:52:55 -03:00
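Perplexity itself is loader-agnostic: exponentiate the mean negative log-likelihood of each next token. A generic sketch (not the evaluation code in this PR):

    import math
    import torch
    import torch.nn.functional as F

    def perplexity(logits: torch.Tensor, input_ids: torch.Tensor) -> float:
        # logits: [seq, vocab]; position t predicts token t+1.
        log_probs = F.log_softmax(logits[:-1].float(), dim=-1)
        nll = -log_probs.gather(1, input_ids[1:].unsqueeze(1)).mean()
        return math.exp(nll.item())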
jllllll
11a8fd1eb9
Add cuBLAS llama-cpp-python wheel installation (#102)
...
Parses requirements.txt using regex to determine required version.
2023-07-16 01:31:33 -03:00
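A sketch of that approach, assuming requirements.txt pins a line like llama-cpp-python==0.1.72 (the installer's actual pattern may differ):

    import re

    def required_llama_cpp_version(path: str = "requirements.txt"):
        with open(path) as f:
            match = re.search(r"llama-cpp-python==([0-9.]+)", f.read())
        return match.group(1) if match else None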
oobabooga
b284f2407d
Make ExLlama_HF the new default for GPTQ
2023-07-14 14:03:56 -07:00
jllllll
32f12b8bbf
Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135)
2023-07-13 17:32:37 -03:00
SeanScripts
9800745db9
Color tokens by probability and/or perplexity (#3078)
2023-07-13 17:30:22 -03:00
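One way such coloring can work (an illustrative sketch, not the PR's implementation): map each token's probability onto a red-to-green ramp and emit styled HTML spans:

    def colorize(token: str, prob: float) -> str:
        # prob near 1.0 renders green, near 0.0 renders red.
        red, green = int(255 * (1.0 - prob)), int(255 * prob)
        return f'<span style="color: rgb({red},{green},0)">{token}</span>'

    html = "".join(colorize(t, p) for t, p in [("The", 0.92), (" cat", 0.35)])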
oobabooga
146e8b2a6c
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-13 13:23:38 -07:00
Morgan Schweers
6d1e911577
Add support for logits processors in extensions (#3029)
2023-07-13 17:22:41 -03:00
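The processors follow the Hugging Face transformers interface; a minimal sketch of an extension-side processor (the hook name logits_processor_modifier is an assumption about the extension API):

    import torch
    from transformers import LogitsProcessor

    class BanTokenProcessor(LogitsProcessor):
        def __init__(self, token_id: int):
            self.token_id = token_id

        def __call__(self, input_ids: torch.LongTensor,
                     scores: torch.FloatTensor) -> torch.FloatTensor:
            # Make the banned token impossible to sample.
            scores[:, self.token_id] = -float("inf")
            return scores

    # Hypothetical extension hook: append to the list the webui passes in.
    def logits_processor_modifier(processor_list, input_ids):
        processor_list.append(BanTokenProcessor(token_id=0))
        return processor_list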
oobabooga
22341e948d
Merge branch 'main' into dev
2023-07-12 14:19:49 -07:00
oobabooga
0e6295886d
Fix lora download folder
2023-07-12 14:19:33 -07:00
oobabooga
eb823fce96
Fix typo
2023-07-12 13:55:19 -07:00
oobabooga
d0a626f32f
Change reload screen color
2023-07-12 13:54:43 -07:00
oobabooga
c592a9b740
Fix #3117
2023-07-12 13:33:44 -07:00
oobabooga
6447b2eea6
Merge pull request #3116 from oobabooga/dev
...
v1.1
2023-07-12 15:55:40 -03:00
oobabooga
2463d7c098
Spaces
2023-07-12 11:35:43 -07:00
oobabooga
e202190c4f
lint
2023-07-12 11:33:25 -07:00
FartyPants
9b55d3a9f9
More robust and less error-prone training (#3058)
2023-07-12 15:29:43 -03:00
oobabooga
30f37530d5
Add back .replace('\r', '')
2023-07-12 09:52:20 -07:00
Fernando Tarin Morales
987d0fe023
Fix the tokenization process of a raw dataset and improve its efficiency (#3035)
2023-07-12 12:05:37 -03:00
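The standard way to tokenize a raw dataset efficiently is to encode the text once and slice it into overlapping fixed-length blocks, rather than re-tokenizing per sample; a generic sketch (cutoff and overlap names are illustrative):

    def chunk_raw_text(tokenizer, text: str, cutoff_len: int = 256, overlap: int = 32):
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        step = cutoff_len - overlap
        # Overlapping windows so no span is seen only at a chunk boundary.
        return [ids[i:i + cutoff_len]
                for i in range(0, max(len(ids) - overlap, 1), step)]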
kabachuha
3f19e94c93
Add TensorBoard/Weights & Biases integration for training (#2624)
2023-07-12 11:53:31 -03:00
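Both integrations reduce to the same pattern: open a run, log scalars per step, close. A minimal sketch with the public APIs (project and metric names are placeholders):

    import wandb
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter(log_dir="logs/run1")       # TensorBoard
    wandb.init(project="lora-training", name="run1")  # Weights & Biases

    for step, loss in enumerate([2.1, 1.8, 1.5]):     # placeholder losses
        writer.add_scalar("train/loss", loss, step)
        wandb.log({"train/loss": loss}, step=step)

    writer.close()
    wandb.finish()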
kizinfo
5d513eea22
Add ability to load all text files from a subdirectory for training (#1997)
...
* Update utils.py
Returns individual .txt files and subdirectories to get_datasets, allowing training from a directory of text files.
* Update training.py
Minor tweak to training on raw datasets: detect whether a directory is selected and, if so, load all the .txt files in that directory for training.
* Update put-trainer-datasets-here.txt
Document the new behavior.
* Minor change
* Use pathlib, sort by natural keys
* Space
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 11:44:30 -03:00
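Per the notes above, the mechanism is pathlib enumeration plus natural-key sorting; a sketch of that idea (function names are illustrative):

    import re
    from pathlib import Path

    def natural_key(name: str):
        # Splits digit runs into ints so "file10.txt" sorts after "file2.txt".
        return [int(s) if s.isdigit() else s.lower() for s in re.split(r"(\d+)", name)]

    def load_txt_dir(dirpath: str) -> str:
        files = sorted(Path(dirpath).glob("*.txt"), key=lambda p: natural_key(p.name))
        return "\n\n".join(p.read_text(encoding="utf-8") for p in files)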
practicaldreamer
73a0def4af
Add Feature to Log Sample of Training Dataset for Inspection (#1711)
2023-07-12 11:26:45 -03:00
oobabooga
b6ba68eda9
Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
2023-07-12 07:19:34 -07:00
oobabooga
a17b78d334
Disable wandb during training
2023-07-12 07:19:12 -07:00
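Suppressing wandb without touching the training loop is usually done through its environment switch, set before anything imports wandb (a sketch; the commit's exact mechanism may differ):

    import os

    # "offline" keeps local logs without uploading; "disabled" skips logging entirely.
    os.environ["WANDB_MODE"] = "offline"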
Gabriel Pena
eedb3bf023
Add low vram mode on llama cpp (#3076)
2023-07-12 11:05:13 -03:00
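In llama-cpp-python of that era this surfaced as a constructor flag; shown here as an assumption about the then-current API (the option was later removed upstream):

    from llama_cpp import Llama

    # low_vram trades some speed for a smaller VRAM scratch allocation.
    llm = Llama(model_path="models/llama-7b.ggmlv3.q4_0.bin",
                n_gpu_layers=20, low_vram=True)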
oobabooga
180420d2c9
Fix send_pictures extension
2023-07-11 20:56:01 -07:00
original-subliminal-thought-criminal
ad07839a7b
Fix small bug when arbitrarily loading a character.json that doesn't exist (#2643)
...
* Fixes #2482
* Corrected erroneous variable
* Use .exists()
---------
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-07-12 00:16:36 -03:00
Axiom Wolf
d986c17c52
Chat history download creates more detailed file names (#3051)
2023-07-12 00:10:36 -03:00
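"More detailed" plausibly means embedding the character name and a timestamp; a hypothetical sketch, not the PR's actual format string:

    from datetime import datetime

    def history_filename(character: str, mode: str) -> str:
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        return f"{character}_{mode}_{stamp}.json"

    history_filename("Assistant", "chat")  # e.g. 'Assistant_chat_20230712-001036.json'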
atriantafy
d9fabdde40
Add context_instruct to API. Load default model instruction template … (#2688)
2023-07-12 00:01:03 -03:00
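A hypothetical request body illustrating the new field; the endpoint and the other keys are assumptions from memory of the old chat API, not authoritative:

    import requests

    payload = {
        "user_input": "Summarize the plot of Dune.",                   # placeholder key
        "mode": "instruct",                                            # placeholder key
        "context_instruct": "You are a concise literary assistant.",   # new field
    }
    r = requests.post("http://127.0.0.1:5000/api/v1/chat", json=payload)  # assumed endpoint
    print(r.json())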
Salvador E. Tropea
324e45b848
[Fixed] wbits and groupsize values from model not shown (#2977)
2023-07-11 23:27:38 -03:00