Author | Commit | Message | Date
Michael Sullivan | 1c68c05b66 | model in the TTS extensions clobbered global model | 2023-07-21 02:33:11 -05:00
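The bug fixed in 1c68c05b66 is a classic Python name-shadowing pitfall: an extension that reuses a name as generic as `model` can rebind the global that the rest of the app reads. A minimal sketch of the pattern, with hypothetical names rather than the extension's actual code:

```python
# Hypothetical sketch of the shadowing pitfall; names are illustrative.

model = "the-shared-text-generation-model"  # stands in for the global LLM

def load_tts_badly():
    global model
    model = "a-tts-synthesizer"  # rebinds the global, clobbering the LLM

def load_tts_safely():
    global tts_model
    tts_model = "a-tts-synthesizer"  # extension state under its own name

load_tts_badly()
print(model)  # "a-tts-synthesizer": the shared model reference is gone
```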
jllllll | 87926d033d | Bump exllama module to 0.0.7 (#3211) | 2023-07-19 22:24:47 -03:00
oobabooga | 0d7f43225f | Merge branch 'dev' | 2023-07-19 07:20:13 -07:00
oobabooga | 08c23b62c7 | Bump llama-cpp-python and transformers | 2023-07-19 07:19:12 -07:00
oobabooga | 5447e75191 | Merge branch 'dev' | 2023-07-18 15:36:26 -07:00
oobabooga | 8ec225f245 | Add EOS/BOS tokens to Llama-2 template, following https://github.com/ggerganov/llama.cpp/issues/2262#issuecomment-1641063329 | 2023-07-18 15:35:27 -07:00
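For reference, the Llama-2 chat format that the linked llama.cpp thread converges on wraps each turn in BOS/EOS tokens. A sketch of the prompt construction; exact whitespace varies between implementations, so treat this as an approximation:

```python
# Approximate Llama-2 chat prompt with explicit BOS/EOS framing,
# per the format discussed in the linked llama.cpp issue.
BOS, EOS = "<s>", "</s>"

def llama2_prompt(system, turns):
    """turns: list of (user_message, assistant_reply_or_None) pairs."""
    out = ""
    for i, (user, reply) in enumerate(turns):
        # The system prompt rides inside the first [INST] block only.
        sys_block = f"<<SYS>>\n{system}\n<</SYS>>\n\n" if i == 0 else ""
        out += f"{BOS}[INST] {sys_block}{user} [/INST]"
        if reply is not None:
            out += f" {reply} {EOS}"  # each completed turn ends with EOS
    return out

print(llama2_prompt("Be terse.", [("Hi!", "Hello."), ("Ping?", None)]))
```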
oobabooga | 3ef49397bb | Merge pull request #3195 from oobabooga/dev (v1.3) | 2023-07-18 17:33:11 -03:00
oobabooga | 070a886278 | Revert "Prevent lists from flickering in chat mode while streaming" (reverts commit 5e5d926d2b) | 2023-07-18 13:23:29 -07:00
oobabooga | a2918176ea | Update LLaMA-v2-model.md (thanks Panchovix) | 2023-07-18 13:21:18 -07:00
oobabooga | e0631e309f | Create instruction template for Llama-v2 (#3194) | 2023-07-18 17:19:18 -03:00
oobabooga | 603c596616 | Add LLaMA-v2 conversion instructions | 2023-07-18 10:29:56 -07:00
jllllll | c535f14e5f | Bump bitsandbytes Windows wheel to 0.40.2 (#3186) | 2023-07-18 11:39:43 -03:00
jllllll | d7a14174a2 | Remove auto-loading when only one model is available (#3187) | 2023-07-18 11:39:08 -03:00
randoentity | a69955377a | [GGML] Support for customizable RoPE (#3083) (co-authored-by: oobabooga) | 2023-07-17 22:32:37 -03:00
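Commit a69955377a exposes RoPE customization for GGML models. A minimal sketch of how the two usual knobs enter the rotary-embedding math; the parameter names mirror the webui's loader options and are assumptions, not the PR's code:

```python
# Illustrative RoPE angle computation; freq_base and compress_pos_emb
# are assumed names mirroring the webui's loader options, not PR code.
def rope_angles(position, dim, freq_base=10000.0, compress_pos_emb=1.0):
    pos = position / compress_pos_emb  # linear position compression
    # One rotation angle per pair of hidden dimensions:
    return [pos / (freq_base ** (2 * i / dim)) for i in range(dim // 2)]

# Raising freq_base slows the rotation frequencies, which stretches how
# far positions can go before the angles alias; compression does too.
print(rope_angles(4096, 8))
print(rope_angles(4096, 8, freq_base=40000.0))
```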
appe233 | 89e0d15cf5 | Use 'torch.backends.mps.is_available' to check if mps is supported (#3164) | 2023-07-17 21:27:18 -03:00
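`torch.backends.mps.is_available()` is the documented PyTorch check for Apple's Metal backend. A minimal device-selection sketch using that call:

```python
import torch

# Prefer CUDA, then Apple's Metal backend (MPS), then plain CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.ones(2, 2, device=device)
print(x.device)
```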
dependabot[bot] | 234c58ccd1 | Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) | 2023-07-17 21:24:51 -03:00
oobabooga | 49a5389bd3 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 21:23:59 -03:00
oobabooga | 8c1c2e0fae | Increase max_new_tokens upper limit | 2023-07-17 17:08:22 -07:00
oobabooga | 5e5d926d2b | Prevent lists from flickering in chat mode while streaming | 2023-07-17 17:00:49 -07:00
dependabot[bot] | 02a5fe6aa2 | Bump accelerate from 0.20.3 to 0.21.0 (https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0) | 2023-07-17 20:18:31 +00:00
oobabooga | 60a3e70242 | Update LLaMA links and info | 2023-07-17 12:51:01 -07:00
oobabooga | f83fdb9270 | Don't reset LoRA menu when loading a model | 2023-07-17 12:50:25 -07:00
oobabooga | 4ce766414b | Bump AutoGPTQ version | 2023-07-17 10:02:12 -07:00
oobabooga | b1a6ea68dd | Disable "autoload the model" by default | 2023-07-17 07:40:56 -07:00
oobabooga | 656b457795 | Add Airoboros-v1.2 template | 2023-07-17 07:27:42 -07:00
oobabooga | a199f21799 | Optimize llamacpp_hf a bit | 2023-07-16 20:49:48 -07:00
oobabooga | 9f08038864 | Merge pull request #3163 from oobabooga/dev (v1.2) | 2023-07-16 02:43:18 -03:00
oobabooga | 6a3edb0542 | Clean up llamacpp_hf.py | 2023-07-15 22:40:55 -07:00
oobabooga | 2de0cedce3 | Fix reload screen color | 2023-07-15 22:39:39 -07:00
oobabooga | 13449aa44d | Decrease download timeout | 2023-07-15 22:30:08 -07:00
oobabooga | 27a84b4e04 | Make AutoGPTQ the default again (purely for compatibility with more models; you should still use ExLlama_HF for LLaMA models) | 2023-07-15 22:29:23 -07:00
oobabooga | 5e3f7e00a9 | Create llamacpp_HF loader (#3062) | 2023-07-16 02:21:13 -03:00
Panchovix | 7c4d4fc7d3 | Increase alpha value limit for NTK RoPE scaling for exllama/exllama_HF (#3149) | 2023-07-16 01:56:04 -03:00
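The alpha value in NTK-aware RoPE scaling is conventionally folded into an enlarged rotary frequency base rather than a position compression. A common formulation, given here as an assumption about the internals rather than anything taken from this commit:

```python
# Common NTK-aware mapping from alpha to rotary base (assumed, not
# quoted from exllama's source):
def ntk_rope_base(alpha, head_dim, base=10000.0):
    return base * alpha ** (head_dim / (head_dim - 2))

# alpha = 1 leaves the base unchanged; larger alpha stretches context.
for alpha in (1.0, 2.0, 4.0):
    print(alpha, round(ntk_rope_base(alpha, 128)))
```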
ofirkris | 780a2f2e16 | Bump llama cpp version (#3160) to support better 8K RoPE scaling (co-authored-by: oobabooga) | 2023-07-16 01:54:56 -03:00
jllllll | ed3ffd212d | Bump bitsandbytes to 0.40.1.post1 (#3156) (817bdf6325...6ec4f0c374) | 2023-07-16 01:53:32 -03:00
oobabooga | 94dfcec237 | Make it possible to evaluate exllama perplexity (#3138) | 2023-07-16 01:52:55 -03:00
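Perplexity evaluation, as in 94dfcec237, is the exponential of the mean per-token negative log-likelihood. A framework-level sketch with toy inputs, not exllama's own evaluation code:

```python
import torch
import torch.nn.functional as F

def perplexity(logits, targets):
    """logits: (seq, vocab) next-token scores; targets: (seq,) true ids.
    Perplexity = exp(mean negative log-likelihood per token)."""
    nll = F.cross_entropy(logits, targets, reduction="mean")
    return torch.exp(nll).item()

# Toy example with random scores over a 10-token vocabulary:
logits = torch.randn(5, 10)
targets = torch.randint(0, 10, (5,))
print(perplexity(logits, targets))
```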
oobabooga | b284f2407d | Make ExLlama_HF the new default for GPTQ | 2023-07-14 14:03:56 -07:00
jllllll | 32f12b8bbf | Bump bitsandbytes Windows wheel to 0.40.0.post4 (#3135) | 2023-07-13 17:32:37 -03:00
SeanScripts | 9800745db9 | Color tokens by probability and/or perplexity (#3078) | 2023-07-13 17:30:22 -03:00
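The coloring in #3078 needs a per-token scalar to map onto a color scale. A sketch of one such scalar, the probability the model assigned to each emitted token; illustrative, not the PR's code:

```python
import torch

def token_probs(logits, token_ids):
    """logits: (seq, vocab) scores; token_ids: (seq,) chosen tokens.
    Returns the probability the model assigned to each chosen token,
    which a UI can then map onto a color scale."""
    probs = torch.softmax(logits, dim=-1)
    return probs[torch.arange(len(token_ids)), token_ids]

print(token_probs(torch.randn(4, 10), torch.tensor([1, 3, 5, 7])))
```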
oobabooga | 146e8b2a6c | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2023-07-13 13:23:38 -07:00
Morgan Schweers | 6d1e911577 | Add support for logits processors in extensions (#3029) | 2023-07-13 17:22:41 -03:00
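Commit 6d1e911577 lets extensions hook HuggingFace-style logits processors into generation. The `transformers.LogitsProcessor` interface below is standard API; how an extension registers the instance is paraphrased, since the exact hook name isn't shown here:

```python
import torch
from transformers import LogitsProcessor

class BanTokenProcessor(LogitsProcessor):
    """Example processor: forbid one token id at every generation step."""

    def __init__(self, banned_id: int):
        self.banned_id = banned_id

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.banned_id] = -float("inf")  # zero its probability
        return scores

# An extension appends an instance to the processor list that the webui
# ultimately passes to model.generate(logits_processor=...).
```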
oobabooga | 22341e948d | Merge branch 'main' into dev | 2023-07-12 14:19:49 -07:00
oobabooga | 0e6295886d | Fix lora download folder | 2023-07-12 14:19:33 -07:00
oobabooga | eb823fce96 | Fix typo | 2023-07-12 13:55:19 -07:00
oobabooga | d0a626f32f | Change reload screen color | 2023-07-12 13:54:43 -07:00
oobabooga | c592a9b740 | Fix #3117 | 2023-07-12 13:33:44 -07:00
oobabooga | 6447b2eea6 | Merge pull request #3116 from oobabooga/dev (v1.1) | 2023-07-12 15:55:40 -03:00
oobabooga | 2463d7c098 | Spaces | 2023-07-12 11:35:43 -07:00
oobabooga | e202190c4f | lint | 2023-07-12 11:33:25 -07:00
FartyPants | 9b55d3a9f9 | More robust and less error-prone training (#3058) | 2023-07-12 15:29:43 -03:00