Author | Commit | Message | Date
jllllll | 859b4fd737 | Bump exllama to 0.1.17 (#3847) | 2023-09-11 01:13:14 -03:00
dependabot[bot] | 1d6b384828 | Update transformers requirement from ==4.32.* to ==4.33.* (#3865) | 2023-09-11 01:12:22 -03:00
jllllll | e8f234ca8f | Bump llama-cpp-python to 0.1.84 (#3854) | 2023-09-11 01:11:33 -03:00
oobabooga | 66d5caba1b | Pin pydantic version (closes #3850) | 2023-09-10 21:09:04 -07:00
oobabooga | 0576691538 | Add optimum to requirements (for GPTQ LoRA training) | 2023-08-31 08:45:38 -07:00
    See https://github.com/oobabooga/text-generation-webui/issues/3655
jllllll | 9626f57721 | Bump exllama to 0.0.14 (#3758) | 2023-08-30 13:43:38 -03:00
jllllll | dac5f4b912 | Bump llama-cpp-python to 0.1.83 (#3745) | 2023-08-29 22:35:59 -03:00
VishwasKukreti | a9a1784420 | Update accelerate to 0.22 in requirements.txt (#3725) | 2023-08-29 17:47:37 -03:00
jllllll | fe1f7c6513 | Bump ctransformers to 0.2.25 (#3740) | 2023-08-29 17:24:36 -03:00
jllllll | 22b2a30ec7 | Bump llama-cpp-python to 0.1.82 (#3730) | 2023-08-28 18:02:24 -03:00
jllllll | 7d3a0b5387 | Bump llama-cpp-python to 0.1.81 (#3716) | 2023-08-27 22:38:41 -03:00
oobabooga | 7f5370a272 | Minor fixes/cosmetics | 2023-08-26 22:11:07 -07:00
jllllll | 4a999e3bcd | Use separate llama-cpp-python packages for GGML support | 2023-08-26 10:40:08 -05:00
oobabooga | 6e6431e73f | Update requirements.txt | 2023-08-26 01:07:28 -07:00
cal066 | 960980247f | ctransformers: gguf support (#3685) | 2023-08-25 11:33:04 -03:00
oobabooga | 26c5e5e878 | Bump autogptq | 2023-08-24 19:23:08 -07:00
oobabooga | 2b675533f7 | Un-bump safetensors | 2023-08-23 14:36:03 -07:00
    The newest one doesn't work on Windows yet
oobabooga | 335c49cc7e | Bump peft and transformers | 2023-08-22 13:14:59 -07:00
tkbit | df165fe6c4 | Use numpy==1.24 in requirements.txt (#3651) | 2023-08-22 16:55:17 -03:00
    The whisper extension needs numpy 1.24 to work properly
cal066 | e042bf8624 | ctransformers: add mlock and no-mmap options (#3649) | 2023-08-22 16:51:34 -03:00
oobabooga | b96fd22a81 | Refactor the training tab (#3619) | 2023-08-18 16:58:38 -03:00
jllllll | 1a71ab58a9 | Bump llama_cpp_python_cuda to 0.1.78 (#3614) | 2023-08-18 12:04:01 -03:00
oobabooga | 6170b5ba31 | Bump llama-cpp-python | 2023-08-17 21:41:02 -07:00
oobabooga | ccfc02a28d | Add the --disable_exllama option for AutoGPTQ (#3545 from clefever/disable-exllama) | 2023-08-14 15:15:55 -03:00
oobabooga | 8294eadd38 | Bump AutoGPTQ wheel | 2023-08-14 11:13:46 -07:00
jllllll | 73421b1fed | Bump ctransformers wheel version (#3558) | 2023-08-12 23:02:47 -03:00
cal066 | 7a4fcee069 | Add ctransformers support (#3313) | 2023-08-11 14:41:33 -03:00
    Co-authored-by: cal066 <cal066@users.noreply.github.com>
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
    Co-authored-by: randoentity <137087500+randoentity@users.noreply.github.com>
jllllll | bee73cedbd | Streamline GPTQ-for-LLaMa support | 2023-08-09 23:42:34 -05:00
oobabooga | a4e48cbdb6 | Bump AutoGPTQ | 2023-08-09 08:31:17 -07:00
oobabooga | 7c1300fab5 | Pin aiofiles version to fix statvfs issue | 2023-08-09 08:07:55 -07:00
oobabooga | 2d0634cd07 | Bump transformers commit for positive prompts | 2023-08-07 08:57:19 -07:00
oobabooga | 0af10ab49b | Add Classifier Free Guidance (CFG) for Transformers/ExLlama (#3325) | 2023-08-06 17:22:48 -03:00
jllllll | 5ee95d126c | Bump exllama wheels to 0.0.10 (#3467) | 2023-08-05 13:46:14 -03:00
jllllll | 6e30f76ba5 | Bump bitsandbytes to 0.41.1 (#3457) | 2023-08-04 19:28:59 -03:00
jllllll | c4e14a757c | Bump exllama module to 0.0.9 (#3338) | 2023-07-29 22:16:23 -03:00
oobabooga | 77d2e9f060 | Remove flexgen 2 | 2023-07-25 15:18:25 -07:00
oobabooga | a07d070b6c | Add llama-2-70b GGML support (#3285) | 2023-07-24 16:37:03 -03:00
oobabooga | 6f4830b4d3 | Bump peft commit | 2023-07-24 09:49:57 -07:00
jllllll | eb105b0495 | Bump llama-cpp-python to 0.1.74 (#3257) | 2023-07-24 11:15:42 -03:00
jllllll | 152cf1e8ef | Bump bitsandbytes to 0.41.0 (#3258) | 2023-07-24 11:06:18 -03:00
    e229fbce66...a06a0f6a08
jllllll | 8d31d20c9a | Bump exllama module to 0.0.8 (#3256) | 2023-07-24 11:05:54 -03:00
    39b3541cdd...3f83ebb378
oobabooga | 63ece46213 | Merge branch 'main' into dev | 2023-07-20 07:06:41 -07:00
oobabooga | 4b19b74e6c | Add CUDA wheels for llama-cpp-python by jllllll | 2023-07-19 19:33:43 -07:00
jllllll | 87926d033d | Bump exllama module to 0.0.7 (#3211) | 2023-07-19 22:24:47 -03:00
oobabooga | 08c23b62c7 | Bump llama-cpp-python and transformers | 2023-07-19 07:19:12 -07:00
jllllll | c535f14e5f | Bump bitsandbytes Windows wheel to 0.40.2 (#3186) | 2023-07-18 11:39:43 -03:00
dependabot[bot] | 234c58ccd1 | Bump bitsandbytes from 0.40.1.post1 to 0.40.2 (#3178) | 2023-07-17 21:24:51 -03:00
oobabooga | 49a5389bd3 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 21:23:59 -03:00
dependabot[bot] | 02a5fe6aa2 | Bump accelerate from 0.20.3 to 0.21.0 | 2023-07-17 20:18:31 +00:00
    Bumps [accelerate](https://github.com/huggingface/accelerate) from 0.20.3 to 0.21.0.
    - [Release notes](https://github.com/huggingface/accelerate/releases)
    - [Commits](https://github.com/huggingface/accelerate/compare/v0.20.3...v0.21.0)
    updated-dependencies:
    - dependency-name: accelerate
      dependency-type: direct:production
      update-type: version-update:semver-minor
    Signed-off-by: dependabot[bot] <support@github.com>
oobabooga | 4ce766414b | Bump AutoGPTQ version | 2023-07-17 10:02:12 -07:00