Author | Commit | Message | Date
oobabooga | b92d7fd43e | Add warnings for when AutoGPTQ, TensorRT-LLM, or HQQ are missing | 2024-09-28 20:30:24 -07:00
oobabooga | 65e5864084 | Update README | 2024-09-28 20:25:26 -07:00
oobabooga | 1a870b3ea7 | Remove AutoAWQ and AutoGPTQ from requirements (no wheels available) | 2024-09-28 19:38:56 -07:00
oobabooga | 85994e3ef0 | Bump pytorch to 2.4.1 | 2024-09-28 09:44:08 -07:00
oobabooga | ca5a2dba72 | Bump rocm to 6.1.2 | 2024-09-28 09:39:53 -07:00
oobabooga | 7276dca933 | Fix a typo | 2024-09-27 20:28:17 -07:00
RandoInternetPreson | 46996f6519 | ExllamaV2 tensor parallelism to increase multi gpu inference speeds (#6356) | 2024-09-28 00:26:03 -03:00
Philipp Emanuel Weidmann | 301375834e | Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition (#6335) | 2024-09-27 22:50:12 -03:00
oobabooga | 3492e33fd5 | Bump bitsandbytes to 0.44 | 2024-09-27 16:59:30 -07:00
Thireus ☠ | 626b0a0437 | Force /bin/bash shell for conda (#6386) | 2024-09-27 19:47:04 -03:00
oobabooga | 5c918c5b2d | Make it possible to sort DRY | 2024-09-27 15:40:48 -07:00
oobabooga | 78b8705400 | Bump llama-cpp-python to 0.3.0 (except for AMD) | 2024-09-27 15:06:31 -07:00
oobabooga | c5f048e912 | Bump ExLlamaV2 to 0.2.2 | 2024-09-27 15:04:08 -07:00
oobabooga | 7424f789bf | Fix the sampling monkey patch (and add more options to sampler_priority) (#6411) | 2024-09-27 19:03:25 -03:00
oobabooga | c497a32372 | Bump transformers to 4.45 | 2024-09-26 11:55:51 -07:00
oobabooga | a50477ec85 | Apply the change to all requirements (oops) | 2024-09-06 18:47:25 -07:00
oobabooga | e86ab37aaf | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-09-06 18:44:43 -07:00
oobabooga | 27797a92d0 | Pin fastapi/pydantic requirement versions | 2024-09-06 18:38:57 -07:00
Jean-Sylvain Boige | 4924ee2901 | typo in OpenAI response format (#6365) | 2024-09-05 21:42:23 -03:00
oobabooga | bba5b36d33 | Don't import PEFT unless necessary | 2024-09-03 19:40:53 -07:00
oobabooga | c5b40eb555 | llama.cpp: prevent prompt evaluation progress bar with just 1 step | 2024-09-03 17:37:06 -07:00
oobabooga | 2cb8d4c96e | Bump llama-cpp-python to 0.2.90 | 2024-09-03 05:53:18 -07:00
oobabooga | 64919e0d69 | Bump flash-attention to 2.6.3 | 2024-09-03 05:51:46 -07:00
oobabooga | 68d52c60f3 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-09-02 21:16:39 -07:00
oobabooga | d1168afa76 | Bump ExLlamaV2 to 0.2.0 | 2024-09-02 21:15:51 -07:00
Stefan Merettig | 9a150c3368 | API: Relax multimodal format, fixes HuggingFace Chat UI (#6353) | 2024-09-02 23:03:15 -03:00
GralchemOz | 4c74c7a116 | Fix UnicodeDecodeError for BPE-based Models (especially GLM-4) (#6357) | 2024-09-02 23:00:59 -03:00
FartyPants (FP HAM) | 41a8eb4eeb | Training pro update script.py (#6359) | 2024-09-02 23:00:15 -03:00
oobabooga | 1f288b4072 | Bump ExLlamaV2 to 0.1.9 | 2024-08-22 12:40:15 -07:00
joachimchauvet | c24966c591 | update API documentation with examples to list/load models (#5902) | 2024-08-21 15:33:45 -03:00
oobabooga | 1124f71cf3 | Update README.md | 2024-08-20 11:19:46 -03:00
oobabooga | d9a031fcad | Update README.md | 2024-08-20 01:52:30 -03:00
oobabooga | 9d99156ca3 | Update README.md | 2024-08-20 01:27:02 -03:00
oobabooga | 406995f722 | Update README | 2024-08-19 21:24:01 -07:00
oobabooga | 1b1518aa6a | Update README.md | 2024-08-20 00:36:18 -03:00
oobabooga | 5058269143 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2024-08-19 19:55:45 -07:00
oobabooga | fd9cb26619 | UI: update the DRY parameters descriptions/order | 2024-08-19 19:40:17 -07:00
dependabot[bot] | 64e16e9a46 | Update accelerate requirement from ==0.32.* to ==0.33.* (#6291) | 2024-08-19 23:34:10 -03:00
dependabot[bot] | 68f928b5e0 | Update peft requirement from ==0.8.* to ==0.12.* (#6292) | 2024-08-19 23:33:56 -03:00
oobabooga | 8bac1a9382 | Update README.md | 2024-08-19 23:10:04 -03:00
oobabooga | bb987ffe66 | Update README.md | 2024-08-19 23:06:52 -03:00
oobabooga | 4d8c1801c2 | Bump llama-cpp-python to 0.2.89 | 2024-08-19 17:45:01 -07:00
oobabooga | bf8187124d | Bump llama-cpp-python to 0.2.88 | 2024-08-13 12:40:18 -07:00
oobabooga | 089d5a9415 | Bump llama-cpp-python to 0.2.87 | 2024-08-07 20:36:28 -07:00
oobabooga | 81773f7f36 | Bump transformers to 4.44 | 2024-08-06 20:07:05 -07:00
oobabooga | e926c03b3d | Add a --tokenizer-dir command-line flag for llamacpp_HF | 2024-08-06 19:41:18 -07:00
oobabooga | f106e780ba | downloader: use 1 session for all files for better speed | 2024-08-06 19:41:12 -07:00
oobabooga | 608545d282 | Bump llama-cpp-python to 0.2.85 | 2024-07-31 18:44:46 -07:00
oobabooga | 30b4d8c8b2 | Fix Llama 3.1 template including lengthy "tools" headers | 2024-07-29 11:52:17 -07:00
oobabooga | f4d95f33b8 | downloader: better progress bar | 2024-07-28 22:21:56 -07:00