Artificiangel
1b44204bd7
Use custom model/LoRA download folder in model downloader
2024-04-29 07:21:09 -04:00
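A minimal sketch of how a custom download folder option can work, assuming an argparse-based CLI; the --output flag name and the models/ fallback are illustrative, not necessarily the script's actual interface.

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser()
# Hypothetical flag; the real script's option name may differ.
parser.add_argument("--output", default=None,
                    help="Custom folder to save the downloaded model/LoRA into.")
args = parser.parse_args()

# Fall back to a conventional models/ folder when no custom path is given.
output_folder = Path(args.output) if args.output else Path("models") / "downloaded-model"
output_folder.mkdir(parents=True, exist_ok=True)
```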
oobabooga
5770e06c48
Add a retry mechanism to the model downloader (#5943)
2024-04-27 12:25:28 -03:00
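A hedged sketch of what a retry mechanism for a streaming download can look like; exponential backoff, the retry count, and the chunk size are illustrative choices, not the PR's exact implementation.

```python
import time
import requests

def download_with_retries(url, dest, max_retries=5, timeout=20):
    """Retry a streaming download with exponential backoff."""
    for attempt in range(max_retries):
        try:
            with requests.get(url, stream=True, timeout=timeout) as r:
                r.raise_for_status()
                with open(dest, "wb") as f:
                    for chunk in r.iter_content(chunk_size=1024 * 1024):
                        f.write(chunk)
            return
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```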
zaypen
a90509d82e
Model downloader: Take HF_ENDPOINT into consideration (#5571)
2024-04-11 18:28:10 -03:00
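Honoring HF_ENDPOINT means building download URLs from the environment variable instead of a hard-coded hub address, which lets users point the downloader at a mirror. A sketch, with an illustrative repo path:

```python
import os

# HF_ENDPOINT overrides the default hub address, e.g. for mirror users:
#   HF_ENDPOINT=https://hf-mirror.com python download-model.py org/model
base = os.environ.get("HF_ENDPOINT", "https://huggingface.co").rstrip("/")
url = f"{base}/org/model/resolve/main/model.safetensors"  # illustrative path
```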
oobabooga
830168d3d4
Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. ( #4383 )"
This reverts commit 0ced78fdfa.
2024-02-26 05:54:33 -08:00
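Both approaches below hash a file without holding it in memory all at once; hashlib.file_digest is the Python 3.11+ shortcut that the reverted commit introduced, while the manual chunked loop works on older interpreters (a plausible reason for the revert, though the log does not say).

```python
import hashlib

def sha256_chunked(path, chunk_size=1024 * 1024):
    """Stream the file through sha256 so it never has to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def sha256_file_digest(path):
    """Equivalent shortcut on Python 3.11+, where hashlib.file_digest exists."""
    with open(path, "rb") as f:
        return hashlib.file_digest(f, "sha256").hexdigest()
```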
oobabooga
f465b7b486
Downloader: start one session per file (#5520)
2024-02-16 12:55:27 -03:00
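One way to give each file its own requests.Session, sketched here together with a small thread pool (the "4 threads by default" entry further down uses the same pool idea). requests.Session is not documented as thread-safe, so a per-file session avoids sharing connection state across workers; URLs and the worker count are illustrative.

```python
import concurrent.futures
import requests

def download_file(url, dest):
    # A fresh Session per file: no connection state shared between threads.
    with requests.Session() as session:
        with session.get(url, stream=True, timeout=20) as r:
            r.raise_for_status()
            with open(dest, "wb") as f:
                for chunk in r.iter_content(chunk_size=1024 * 1024):
                    f.write(chunk)

jobs = [("https://example.com/a.bin", "a.bin")]  # illustrative
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(download_file, u, d) for u, d in jobs]
    for fut in concurrent.futures.as_completed(futures):
        fut.result()  # re-raise any download error
```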
oobabooga
44018c2f69
Add a "llamacpp_HF creator" menu ( #5519 )
2024-02-16 12:43:24 -03:00
oobabooga
ee65f4f014
Downloader: don't assume that huggingface_hub is installed
2024-01-30 09:14:11 -08:00
Anthony Guijarro
828be63f2c
Downloader: use HF get_token function (#5381)
2024-01-27 17:13:09 -03:00
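The two entries above combine naturally: read the stored token through huggingface_hub's get_token helper when the package is available, and degrade gracefully when it is not. A sketch, under the assumption that huggingface_hub exposes get_token (recent versions do):

```python
import requests

try:
    from huggingface_hub import get_token
except ImportError:
    def get_token():
        # huggingface_hub is optional; without it, proceed unauthenticated.
        return None

headers = {}
token = get_token()
if token:
    # The standard bearer-token header lets requests reach gated repos.
    headers["Authorization"] = f"Bearer {token}"

r = requests.get("https://huggingface.co/api/models/gpt2",
                 headers=headers, timeout=20)
```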
oobabooga
7bbe7e803a
Minor fix
2023-12-08 05:01:25 -08:00
oobabooga
d516815c9c
Model downloader: download only fp16 if both fp16 and GGUF are present
2023-12-05 21:09:12 -08:00
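A sketch of the selection rule: when a repository mixes fp16 weights with GGUF quantizations, keep only the fp16 files. The extension heuristics are illustrative.

```python
def prefer_fp16(files):
    """Drop GGUF files when fp16 weights are also present."""
    has_fp16 = any(f.endswith((".safetensors", ".bin")) for f in files)
    has_gguf = any(f.endswith(".gguf") for f in files)
    if has_fp16 and has_gguf:
        return [f for f in files if not f.endswith(".gguf")]
    return files

print(prefer_fp16(["model.safetensors", "model.Q4_K_M.gguf"]))
# -> ['model.safetensors']
```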
oobabooga
510a01ef46
Lint
2023-11-16 18:03:06 -08:00
LightningDragon
0ced78fdfa
Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into RAM before hashing them. (#4383)
2023-10-25 12:15:34 -03:00
oobabooga
613feca23b
Make Colab functional for llama.cpp
- Download only Q4_K_M for GGUF repositories by default
- Use maximum n-gpu-layers by default
2023-10-22 09:08:25 -07:00
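The Q4_K_M default can be expressed as a simple filter over a GGUF repo's file list, falling back to all quantizations when no Q4_K_M file exists; a sketch:

```python
def pick_default_gguf(files):
    """Default to the Q4_K_M quantization when a GGUF repo provides one."""
    ggufs = [f for f in files if f.lower().endswith(".gguf")]
    q4km = [f for f in ggufs if "q4_k_m" in f.lower()]
    return q4km or ggufs

print(pick_default_gguf(["m.Q4_K_M.gguf", "m.Q8_0.gguf"]))
# -> ['m.Q4_K_M.gguf']
```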
oobabooga
cd45635f53
tqdm improvement for Colab
2023-10-21 22:00:29 -07:00
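Notebook frontends like Colab redraw slowly, so throttling tqdm's refresh rate and fixing the bar width keeps the output cell readable; the parameters below are illustrative, not the commit's exact values.

```python
from tqdm import tqdm

total_bytes = 512 * 1024 * 1024  # pretend file size
with tqdm(total=total_bytes, unit="iB", unit_scale=True,
          mininterval=1.0, ncols=80) as bar:
    done = 0
    while done < total_bytes:
        chunk = min(1024 * 1024, total_bytes - done)
        done += chunk
        bar.update(chunk)  # mininterval caps redraws at roughly one per second
```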
oobabooga
3a9d90c3a1
Download models with 4 threads by default
2023-10-10 13:52:10 -07:00
快乐的我531
4e56ad55e1
Let model downloader download *.tiktoken as well (#4121)
2023-09-28 18:03:18 -03:00
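Supporting *.tiktoken amounts to extending the downloader's filename allow-list; a simplified version of such a pattern (the real one in download-model.py is more involved):

```python
import re

# Illustrative allow-list of downloadable artifact extensions.
ALLOWED = re.compile(r"\.(safetensors|bin|gguf|json|txt|model|tiktoken)$")

files = ["tokenizer.tiktoken", "model.safetensors", "notes.md"]
print([f for f in files if ALLOWED.search(f)])
# -> ['tokenizer.tiktoken', 'model.safetensors']
```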
kalomaze
7c9664ed35
Allow full model URL to be used for download (#3919)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-09-16 10:06:13 -03:00
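Accepting a full URL just means normalizing the argument down to the org/model form before anything else runs; a sketch:

```python
def normalize_model_id(arg):
    """Accept 'org/model' or a full https://huggingface.co/org/model URL."""
    arg = arg.strip().rstrip("/")
    for prefix in ("https://huggingface.co/", "http://huggingface.co/"):
        if arg.startswith(prefix):
            arg = arg[len(prefix):]
    return arg

assert normalize_model_id("https://huggingface.co/org/model") == "org/model"
assert normalize_model_id("org/model") == "org/model"
```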
oobabooga
df52dab67b
Lint
2023-09-11 07:57:38 -07:00
oobabooga
ed86878f02
Remove GGML support
2023-09-11 07:44:00 -07:00
missionfloyd
787219267c
Allow downloading a single file from the UI (#3737)
2023-08-29 23:32:36 -03:00
Alberto Ferrer
f63dd83631
Update download-model.py (allow single-file download) (#3732)
2023-08-29 22:57:58 -03:00
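Both single-file entries boil down to filtering the repo's file list to one requested name; a sketch with a hypothetical --specific-file flag (the naming here is illustrative, not confirmed against the script):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("model", help="Model ID, e.g. org/model")
# Hypothetical flag name, shown to illustrate the single-file idea.
parser.add_argument("--specific-file", default=None,
                    help="Download only this one file from the repository.")

def select_files(all_files, specific_file):
    if specific_file:
        return [f for f in all_files if f == specific_file]
    return all_files

print(select_files(["a.gguf", "b.gguf"], "b.gguf"))  # -> ['b.gguf']
```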
oobabooga
7f5370a272
Minor fixes/cosmetics
2023-08-26 22:11:07 -07:00
jllllll
4a999e3bcd
Use separate llama-cpp-python packages for GGML support
2023-08-26 10:40:08 -05:00
oobabooga
83640d6f43
Replace ggml occurrences with gguf
2023-08-26 01:06:59 -07:00
Thomas De Bonnet
0dfd1a8b7d
Improve readability of download-model.py (#3497)
2023-08-20 20:13:13 -03:00
oobabooga
4b3384e353
Handle unfinished lists during markdown streaming
2023-08-03 17:15:18 -07:00
oobabooga
13449aa44d
Decrease download timeout
2023-07-15 22:30:08 -07:00
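The timeout entries here and below ("Increase download timeout to 20s", "Add a timeout to download-model.py requests") all hinge on requests' timeout parameter, without which a stalled connection hangs forever. A sketch using the (connect, read) tuple form; the values are illustrative:

```python
import requests

# timeout=(connect, read): fail fast on unreachable hosts while still
# tolerating slow reads. Omitting timeout means waiting indefinitely.
with requests.Session() as s:
    r = s.get("https://huggingface.co", timeout=(5, 10))
    print(r.status_code)
```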
oobabooga
e202190c4f
Lint
2023-07-12 11:33:25 -07:00
Ahmad Fahadh Ilyas
8db7e857b1
Add token authorization for downloading models (#3067)
2023-07-11 18:48:08 -03:00
FartyPants
61102899cd
Google Flan-T5 download fix (#3080)
2023-07-11 18:46:59 -03:00
tianchen zhong
c7058afb40
Add a new possible .bin filename regex (#3070)
2023-07-09 17:22:56 -03:00
jeckyhl
88a747b5b9
fix: Error when downloading model from UI (#3014)
2023-07-05 11:27:29 -03:00
AN Long
be4582be40
Support specifying the retry count in download-model.py (#2908)
2023-07-04 22:26:30 -03:00
Roman
38897fbd8a
fix: added model parameter check (#2829)
2023-06-24 10:09:34 -03:00
Gaurav Bhagchandani
89fb6f9236
Fixed the ZeroDivisionError when downloading a model (#2797)
2023-06-21 12:31:50 -03:00
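The likely trap behind such a crash: a server that omits Content-Length yields a total of zero, and any percentage math then divides by zero. A one-line guard, sketched:

```python
def progress_fraction(done, total):
    """Avoid ZeroDivisionError when the server reports no Content-Length."""
    return done / total if total else 0.0

print(progress_fraction(100, 0))   # -> 0.0 instead of crashing
print(progress_fraction(50, 200))  # -> 0.25
```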
oobabooga
5dfe0bec06
Remove old/useless code
2023-06-20 23:36:56 -03:00
oobabooga
faa92eee8d
Add spaces
2023-06-20 23:25:58 -03:00
Peter Sofronas
b22c7199c9
Download optimizations (#2786)
* download_model_files metadata writing improvement
* line swap
* reduce line length
* safer download and greater block size
* Minor changes by pycodestyle
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 23:14:18 -03:00
Morgan Schweers
447569e31a
Add a download progress bar to the web UI. (#2472)
* Show download progress on the model screen.
* In case of error, mark as done to clear progress bar.
* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
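One simple shape for surfacing progress to a UI layer is a generator that yields a completed fraction per chunk; the commit's bullet points (mark done on error, larger blocks to cut overhead) map onto the except branch and the chunk size below. A sketch, not the web UI's actual wiring:

```python
import requests

def download_with_progress(url, dest, chunk_size=1024 * 1024):
    """Yield progress fractions in [0, 1] so a caller can draw a bar."""
    try:
        with requests.get(url, stream=True, timeout=20) as r:
            r.raise_for_status()
            total = int(r.headers.get("content-length", 0))
            done = 0
            with open(dest, "wb") as f:
                for chunk in r.iter_content(chunk_size=chunk_size):
                    f.write(chunk)
                    done += len(chunk)
                    yield done / total if total else 0.0
    except requests.RequestException:
        # On error, still mark as done so the caller can clear the bar.
        yield 1.0
        raise
    else:
        yield 1.0

# Usage: a UI loop can consume the fractions as they arrive.
# for fraction in download_with_progress(url, "model.bin"):
#     bar.set(fraction)  # hypothetical UI call
```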
oobabooga
240752617d
Increase download timeout to 20s
2023-06-08 11:16:38 -03:00
oobabooga
9f215523e2
Remove some unused imports
2023-06-06 07:05:46 -03:00
Morgan Schweers
1aed2b9e52
Make it possible to download protected HF models from the command line. (#2408)
2023-06-01 00:11:21 -03:00
Juan M Uys
b984a44f47
fix error when downloading a model for the first time (#2404)
2023-05-30 22:07:12 -03:00
oobabooga
b4662bf4af
Download gptq_model*.py using download-model.py
2023-05-29 16:12:54 -03:00
oobabooga
39dab18307
Add a timeout to download-model.py requests
2023-05-19 11:19:34 -03:00
oobabooga
c7ba2d4f3f
Change a message in download-model.py
2023-05-10 19:00:14 -03:00
Wojtek Kowaluk
1436c5845a
fix ggml detection regex in model downloader (#1779)
2023-05-04 11:48:36 -03:00
Lou Bernardi
a6ef2429fa
Add "do not download" and "download from HF" to download-model.py ( #1439 )
2023-04-21 12:54:50 -03:00
Rudd-O
69d50e2e86
Fix download script (#1373)
2023-04-19 13:02:32 -03:00
Forkoz
c6fe1ced01
Add ChatGLM support (#1256)
Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00