Commit Graph

66 Commits

Author SHA1 Message Date
oobabooga
faa92eee8d Add spaces 2023-06-20 23:25:58 -03:00
Peter Sofronas
b22c7199c9 Download optimizations (#2786)
* download_model_files metadata writing improvement

* line swap

* reduce line length

* safer download and greater block size

* Minor changes by pycodestyle

---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-06-20 23:14:18 -03:00
Morgan Schweers
447569e31a Add a download progress bar to the web UI. (#2472)
* Show download progress on the model screen.

* In case of error, mark as done to clear progress bar.

* Increase the iteration block size to reduce overhead.
2023-06-20 22:59:14 -03:00
oobabooga
240752617d Increase download timeout to 20s 2023-06-08 11:16:38 -03:00
oobabooga
9f215523e2 Remove some unused imports 2023-06-06 07:05:46 -03:00
Morgan Schweers
1aed2b9e52 Make it possible to download protected HF models from the command line. (#2408) 2023-06-01 00:11:21 -03:00
Juan M Uys
b984a44f47 Fix error when downloading a model for the first time (#2404) 2023-05-30 22:07:12 -03:00
oobabooga
b4662bf4af Download gptq_model*.py using download-model.py 2023-05-29 16:12:54 -03:00
oobabooga
39dab18307 Add a timeout to download-model.py requests 2023-05-19 11:19:34 -03:00
oobabooga
c7ba2d4f3f Change a message in download-model.py 2023-05-10 19:00:14 -03:00
Wojtek Kowaluk
1436c5845a Fix ggml detection regex in model downloader (#1779) 2023-05-04 11:48:36 -03:00
Lou Bernardi
a6ef2429fa Add "do not download" and "download from HF" to download-model.py (#1439) 2023-04-21 12:54:50 -03:00
Rudd-O
69d50e2e86 Fix download script (#1373) 2023-04-19 13:02:32 -03:00
Forkoz
c6fe1ced01 Add ChatGLM support (#1256)
---------

Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
2023-04-16 19:15:03 -03:00
oobabooga
2c14df81a8 Use download-model.py to download the model 2023-04-10 11:36:39 -03:00
oobabooga
170e0c05c4 Typo 2023-04-09 17:00:59 -03:00
oobabooga
34ec02d41d Make download-model.py importable 2023-04-09 16:59:59 -03:00
Blake Wyatt
df561fd896 Fix ggml downloading in download-model.py (#915) 2023-04-08 18:52:30 -03:00
oobabooga
ea6e77df72 Make the code more like PEP8 for readability (#862) 2023-04-07 00:15:45 -03:00
oobabooga
b38ba230f4 Update download-model.py 2023-04-01 15:03:24 -03:00
oobabooga
526d5725db Update download-model.py 2023-04-01 14:47:47 -03:00
oobabooga
23116b88ef Add support for resuming downloads (#654 from nikita-skakun/support-partial-downloads) 2023-03-31 22:55:55 -03:00
oobabooga
74462ac713 Don't override the metadata when checking the sha256sum 2023-03-31 22:52:52 -03:00
oobabooga
875de5d983 Update ggml template 2023-03-31 17:57:31 -03:00
oobabooga
6a44f4aec6 Add support for downloading ggml files 2023-03-31 17:33:42 -03:00
Nikita Skakun
b99bea3c69 Fixed reported header affecting resuming download 2023-03-30 23:11:59 -07:00
oobabooga
92c7068daf Don't download if --check is specified 2023-03-31 01:31:47 -03:00
Nikita Skakun
0cc89e7755 Checksum code now activated by --check flag. 2023-03-30 20:06:12 -07:00
Nikita Skakun
d550c12a3e Fixed the bug with additional bytes.
The issue appears to be that Hugging Face does not report the entire size of the model file.
Added an error message with instructions for when the checksums don't match.
2023-03-30 12:52:16 -07:00
Nikita Skakun
297ac051d9 Added sha256 validation of model files. 2023-03-30 02:34:19 -07:00
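Validation along these lines can be sketched as follows (a minimal stand-alone sketch; `sha256_of_file` and `validate_checksum` are illustrative names, not the script's actual functions):

```python
import hashlib

def sha256_of_file(path, block_size=1024 * 1024):
    """Hash a file in fixed-size blocks so large model files never sit fully in RAM."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(block_size), b""):
            sha.update(block)
    return sha.hexdigest()

def validate_checksum(path, expected):
    """Compare a file's sha256 against the expected hex digest; raise on mismatch."""
    actual = sha256_of_file(path)
    if actual != expected:
        raise ValueError(
            f"Checksum mismatch for {path}:\n"
            f"  expected {expected}\n"
            f"  got      {actual}\n"
            "The file may be corrupt; delete it and download it again."
        )
```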
Nikita Skakun
8c590c2362 Added a 'clean' flag to not resume download. 2023-03-30 00:42:19 -07:00
Nikita Skakun
e17af59261 Add support for resuming downloads
This commit adds the ability to resume interrupted downloads via a new function in the downloader module. The function uses the HTTP Range header to fetch only the portion of a file that has not been downloaded yet.
2023-03-30 00:21:34 -07:00
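The Range-based resumption described above can be sketched with the standard library alone (function names are illustrative, and this is a simplified stand-in for the script's own downloader code):

```python
import os
import urllib.request

def resume_offset_and_headers(path):
    """Return (offset, headers): how many bytes already exist locally, and
    the Range header needed to request only the remainder."""
    offset = os.path.getsize(path) if os.path.exists(path) else 0
    headers = {"Range": f"bytes={offset}-"} if offset else {}
    return offset, headers

def resume_download(url, path, block_size=1024 * 1024):
    """Download `url` to `path`, resuming from a partial file if one exists."""
    offset, headers = resume_offset_and_headers(path)
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=20) as resp:
        # A server that honors Range answers 206 Partial Content; a plain 200
        # means it sent the whole file, so overwrite instead of appending.
        mode = "ab" if offset and resp.status == 206 else "wb"
        with open(path, mode) as out:
            while True:
                chunk = resp.read(block_size)
                if not chunk:
                    break
                out.write(chunk)
```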
oobabooga
131753fcf5 Save the sha256sum of downloaded models 2023-03-29 23:28:16 -03:00
oobabooga
0345e04249 Fix "Unknown argument(s): {'verbose': False}" 2023-03-29 21:17:48 -03:00
oobabooga
37754164eb Move argparse 2023-03-29 20:47:36 -03:00
oobabooga
6403e72062 Merge branch 'main' into nikita-skakun-optimize-download-model 2023-03-29 20:45:33 -03:00
oobabooga
1445ea86f7 Add --output and better metadata for downloading models 2023-03-29 20:26:44 -03:00
Nikita Skakun
aaa218a102 Remove unused import. 2023-03-28 18:32:49 -07:00
Nikita Skakun
ff515ec2fe Improve progress bar visual style
This commit reverts the performance improvements of the previous commit in favor of an improved visual style for the multithreaded progress bars. Each bar now occupies the same width so that the bars align.
2023-03-28 18:29:20 -07:00
Nikita Skakun
4d8e101006 Refactor download process to use multiprocessing
The previous implementation used threads to download files in parallel, which could lead to performance issues due to the Global Interpreter Lock (GIL).
This commit refactors the download process to use multiprocessing instead,
which allows for true parallelism across multiple CPUs.
This results in significantly faster downloads, particularly for large models.
2023-03-28 14:24:23 -07:00
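The process-pool approach described above can be sketched as follows (`download_file` here is a placeholder worker, not the script's actual function, and later commits changed the parallelism again):

```python
import multiprocessing

def download_file(job):
    """Worker run in a separate process. `job` is a (url, destination) pair;
    a real worker would stream the URL to disk. Placeholder body for the sketch."""
    url, destination = job
    return destination

def download_all(jobs, processes=4):
    """Fan the download jobs out across worker processes. Unlike threads,
    separate processes are not serialized by the GIL."""
    with multiprocessing.Pool(processes=processes) as pool:
        return pool.map(download_file, jobs)
```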
oobabooga
91aa5b460e If both .pt and .safetensors are present, download only safetensors 2023-03-28 13:08:38 -03:00
Florian Kusche
19174842b8 Also download Markdown files 2023-03-26 19:41:14 +02:00
oobabooga
bb4cb22453 Download .pt files using download-model.py (for 4-bit models) 2023-03-24 00:49:04 -03:00
oobabooga
164e05daad Download .py files using download-model.py 2023-03-19 20:34:52 -03:00
oobabooga
104293f411 Add LoRA support 2023-03-16 21:31:39 -03:00
oobabooga
1d7e893fa1 Merge pull request #211 from zoidbb/add-tokenizer-to-hf-downloads
download tokenizer when present
2023-03-10 00:46:21 -03:00
oobabooga
875847bf88 Consider tokenizer a type of text 2023-03-10 00:45:28 -03:00
oobabooga
249c268176 Fix the download script for long lists of files on HF 2023-03-10 00:41:10 -03:00
Ber Zoidberg
ec3de0495c download tokenizer when present 2023-03-09 19:08:09 -08:00
oobabooga
7c70e0e2a6 Fix the download script (sort of) 2023-03-02 14:05:21 -03:00