Commit Graph

3786 Commits

Author SHA1 Message Date
oobabooga 3cfcab63a5 Update an installation message 2024-03-04 20:37:44 -08:00
oobabooga 907bda0d56 Move update_wizard_wsl.sh to update_wizard_wsl.bat 2024-03-04 19:57:49 -08:00
oobabooga f697cb4609 Move update_wizard_windows.sh to update_wizard_windows.bat (oops) 2024-03-04 19:26:24 -08:00
oobabooga 2d74660733 Don't git pull on "Install/update extensions requirements" 2024-03-04 12:37:10 -08:00
oobabooga fbe83854ca Minor message change 2024-03-04 11:10:37 -08:00
oobabooga 90ab022856 Minor message change 2024-03-04 10:54:16 -08:00
oobabooga 97dc3602fc Create an update wizard (#5623) 2024-03-04 15:52:24 -03:00
oobabooga 6adf222599 One-click installer: change an info message 2024-03-04 08:20:04 -08:00
oobabooga 4bb79c57ac One-click installer: change an info message 2024-03-04 08:11:55 -08:00
oobabooga 74564fe8d0 One-click installer: delete the Miniconda installer after completion 2024-03-04 08:11:03 -08:00
oobabooga dc2dd5b9d8 One-click installer: add an info message before git pull 2024-03-04 08:00:39 -08:00
oobabooga 527ba98105 Do not install extensions requirements by default (#5621) 2024-03-04 04:46:39 -03:00
oobabooga fa4ce0eee8 One-click installer: minor change to CMD_FLAGS.txt in CPU mode 2024-03-03 17:42:59 -08:00
oobabooga 8bd4960d05 Update PyTorch to 2.2 (also update flash-attn to 2.5.6) (#5618) 2024-03-03 19:40:32 -03:00
oobabooga 70047a5c57 Bump bitsandbytes to 0.42.0 on Windows 2024-03-03 13:19:27 -08:00
oobabooga 24e86bb21b Bump llama-cpp-python to 0.2.55 2024-03-03 12:14:48 -08:00
oobabooga 314e42fd98 Fix transformers requirement 2024-03-03 10:49:28 -08:00
oobabooga 71b1617c1b Remove bitsandbytes from incompatible requirements.txt files 2024-03-03 08:24:54 -08:00
kalomaze cfb25c9b3f Cubic sampling w/ curve param (#5551) 2024-03-03 13:22:21 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
jeffbiocode 3168644152 Training: Update llama2-chat-format.json (#5593) 2024-03-03 12:42:14 -03:00
oobabooga 71dc5b4dee Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-02-28 19:59:20 -08:00
oobabooga 09b13acfb2 Perplexity evaluation: print to terminal after calculation is finished 2024-02-28 19:58:21 -08:00
dependabot[bot] dfdf6eb5b4 Bump hqq from 0.1.3 to 0.1.3.post1 (#5582) 2024-02-26 20:51:39 -03:00
oobabooga 332957ffec Bump llama-cpp-python to 0.2.52 2024-02-26 15:05:53 -08:00
oobabooga b64770805b Merge remote-tracking branch 'refs/remotes/origin/dev' into dev 2024-02-26 08:51:31 -08:00
oobabooga 830168d3d4 Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383)" 2024-02-26 05:54:33 -08:00
    This reverts commit 0ced78fdfa.
Bartowski 21acf504ce Bump transformers to 4.38 for gemma compatibility (#5575) 2024-02-25 20:15:13 -03:00
oobabooga 4164e29416 Block the "To create a public link, set share=True" gradio message 2024-02-25 15:06:08 -08:00
oobabooga d34126255d Fix loading extensions with "-" in the name (closes #5557) 2024-02-25 09:24:52 -08:00
Lounger 0f68c6fb5b Big picture fixes (#5565) 2024-02-25 14:10:16 -03:00
jeffbiocode 45c4cd01c5 Add llama 2 chat format for lora training (#5553) 2024-02-25 02:36:36 -03:00
Devin Roark e0fc808980 fix: ngrok logging does not use the shared logger module (#5570) 2024-02-25 02:35:59 -03:00
oobabooga 32ee5504ed Remove -k from curl command to download miniconda (#5535) 2024-02-25 02:35:23 -03:00
oobabooga c07dc56736 Bump llama-cpp-python to 0.2.50 2024-02-24 21:34:11 -08:00
oobabooga 98580cad8e Bump exllamav2 to 0.0.14 2024-02-24 18:35:42 -08:00
oobabooga 527f2652af Bump llama-cpp-python to 0.2.47 2024-02-22 19:48:49 -08:00
oobabooga 3f42e3292a Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)" 2024-02-22 19:48:04 -08:00
    This reverts commit d04fef6a07.
oobabooga 10aedc329f Logging: more readable messages when renaming chat histories 2024-02-22 07:57:06 -08:00
oobabooga faf3bf2503 Perplexity evaluation: make UI events more robust (attempt) 2024-02-22 07:13:22 -08:00
oobabooga ac5a7a26ea Perplexity evaluation: add some informative error messages 2024-02-21 20:20:52 -08:00
oobabooga 59032140b5 Fix CFG with llamacpp_HF (2nd attempt) 2024-02-19 18:35:42 -08:00
oobabooga c203c57c18 Fix CFG with llamacpp_HF 2024-02-19 18:09:49 -08:00
dependabot[bot] 5f7dbf454a Update optimum requirement from ==1.16.* to ==1.17.* (#5548) 2024-02-19 19:15:21 -03:00
dependabot[bot] d04fef6a07 Bump autoawq from 0.1.8 to 0.2.2 (#5547) 2024-02-19 19:14:55 -03:00
dependabot[bot] ed6ff49431 Update accelerate requirement from ==0.25.* to ==0.27.* (#5546) 2024-02-19 19:14:04 -03:00
Kevin Pham 10df23efb7 Remove message.content from openai streaming API (#5503) 2024-02-19 18:50:27 -03:00
oobabooga 0b2279d031 Bump llama-cpp-python to 0.2.44 2024-02-19 13:42:31 -08:00
oobabooga ae05d9830f Replace {{char}}, {{user}} in the chat template itself 2024-02-18 19:57:54 -08:00
oobabooga 717c3494e8 Minor width change after daa140447e 2024-02-18 15:23:45 -08:00
oobabooga 1f27bef71b Move chat UI elements to the right on desktop (#5538) 2024-02-18 14:32:05 -03:00