Commit Graph

3817 Commits

Author            SHA1        Message  Date
jeffbiocode       3168644152  Training: Update llama2-chat-format.json (#5593)  2024-03-03 12:42:14 -03:00
oobabooga         71dc5b4dee  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2024-02-28 19:59:20 -08:00
oobabooga         09b13acfb2  Perplexity evaluation: print to terminal after calculation is finished  2024-02-28 19:58:21 -08:00
dependabot[bot]   dfdf6eb5b4  Bump hqq from 0.1.3 to 0.1.3.post1 (#5582)  2024-02-26 20:51:39 -03:00
oobabooga         332957ffec  Bump llama-cpp-python to 0.2.52  2024-02-26 15:05:53 -08:00
oobabooga         b64770805b  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2024-02-26 08:51:31 -08:00
oobabooga         830168d3d4  Revert "Replace hashlib.sha256 with hashlib.file_digest so we don't need to load entire files into ram before hashing them. (#4383)"  2024-02-26 05:54:33 -08:00  (sketch below)
    This reverts commit 0ced78fdfa.
Bartowski         21acf504ce  Bump transformers to 4.38 for gemma compatibility (#5575)  2024-02-25 20:15:13 -03:00
oobabooga         4164e29416  Block the "To create a public link, set share=True" gradio message  2024-02-25 15:06:08 -08:00  (sketch below)
oobabooga         d34126255d  Fix loading extensions with "-" in the name (closes #5557)  2024-02-25 09:24:52 -08:00
Lounger           0f68c6fb5b  Big picture fixes (#5565)  2024-02-25 14:10:16 -03:00
jeffbiocode       45c4cd01c5  Add llama 2 chat format for lora training (#5553)  2024-02-25 02:36:36 -03:00
Devin Roark       e0fc808980  fix: ngrok logging does not use the shared logger module (#5570)  2024-02-25 02:35:59 -03:00
oobabooga         32ee5504ed  Remove -k from curl command to download miniconda (#5535)  2024-02-25 02:35:23 -03:00
oobabooga         c07dc56736  Bump llama-cpp-python to 0.2.50  2024-02-24 21:34:11 -08:00
oobabooga         98580cad8e  Bump exllamav2 to 0.0.14  2024-02-24 18:35:42 -08:00
oobabooga         527f2652af  Bump llama-cpp-python to 0.2.47  2024-02-22 19:48:49 -08:00
oobabooga         3f42e3292a  Revert "Bump autoawq from 0.1.8 to 0.2.2 (#5547)"  2024-02-22 19:48:04 -08:00
    This reverts commit d04fef6a07.
oobabooga         10aedc329f  Logging: more readable messages when renaming chat histories  2024-02-22 07:57:06 -08:00
oobabooga         faf3bf2503  Perplexity evaluation: make UI events more robust (attempt)  2024-02-22 07:13:22 -08:00
oobabooga         ac5a7a26ea  Perplexity evaluation: add some informative error messages  2024-02-21 20:20:52 -08:00
oobabooga         59032140b5  Fix CFG with llamacpp_HF (2nd attempt)  2024-02-19 18:35:42 -08:00
oobabooga         c203c57c18  Fix CFG with llamacpp_HF  2024-02-19 18:09:49 -08:00
dependabot[bot]   5f7dbf454a  Update optimum requirement from ==1.16.* to ==1.17.* (#5548)  2024-02-19 19:15:21 -03:00
dependabot[bot]   d04fef6a07  Bump autoawq from 0.1.8 to 0.2.2 (#5547)  2024-02-19 19:14:55 -03:00
dependabot[bot]   ed6ff49431  Update accelerate requirement from ==0.25.* to ==0.27.* (#5546)  2024-02-19 19:14:04 -03:00
Kevin Pham        10df23efb7  Remove message.content from openai streaming API (#5503)  2024-02-19 18:50:27 -03:00
oobabooga         0b2279d031  Bump llama-cpp-python to 0.2.44  2024-02-19 13:42:31 -08:00
oobabooga         ae05d9830f  Replace {{char}}, {{user}} in the chat template itself  2024-02-18 19:57:54 -08:00  (sketch below)
oobabooga         717c3494e8  Minor width change after daa140447e  2024-02-18 15:23:45 -08:00
oobabooga         1f27bef71b  Move chat UI elements to the right on desktop (#5538)  2024-02-18 14:32:05 -03:00
oobabooga         d8064c00e8  UI: hide chat scrollbar on desktop when not hovered  2024-02-17 20:47:14 -08:00
oobabooga         36c29084bb  UI: fix instruct style background for multiline inputs  2024-02-17 20:09:47 -08:00
oobabooga         904867a139  UI: fix scroll down after sending a multiline message  2024-02-17 19:27:13 -08:00
oobabooga         d6bd71db7f  ExLlamaV2: fix loading when autosplit is not set  2024-02-17 12:54:37 -08:00
oobabooga         af0bbf5b13  Lint  2024-02-17 09:01:04 -08:00
fschuh            fa1019e8fe  Removed extra spaces from Mistral instruction template that were causing Mistral to misbehave (#5517)  2024-02-16 21:40:51 -03:00
oobabooga         c375c753d6  Bump bitsandbytes to 0.42 (Linux only)  2024-02-16 10:47:57 -08:00
oobabooga         a6730f88f7  Add --autosplit flag for ExLlamaV2 (#5524)  2024-02-16 15:26:10 -03:00
oobabooga         4039999be5  Autodetect llamacpp_HF loader when tokenizer exists  2024-02-16 09:29:26 -08:00  (sketch below)
oobabooga         76d28eaa9e  Add a menu for customizing the instruction template for the model (#5521)  2024-02-16 14:21:17 -03:00
oobabooga         0e1d8d5601  Instruction template: make "Send to default/notebook" work without a tokenizer  2024-02-16 08:01:07 -08:00
oobabooga         f465b7b486  Downloader: start one session per file (#5520)  2024-02-16 12:55:27 -03:00  (sketch below)
oobabooga         44018c2f69  Add a "llamacpp_HF creator" menu (#5519)  2024-02-16 12:43:24 -03:00
oobabooga         b2b74c83a6  Fix Qwen1.5 in llamacpp_HF  2024-02-15 19:04:19 -08:00
oobabooga         080f7132c0  Revert gradio to 3.50.2 (#5513)  2024-02-15 20:40:23 -03:00
oobabooga         ea0e1feee7  Bump llama-cpp-python to 0.2.43  2024-02-14 21:58:24 -08:00
oobabooga         549f106879  Bump ExLlamaV2 to v0.0.13.2  2024-02-14 21:57:48 -08:00
oobabooga         7123ac3f77  Remove "Maximum UI updates/second" parameter (#5507)  2024-02-14 23:34:30 -03:00
DominikKowalczyk  33c4ce0720  Bump gradio to 4.19 (#5419)  2024-02-14 23:28:26 -03:00
    Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
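
Sketch for commit 830168d3d4: the reverted change had swapped manual chunked hashing for hashlib.file_digest, which streams a file into a digest without loading it fully into RAM but requires Python 3.11+ (presumably the reason for the revert). A minimal illustration of both approaches, with hypothetical function names:

```python
import hashlib

def sha256_chunked(path, chunk_size=1024 * 1024):
    # Portable approach: feed the file to the digest in fixed-size chunks
    # so large model files are never held fully in memory.
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def sha256_file_digest(path):
    # Same result via hashlib.file_digest, available on Python 3.11+ only.
    with open(path, 'rb') as f:
        return hashlib.file_digest(f, 'sha256').hexdigest()
```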
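
Sketch for commit 4164e29416: one generic way to block a known console message such as gradio's share=True hint is to wrap sys.stdout in a filtering writer. This is an illustrative pattern, not necessarily how the repository implements it:

```python
import sys

class FilteredStdout:
    # Illustrative wrapper: drop any write containing the blocked phrase,
    # forward everything else to the real stream.
    def __init__(self, stream, blocked_phrase):
        self.stream = stream
        self.blocked_phrase = blocked_phrase

    def write(self, text):
        if self.blocked_phrase not in text:
            self.stream.write(text)

    def flush(self):
        self.stream.flush()

sys.stdout = FilteredStdout(sys.stdout, "To create a public link, set share=True")
```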
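
Sketch for commit ae05d9830f: replacing {{char}} and {{user}} "in the chat template itself" amounts to expanding the placeholders in the template string before it is rendered. A minimal illustration with a hypothetical helper:

```python
def fill_placeholders(template: str, user: str, char: str) -> str:
    # Hypothetical helper: expand {{user}} and {{char}} directly in the
    # chat template string, before the template is rendered.
    return template.replace('{{user}}', user).replace('{{char}}', char)

# fill_placeholders("{{user}}: hi\n{{char}}:", "You", "Assistant")
# -> "You: hi\nAssistant:"
```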
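
Sketch for commit 4039999be5: the autodetection can be pictured as a directory heuristic, assuming that a GGUF file accompanied by a Hugging Face tokenizer file implies the llamacpp_HF loader. Names and file checks here are illustrative, not the project's actual logic:

```python
from pathlib import Path

def autodetect_loader(model_dir: str) -> str:
    # Illustrative heuristic: GGUF + HF tokenizer files -> llamacpp_HF;
    # a bare GGUF -> llama.cpp; anything else -> Transformers.
    p = Path(model_dir)
    has_gguf = any(p.glob('*.gguf'))
    has_tokenizer = (p / 'tokenizer.json').exists() or (p / 'tokenizer.model').exists()
    if has_gguf and has_tokenizer:
        return 'llamacpp_HF'
    return 'llama.cpp' if has_gguf else 'Transformers'
```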
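
Sketch for commit f465b7b486: "one session per file" is a common pattern with the requests library, giving each download its own requests.Session so connection state is not shared across parallel workers. A hedged sketch with illustrative names:

```python
import requests

def download_file(url: str, dest: str, chunk_size: int = 1024 * 1024) -> None:
    # Illustrative: a fresh Session per file isolates cookies and
    # connection pools when files are fetched in parallel threads.
    with requests.Session() as session:
        with session.get(url, stream=True, timeout=30) as response:
            response.raise_for_status()
            with open(dest, 'wb') as f:
                for chunk in response.iter_content(chunk_size=chunk_size):
                    f.write(chunk)
```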