Commit Graph

4122 Commits

Author SHA1 Message Date
oobabooga
096272f49e Update README 2025-01-17 09:47:45 -08:00
oobabooga
878f378e9f Merge pull request #6670 from oobabooga/dev (Merge dev branch) 2025-01-16 10:22:49 -03:00
oobabooga
0258a6f877 Fix the Google Colab notebook 2025-01-16 05:21:18 -08:00
oobabooga
fe96678692 Update some comments in the requirements 2025-01-14 19:28:48 -08:00
oobabooga
ddb0f71741 Merge pull request #6666 from oobabooga/dev (Merge dev branch) 2025-01-14 22:24:39 -03:00
oobabooga
2344366c9b Remove a debug message 2025-01-14 17:23:44 -08:00
oobabooga
7e80266ae9 Merge pull request #6665 from oobabooga/dev (Merge dev branch) 2025-01-14 22:01:08 -03:00
oobabooga
5d25739767 Make the update wizards nice 2025-01-14 16:59:36 -08:00
oobabooga
1ef748fb20 Lint 2025-01-14 16:44:15 -08:00
oobabooga
f843cb475b UI: update a help message 2025-01-14 08:12:51 -08:00
oobabooga
c832953ff7 UI: Activate auto_max_new_tokens by default 2025-01-14 05:59:55 -08:00
Underscore
53b838d6c5 HTML: Fix quote pair RegEx matching for all quote types (#6661) 2025-01-13 18:01:50 -03:00
oobabooga
c85e5e58d0 UI: move the new morphdom code to a .js file 2025-01-13 06:20:42 -08:00
oobabooga
facb4155d4 Fix morphdom leaving ghost elements behind 2025-01-11 20:57:28 -08:00
Lounger
ed16374ece Fix the gallery extension (#6656) 2025-01-11 23:35:22 -03:00
oobabooga
a0492ce325 Optimize syntax highlighting during chat streaming (#6655) 2025-01-11 21:14:10 -03:00
mamei16
f1797f4323 Unescape backslashes in html_output (#6648) 2025-01-11 18:39:44 -03:00
oobabooga
1b9121e5b8 Add a "refresh" button below the last message, add a missing file 2025-01-11 12:42:25 -08:00
oobabooga
a5d64b586d Add a "copy" button below each message (#6654) 2025-01-11 16:59:21 -03:00
oobabooga
58342740a5 Bump flash-attn to 2.7.3 2025-01-11 07:59:49 -08:00
oobabooga
3a722a36c8 Use morphdom to make chat streaming 1902381098231% faster (#6653) 2025-01-11 12:55:19 -03:00
oobabooga
02db4b0d06 Bump transformers to 4.48 2025-01-10 15:05:08 -08:00
oobabooga
d2f6c0f65f Update README 2025-01-10 13:25:40 -08:00
oobabooga
c393f7650d Update settings-template.yaml, organize modules/shared.py 2025-01-10 13:22:18 -08:00
oobabooga
83c426e96b Organize internals (#6646) 2025-01-10 18:04:32 -03:00
oobabooga
17aa97248f Installer: make the hashsum verification more robust on Windows 2025-01-10 07:22:25 -08:00
oobabooga
7fe46764fb Improve the --help message about --tensorcores as well 2025-01-10 07:07:41 -08:00
oobabooga
da6d868f58 Remove old deprecated flags (~6 months or more) 2025-01-09 16:11:46 -08:00
oobabooga
15bfe36619 Installer: update miniconda to 24.11.1 (experimental) 2025-01-09 15:58:14 -08:00
oobabooga
e6eda6a3bb Merge pull request #6645 from oobabooga/dev (Merge dev branch) 2025-01-09 18:46:28 -03:00
oobabooga
f3c0f964a2 Lint 2025-01-09 13:18:23 -08:00
oobabooga
0e94d7075e UI: minor style fix on Windows 2025-01-09 13:12:30 -08:00
oobabooga
3020f2e5ec UI: improve the info message about --tensorcores 2025-01-09 12:44:03 -08:00
oobabooga
c08d87b78d Make the huggingface loader more readable 2025-01-09 12:23:38 -08:00
oobabooga
03b4067f31 Installer: ask 1 question for NVIDIA users instead of 2 2025-01-09 12:03:49 -08:00
BPplays
619265b32c add ipv6 support to the API (#6559) 2025-01-09 10:23:44 -03:00
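IPv6 support for the API (#6559) implies the server can listen on IPv6 addresses such as `::`. A minimal sketch of choosing the socket family from a configured host string, under the assumption of a plain-socket setup (`server_family` is a hypothetical helper, not the project's actual code):

```python
import ipaddress
import socket

def server_family(host: str) -> int:
    """Return the socket family for a listen address (illustrative helper)."""
    try:
        version = ipaddress.ip_address(host).version
    except ValueError:
        return socket.AF_INET  # hostname given: default to IPv4 in this sketch
    return socket.AF_INET6 if version == 6 else socket.AF_INET

print(server_family("::") == socket.AF_INET6)        # IPv6 wildcard → True
print(server_family("127.0.0.1") == socket.AF_INET)  # IPv4 loopback → True
```

The dual-stack behavior of binding `AF_INET6` sockets to `::` depends on the OS `IPV6_V6ONLY` setting, which is why real servers often bind both families explicitly.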
oobabooga
5c89068168 UI: add an info message for the new Static KV cache option 2025-01-08 17:36:30 -08:00
oobabooga
4ffc9ffc7a UI: fix a list style 2025-01-08 17:24:38 -08:00
oobabooga
e6796c3859 Bump llama-cpp-python to 0.3.6, add macOS 14 and 15 wheels 2025-01-08 17:24:21 -08:00
nclok1405
b9e2ded6d4 Added UnicodeDecodeError workaround for modules/llamacpp_model.py (#6040) (Co-authored-by: oobabooga) 2025-01-08 21:17:31 -03:00
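The log does not show how #6040 works around UnicodeDecodeError; a common pattern when a token stream can split multi-byte UTF-8 sequences is lossy decoding, sketched here as an assumption rather than the actual fix:

```python
# Strict decoding raises UnicodeDecodeError on invalid UTF-8 bytes...
raw = b"token \xff\xfe stream"
try:
    raw.decode("utf-8")
except UnicodeDecodeError:
    pass  # strict mode fails on the two invalid bytes

# ...while errors="replace" substitutes U+FFFD and keeps going.
print(raw.decode("utf-8", errors="replace"))  # → 'token �� stream'
```

`errors="ignore"` is the other common choice; it silently drops the bad bytes instead of marking them.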
oobabooga
91a8a87887 Remove obsolete code 2025-01-08 15:07:21 -08:00
oobabooga
ad118056b8 Update README 2025-01-08 14:29:46 -08:00
oobabooga
7157257c3f Remove the AutoGPTQ loader (#6641) 2025-01-08 19:28:56 -03:00
Jack Cloudman
d3adcbf64b Add --exclude-pattern flag to download-model.py script (#6542) 2025-01-08 17:30:21 -03:00
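The `--exclude-pattern` flag added in #6542 presumably skips repository files whose names match a pattern before downloading. A minimal sketch of that kind of filtering, assuming glob-style patterns (`filter_links` is a hypothetical helper, not the script's code):

```python
from fnmatch import fnmatch

def filter_links(filenames, exclude_patterns):
    """Keep only files that match none of the exclude globs (illustrative)."""
    return [f for f in filenames
            if not any(fnmatch(f, pat) for pat in exclude_patterns)]

files = ["model.safetensors", "pytorch_model.bin", "tokenizer.json"]
print(filter_links(files, ["*.bin"]))  # → ['model.safetensors', 'tokenizer.json']
```

Whether the real flag uses glob or regular-expression matching is not visible in the log.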
dependabot[bot]
1f86722977 Update safetensors requirement from ==0.4.* to ==0.5.* (#6634) 2025-01-08 16:56:55 -03:00
FP HAM
03a0f236a4 Training_PRO fix: add if 'quantization_config' in shared.model.config.to_dict() 2025-01-08 16:54:09 -03:00
oobabooga
c0f600c887 Add a --torch-compile flag for transformers 2025-01-05 05:47:00 -08:00
oobabooga
11af199aff Add a "Static KV cache" option for transformers 2025-01-04 17:52:57 -08:00
oobabooga
3967520e71 Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF) 2025-01-04 16:25:06 -08:00
oobabooga
d56b500568 UI: add padding to file saving dialog 2025-01-04 16:22:40 -08:00