uvos
3ad5451f3b
Add some minimal optimizations for CDNA ( #10498 )
...
* Add some minimal optimizations for CDNA
* ggml_cuda: set launch bounds also for GCN as it helps there too
2024-11-27 17:10:08 +01:00
Diego Devesa
46c69e0e75
ci : faster CUDA toolkit installation method and use ccache ( #10537 )
...
* ci : faster CUDA toolkit installation method and use ccache
* remove fetch-depth
* only pack CUDA runtime on master
2024-11-27 11:03:25 +01:00
Georgi Gerganov
9e2301f4a4
metal : fix group_norm support condition ( #0 )
2024-11-27 11:22:14 +02:00
Georgi Gerganov
fee824a1a1
sync : ggml
2024-11-27 11:10:42 +02:00
Frankie Robertson
9150f8fef9
Do not include arm_neon.h when compiling CUDA code (ggml/1028)
2024-11-27 11:10:27 +02:00
Jeff Bolz
c31ed2abfc
vulkan: define all quant data structures in types.comp ( #10440 )
2024-11-27 08:32:54 +01:00
Jeff Bolz
5b3466bedf
vulkan: Handle GPUs with less shared memory ( #10468 )
...
There have been reports of failure to compile on systems with <= 32KB
of shared memory (e.g. #10037 ). This change makes the large tile size
fall back to a smaller size if necessary, and makes mul_mat_id fall
back to CPU if there's only 16KB of shared memory.
2024-11-27 08:30:27 +01:00
Jeff Bolz
249a7902ec
vulkan: further optimize q5_k mul_mat_vec ( #10479 )
2024-11-27 08:21:59 +01:00
Jeff Bolz
71a64989a5
vulkan: skip integer div/mod in get_offsets for batch_idx==0 ( #10506 )
2024-11-27 08:08:54 +01:00
Jeff Bolz
4a57d362e1
vulkan: optimize Q2_K and Q3_K mul_mat_vec ( #10459 )
2024-11-27 08:00:50 +01:00
Diego Devesa
c9b00a70b0
ci : fix cuda releases ( #10532 )
2024-11-26 22:12:10 +01:00
Shane A
de5097351c
Add OLMo 2 model in docs ( #10530 )
...
* Add link to OLMo 2 model in docs
* Change link to landing page
2024-11-26 21:55:29 +01:00
Diego Devesa
5a349f2809
ci : remove nix workflows ( #10526 )
2024-11-26 21:13:54 +01:00
Diego Devesa
30ec398321
llama : disable warnings for 3rd party sha1 dependency ( #10527 )
2024-11-26 21:01:47 +01:00
Tristan Druyen
be0e350c8b
Fix HIP flag inconsistency & build docs ( #10524 )
...
* Fix inconsistency of HIP flags in cmake & make
* Fix docs regarding GGML_HIP
2024-11-26 19:27:28 +01:00
R0CKSTAR
249cd93da3
mtgpu: Add MUSA_DOCKER_ARCH in Dockerfiles && update cmake and make ( #10516 )
...
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-11-26 17:00:41 +01:00
Jeff Bolz
904109ed0d
vulkan: fix group_norm ( #10496 )
...
Fix bad calculation of the end of the range. Add a backend test that
covers the bad case (taken from stable diffusion).
Fixes https://github.com/leejet/stable-diffusion.cpp/issues/439 .
2024-11-26 16:45:05 +01:00
Xuan Son Nguyen
45abe0f74e
server : replace behave with pytest ( #10416 )
...
* server : replace behave with pytest
* fix test on windows
* misc
* add more tests
* more tests
* styling
* log less, fix embd test
* added all sequential tests
* fix coding style
* fix save slot test
* add parallel completion test
* fix parallel test
* remove feature files
* update test docs
* no cache_prompt for some tests
* add test_cache_vs_nocache_prompt
2024-11-26 16:20:18 +01:00
Neo Zhang Jianyu
0bbd2262a3
restore the condition to build & update package when merging ( #10507 )
...
Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
2024-11-26 21:43:47 +08:00
Georgi Gerganov
ab96610b1e
cmake : enable warnings in llama ( #10474 )
...
* cmake : enable warnings in llama
ggml-ci
* cmake : add llama_get_flags and respect LLAMA_FATAL_WARNINGS
* cmake : get_flags -> ggml_get_flags
* speculative-simple : fix warnings
* cmake : reuse ggml_get_flags
ggml-ci
* speculative-simple : fix compile warning
ggml-ci
2024-11-26 14:18:08 +02:00
Diego Devesa
7db3846a94
ci : publish the docker images created during scheduled runs ( #10515 )
2024-11-26 13:05:20 +01:00
Diego Devesa
c6807b3f28
ci : add ubuntu cuda build, build with one arch on windows ( #10456 )
2024-11-26 13:05:07 +01:00
Charles Xu
25669aa92c
ggml-cpu: cmake add arm64 cpu feature check for macos ( #10487 )
...
* ggml-cpu: cmake add arm64 cpu feature check for macos
* use vmmlaq_s32 for compile option i8mm check
2024-11-26 13:37:05 +02:00
Georgi Gerganov
84e1c33cde
server : fix parallel speculative decoding ( #10513 )
...
ggml-ci
2024-11-26 13:36:40 +02:00
Georgi Gerganov
811872a59d
speculative : simplify the implementation ( #10504 )
...
ggml-ci
2024-11-26 12:29:38 +02:00
Shanshan Shen
9a4b79bcfa
CANN: Improve the Inferencing Performance for Ascend NPU Device ( #10454 )
...
* improve inferencing performance for ascend npu.
Co-authored-by: Frank Mai <thxCode@thxcode0824@gmail.com>
* some modification after review
* some modifications after review
* restore some modifications
* restore some modifications
---------
Co-authored-by: shanshan shen <shanshanshen333@gmail.com>
Co-authored-by: Frank Mai <thxCode@thxcode0824@gmail.com>
2024-11-26 18:08:37 +08:00
Chenguang Li
7066b4cce2
CANN: RoPE and CONCAT operator optimization ( #10488 )
...
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2024-11-26 17:31:05 +08:00
Junil Kim
0eb4e12bee
vulkan: Fix a vulkan-shaders-gen argument parsing error ( #10484 )
...
vulkan-shaders-gen was not parsing the --no-clean argument correctly: the
previous code only handled arguments that take a value, and --no-clean takes
none. This commit correctly parses arguments that don't have values.
2024-11-26 01:47:20 +00:00
Eric Curtin
0cc63754b8
Introduce llama-run ( #10291 )
...
It's like simple-chat but uses smart pointers to avoid manual memory
cleanup, so there are fewer memory leaks. It avoids printing multiple
dots, splits the code into smaller functions, and uses no exception
handling.
Signed-off-by: Eric Curtin <ecurtin@redhat.com>
2024-11-25 22:56:24 +01:00
Diego Devesa
50d5cecbda
ci : build docker images only once daily ( #10503 )
2024-11-25 22:05:39 +01:00
Georgi Gerganov
9fd8c2687f
server : add more information about error ( #10455 )
2024-11-25 22:28:59 +02:00
Georgi Gerganov
47f931c8f9
server : enable cache_prompt by default ( #10501 )
...
ggml-ci
2024-11-25 21:50:07 +02:00
Georgi Gerganov
106964e3d2
metal : enable mat-vec kernels for bs <= 4 ( #10491 )
2024-11-25 21:49:31 +02:00
Shane A
80acb7b430
Rename Olmo1124 to Olmo2 ( #10500 )
2024-11-25 19:36:09 +01:00
Diego Devesa
10bce0450f
llama : accept a list of devices to use to offload a model ( #10497 )
...
* llama : accept a list of devices to use to offload a model
* accept `--dev none` to completely disable offloading
* fix dev list with dl backends
* rename env parameter to LLAMA_ARG_DEVICE for consistency
2024-11-25 19:30:06 +01:00
Johannes Gäßler
1f922254f0
Github: update issue templates [no ci] ( #10489 )
2024-11-25 19:18:37 +01:00
brucepro
a9a678a6b2
Add download chat feature to server chat ( #10481 )
...
* Add download chat feature to server chat
Add a download feature next to the delete chat feature in the server Vue chat interface.
* code style
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
2024-11-25 17:11:55 +01:00
Georgi Gerganov
9ca2e67762
server : add speculative decoding support ( #10455 )
...
* server : add speculative decoding support
ggml-ci
* server : add helper function slot.can_speculate()
ggml-ci
2024-11-25 16:31:38 +02:00
Diego Devesa
5931c1f233
ggml : add support for dynamic loading of backends ( #10469 )
...
* ggml : add support for dynamic loading of backends
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-11-25 15:13:39 +01:00
Georgi Gerganov
f6d12e7df8
tests : fix compile warning
2024-11-25 15:17:32 +02:00
Georgi Gerganov
b756441104
metal : minor code formatting
2024-11-25 15:08:04 +02:00
Neo Zhang Jianyu
5a8987793f
[SYCL] Fix building Win package for oneAPI 2025.0 update ( #10483 )
...
* fix build package for 2025.0
* debug
* debug
* fix
* rm debug
---------
Co-authored-by: arthw <14088817+arthw@users.noreply.github.com>
2024-11-25 17:31:10 +08:00
Georgi Gerganov
d9d54e498d
speculative : refactor and add a simpler example ( #10362 )
...
* speculative : refactor and add a simpler example
ggml-ci
* speculative : clean-up and add comments and TODOs [no ci]
* speculative : manage context in common_speculative
ggml-ci
* speculative : simplify
ggml-ci
* speculative : simplify (cont)
ggml-ci
* speculative : add --draft-min CLI arg
* speculative : minor fixup
* make : build fixes
* speculative : do not redraft previous drafts
ggml-ci
* speculative : fix the draft sampling
ggml-ci
* speculative : fix compile warning
* common : refactor args
ggml-ci
* common : change defaults [no ci]
* common : final touches
ggml-ci
2024-11-25 09:58:41 +02:00
Georgi Gerganov
cce5a90075
flake.lock: Update ( #10470 )
...
Flake lock file updates:
• Updated input 'nixpkgs':
'github:NixOS/nixpkgs/5e4fbfb6b3de1aa2872b76d49fafc942626e2add?narHash=sha256-OZiZ3m8SCMfh3B6bfGC/Bm4x3qc1m2SVEAlkV6iY7Yg%3D' (2024-11-15)
→ 'github:NixOS/nixpkgs/23e89b7da85c3640bbc2173fe04f4bd114342367?narHash=sha256-y/MEyuJ5oBWrWAic/14LaIr/u5E0wRVzyYsouYY3W6w%3D' (2024-11-19)
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2024-11-24 08:03:25 -08:00
Diego Devesa
dc39012cba
llama : fix op mul check with command-r-plus ( #10476 )
2024-11-24 16:10:26 +01:00
Gabe Goodhart
9336db462c
convert : XLMRoberta Type Vocab Size ( #10458 )
...
This matches the key in common bert-based embedding models and may have a
value other than 1 in it.
Branch: XLMRobertaTypeVocabSize
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2024-11-24 11:02:34 +02:00
momonga
96fa2c5e2d
fix gguf-py: Conversion error when multiple licenses are configured ( #9807 )
...
* fix general.license list to str
* fix join license list
---------
Co-authored-by: momonga <115213907+mmnga@users.noreply.github.com>
2024-11-24 01:09:22 +01:00
Diego Devesa
55ed008b2d
ggml : do not use ARM features not included in the build ( #10457 )
2024-11-23 14:41:12 +01:00
蕭澧邦
6dfcfef078
ci: Update oneAPI runtime dll packaging ( #10428 )
...
These are the minimum runtime DLL dependencies for oneAPI 2025.0.
2024-11-22 10:44:08 +01:00
Johannes Gäßler
599b3e0cd4
GitHub: ask for more info in issue templates ( #10426 )
...
* GitHub: ask for more info in issues [no ci]
* refactor issue templates to be component-specific
* more understandable issue description
* add dropdown for llama.cpp module
2024-11-22 08:32:40 +01:00