llama.cpp/.github

Latest commit 66ee4f297c by Rémy Oudompheng
vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)
* vulkan: initial support for IQ3_S

* vulkan: initial support for IQ3_XXS

* vulkan: initial support for IQ2_XXS

* vulkan: initial support for IQ2_XS

* vulkan: optimize Q3_K by removing branches

* vulkan: implement dequantize variants for coopmat2

* vulkan: initial support for IQ2_S

* vulkan: vertically realign code

* port failing dequant callbacks from mul_mm

* Fix array length mismatches

* vulkan: avoid using workgroup size before it is referenced

* tests: increase timeout for Vulkan llvmpipe backend

---------

Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-01-29 18:29:39 +01:00
| Name                     | Last commit message                                                       | Date                      |
|--------------------------|---------------------------------------------------------------------------|---------------------------|
| ISSUE_TEMPLATE           | github : add cmd line field to bug report (#11090)                         | 2025-01-06 16:34:49 +01:00 |
| workflows                | vulkan: implement initial support for IQ2 and IQ3 quantizations (#11360)   | 2025-01-29 18:29:39 +01:00 |
| labeler.yml              | ci : add ubuntu cuda build, build with one arch on windows (#10456)        | 2024-11-26 13:05:07 +01:00 |
| pull_request_template.md | github : minify link [no ci] (revert)                                      | 2024-12-03 11:21:43 +02:00 |