Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-25 22:08:46 +01:00)
Enable Vulkan MacOS CI

commit 22f83f0c38
parent bb9dcd560a
@@ -255,11 +255,11 @@ effectiveStdenv.mkDerivation (
       # Configurations we don't want even the CI to evaluate. Results in the
       # "unsupported platform" messages. This is mostly a no-op, because
       # cudaPackages would've refused to evaluate anyway.
-      badPlatforms = optionals (useCuda || useOpenCL || useVulkan) lib.platforms.darwin;
+      badPlatforms = optionals (useCuda || useOpenCL) lib.platforms.darwin;
 
       # Configurations that are known to result in build failures. Can be
       # overridden by importing Nixpkgs with `allowBroken = true`.
-      broken = (useMetalKit && !effectiveStdenv.isDarwin) || (useVulkan && effectiveStdenv.isDarwin);
+      broken = (useMetalKit && !effectiveStdenv.isDarwin);
 
       description = "Inference of LLaMA model in pure C/C++${descriptionSuffix}";
       homepage = "https://github.com/ggerganov/llama.cpp/";
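Note on the `badPlatforms` change: `optionals` here is `lib.optionals`, which returns its list argument when the condition holds and an empty list otherwise, so dropping `useVulkan` from the condition stops darwin from being listed as a bad platform for Vulkan builds. A minimal sketch of that semantics, with made-up flag values:

  let
    lib = (import <nixpkgs> { }).lib;
    # Hypothetical flag values, for illustration only:
    useCuda = true;
    useOpenCL = false;
  in
    # The condition is true, so this evaluates to lib.platforms.darwin
    # (a list of darwin platform strings); with both flags false it
    # would evaluate to [ ].
    lib.optionals (useCuda || useOpenCL) lib.platforms.darwin

Evaluating this with `nix-instantiate --eval --strict` prints the resulting list.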
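Note on the `broken` change: the comment in the hunk refers to the standard Nixpkgs escape hatch for derivations marked broken. A minimal sketch of that import (the `llama-cpp` attribute name is an assumption about the consumer's package set, not something this diff defines):

  let
    # config.allowBroken disables the meta.broken evaluation check,
    # letting a derivation marked broken (e.g. Vulkan on darwin before
    # this commit) still be built.
    pkgs = import <nixpkgs> { config.allowBroken = true; };
  in
    pkgs.llama-cpp  # assumed attribute name; adjust for your package set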