Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-26 14:20:31 +01:00)
Latest commit `17eb6aa8a9`:

* Add Vulkan to CMake pkg
* Add Sycl to CMake pkg
* Add OpenMP to CMake pkg
* Split generated shader file into separate translation unit
* Add CMake target for Vulkan shaders
* Update README.md
* Add make target for Vulkan shaders
* Use pkg-config to locate vulkan library
* Add vulkan SDK dep to ubuntu-22-cmake-vulkan workflow
* Clean up tabs
* Move sudo to apt-key invocation
* Forward GGML_EXTRA_LIBS to CMake config pkg
* Update vulkan obj file paths
* Add shaderc to nix pkg
* Add python3 to Vulkan nix build
* Link against ggml in cmake pkg
* Remove Python dependency from Vulkan build
* code review changes
* Remove trailing newline
* Add cflags from pkg-config to fix w64devkit build
* Update README.md
* Remove trailing whitespace
* Update README.md
* Remove trailing whitespace
* Fix doc heading
* Make glslc required Vulkan component
* remove clblast from nix pkg
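For context, the Vulkan-related items in this commit rely on pkg-config resolving the Vulkan loader and on `glslc` being present as a required Vulkan component. The following is a minimal pre-build check sketching that setup; it is not a script from the repository, and the CMake option name is an assumption that may differ between checkouts.

```sh
# Illustrative pre-build check, not part of the repository.

# Confirm pkg-config can resolve the Vulkan loader; the build uses the
# reported cflags/libs instead of hard-coded paths.
pkg-config --cflags --libs vulkan   # typically prints "-lvulkan" on Linux

# glslc (the shader compiler) is now a required Vulkan component.
which glslc || echo "glslc not found: install the Vulkan SDK or shaderc"

# Configure with the Vulkan backend enabled. The flag name is assumed here;
# check the README of your checkout for the exact option.
cmake -B build -DGGML_VULKAN=ON
```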
Files in this directory:

* nix
* cloud-v-pipeline
* full-cuda.Dockerfile
* full-rocm.Dockerfile
* full.Dockerfile
* llama-cli-cuda.Dockerfile
* llama-cli-intel.Dockerfile
* llama-cli-rocm.Dockerfile
* llama-cli-vulkan.Dockerfile
* llama-cli.Dockerfile
* llama-cpp-cuda.srpm.spec
* llama-cpp.srpm.spec
* llama-server-cuda.Dockerfile
* llama-server-intel.Dockerfile
* llama-server-rocm.Dockerfile
* llama-server-vulkan.Dockerfile
* llama-server.Dockerfile
* tools.sh
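As an illustration of how the Dockerfiles listed above are typically consumed, a server image with the Vulkan backend could be built from the repository root. The directory path and image tag below are assumptions for the sketch, not taken from this listing.

```sh
# Hypothetical invocation; assumes this listing is the repository's .devops/
# directory and that the repository root is used as the Docker build context.
docker build -t local/llama.cpp:server-vulkan \
    -f .devops/llama-server-vulkan.Dockerfile .
```

The same pattern applies to the CUDA, ROCm, and Intel variants by swapping in the corresponding Dockerfile name.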