Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-02-09 01:38:14 +01:00
Author: 0cc4m
* Refactor shaders: extract GLSL code from ggml_vk_generate_shaders.py into the vulkan-shaders directory
* Improve debug log code
* Add memory debug output option
* Fix flake8
* Fix unnecessarily high llama-3 VRAM use
8.4 MiB