llama.cpp/ggml
Latest commit b9e02e8184 by Diego Devesa, 2024-10-30 14:51:21 +01:00:

ggml : fix memory leaks when loading invalid gguf files (#10094)
* ggml : fix gguf string leak when reading kv pairs fails
* ggml : avoid crashing with GGML_ABORT when the KV has an invalid type
* ggml : avoid crashing on failed memory allocations when loading a gguf file
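The three fixes share one pattern: every early return on a malformed or truncated file must release what the loader has already allocated, and allocation failures must be reported to the caller instead of aborting the process. Below is a minimal C sketch of that pattern; `gguf_str` and `gguf_read_str` are hypothetical names for illustration, not the actual ggml internals.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical gguf-style string: length-prefixed, not NUL-terminated.
 * The struct and function names here are illustrative only. */
struct gguf_str {
    uint64_t n;
    char    *data;
};

/* Read one string from the file. Returns true on success; on any
 * failure it frees what it allocated, so a truncated or corrupt file
 * cannot leak the buffer. */
static bool gguf_read_str(FILE *f, struct gguf_str *s) {
    s->n    = 0;
    s->data = NULL;

    if (fread(&s->n, sizeof(s->n), 1, f) != 1) {
        return false; /* truncated length prefix */
    }
    if (s->n > SIZE_MAX / 2) {
        return false; /* reject absurd lengths from a corrupt file */
    }

    /* Check the allocation instead of aborting: an invalid file may
     * declare a huge length, and the loader should fail cleanly. */
    s->data = calloc(s->n + 1, 1);
    if (s->data == NULL) {
        return false;
    }

    if (fread(s->data, 1, s->n, f) != s->n) {
        free(s->data);   /* the leak fix: release the partial string */
        s->data = NULL;
        return false;
    }
    return true;
}
```

In the real loader, `gguf_init_from_file` returns NULL on failure; per the commit message, the fix ensures the kv-pair strings read so far are freed on that error path rather than leaked.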
Name            Last commit                                                        Date
cmake           llama : reorganize source code + improve CMake (#8006)            2024-06-26 18:33:02 +03:00
include         llama : refactor model loader with backend registry (#10026)      2024-10-30 02:01:23 +01:00
src             ggml : fix memory leaks when loading invalid gguf files (#10094)  2024-10-30 14:51:21 +01:00
.gitignore      vulkan : cmake integration (#8119)                                 2024-07-13 18:12:39 +02:00
CMakeLists.txt  add amx kernel for gemm (#8998)                                    2024-10-18 13:34:36 +08:00