llama.cpp/ggml/include

Latest commit f010b77a37 by Diego Devesa (2024-10-17 02:46:58 +02:00):
vulkan : add backend registry / device interfaces (#9721)
* vulkan : add backend registry / device interfaces
* llama : print devices used on model load
File            Last commit                                                                Date
ggml-alloc.h    ggml : fix typo in example usage ggml_gallocr_new (ggml/984)               2024-10-04 18:50:05 +03:00
ggml-backend.h  ggml : add backend registry / device interfaces to BLAS backend (#9752)   2024-10-07 21:55:08 +02:00
ggml-blas.h     ggml : add backend registry / device interfaces to BLAS backend (#9752)   2024-10-07 21:55:08 +02:00
ggml-cann.h     ggml: unify backend logging mechanism (#9709)                              2024-10-03 17:39:03 +02:00
ggml-cuda.h     ggml: unify backend logging mechanism (#9709)                              2024-10-03 17:39:03 +02:00
ggml-kompute.h  llama : reorganize source code + improve CMake (#8006)                     2024-06-26 18:33:02 +03:00
ggml-metal.h    ggml : add metal backend registry / device (#9713)                         2024-10-07 18:27:51 +03:00
ggml-rpc.h      rpc : add backend registry / device interfaces (#9812)                     2024-10-10 20:14:55 +02:00
ggml-sycl.h     ggml-backend : add device and backend reg interfaces (#9707)               2024-10-03 01:49:47 +02:00
ggml-vulkan.h   vulkan : add backend registry / device interfaces (#9721)                  2024-10-17 02:46:58 +02:00
ggml.h          ggml : fix BLAS with unsupported types (#9775)                             2024-10-08 14:21:43 +02:00