llama.cpp/ggml/include
Latest commit d6fe7abf04 by bandoti: ggml: unify backend logging mechanism (#9709)
* Add scaffolding for ggml logging macros
* Metal backend now uses GGML logging
* CUDA backend now uses GGML logging
* CANN backend now uses GGML logging
* Add enum tag to parameters
* Use C memory allocation funcs
* Fix compile error
* Use GGML_LOG instead of GGML_PRINT
* Rename llama_state to llama_logger_state
* Prevent null format string
* Fix whitespace
* Remove log callbacks from ggml backends
* Remove CUDA log statement
2024-10-03 17:39:03 +02:00
File           Last commit                                            Date
ggml-alloc.h   Threadpool: take 2 (#8672)                             2024-08-30 01:20:53 +02:00
ggml-backend.h ggml: unify backend logging mechanism (#9709)          2024-10-03 17:39:03 +02:00
ggml-blas.h    ggml-backend : add device and backend reg interfaces (#9707)  2024-10-03 01:49:47 +02:00
ggml-cann.h    ggml: unify backend logging mechanism (#9709)          2024-10-03 17:39:03 +02:00
ggml-cuda.h    ggml: unify backend logging mechanism (#9709)          2024-10-03 17:39:03 +02:00
ggml-kompute.h llama : reorganize source code + improve CMake (#8006) 2024-06-26 18:33:02 +03:00
ggml-metal.h   ggml: unify backend logging mechanism (#9709)          2024-10-03 17:39:03 +02:00
ggml-rpc.h     ggml-backend : add device and backend reg interfaces (#9707)  2024-10-03 01:49:47 +02:00
ggml-sycl.h    ggml-backend : add device and backend reg interfaces (#9707)  2024-10-03 01:49:47 +02:00
ggml-vulkan.h  ggml-backend : add device and backend reg interfaces (#9707)  2024-10-03 01:49:47 +02:00
ggml.h         ggml: unify backend logging mechanism (#9709)          2024-10-03 17:39:03 +02:00