Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-27 22:59:24 +01:00)
Commit a5735e4426:

* ggml: Added OpenMP for multi-threads processing
* ggml : Limit the number of threads used to avoid deadlock
* update shared state n_threads in parallel region
* clear numa affinity for main thread even with openmp
* enable openmp by default
* fix msvc build
* disable openmp on macos
* ci : disable openmp with thread sanitizer
* Update ggml.c

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
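The bullet points above describe the change at a high level: run an OpenMP parallel region, cap the requested thread count to avoid deadlock, and update the shared `n_threads` from inside the parallel region so every thread sees the number of threads actually started. The following is a minimal C sketch of that pattern, not the ggml implementation; the `compute_state` struct and `worker` function are hypothetical names used only for illustration.

```c
// Minimal sketch of the OpenMP threading pattern described in the commit
// message above. Illustrative only; it does not reproduce ggml's actual code.
// The names compute_state and worker are hypothetical.
#include <omp.h>
#include <stdio.h>

struct compute_state {
    int n_threads;   // shared thread count, finalized inside the parallel region
};

static void worker(struct compute_state * state, int ith) {
    // each thread would process its own slice of the work based on ith/n_threads
    printf("thread %d of %d\n", ith, state->n_threads);
}

int main(void) {
    struct compute_state state = { .n_threads = 8 };

    // limit the requested thread count to what OpenMP can actually provide
    int n_max = omp_get_max_threads();
    if (state.n_threads > n_max) {
        state.n_threads = n_max;
    }

    #pragma omp parallel num_threads(state.n_threads)
    {
        #pragma omp single
        {
            // update the shared state from inside the parallel region, so all
            // threads see the number of threads that were actually started
            state.n_threads = omp_get_num_threads();
        }
        // the implicit barrier after 'single' makes n_threads visible to all threads
        worker(&state, omp_get_thread_num());
    }

    return 0;
}
```

Built with an OpenMP-enabled compiler (e.g. `cc -fopenmp sketch.c`), this prints one line per thread; the `single` block plus its implicit barrier is what lets the shared thread count be set once inside the region without a data race.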
| Name |
|---|
| bench.yml |
| build.yml |
| close-issue.yml |
| code-coverage.yml |
| docker.yml |
| editorconfig.yml |
| gguf-publish.yml |
| labeler.yml |
| nix-ci-aarch64.yml |
| nix-ci.yml |
| nix-flake-update.yml |
| nix-publish-flake.yml |
| python-check-requirements.yml |
| python-lint.yml |
| server.yml |