mirror of https://github.com/ggerganov/llama.cpp.git synced 2025-01-24 10:29:21 +01:00
llama.cpp/.devops/nix
Latest commit e52aba537a by Evgeny Kurnevsky (2024-12-14 10:17:36 -08:00):
nix: allow to override rocm gpu targets ()
This makes it possible to reduce compile time when building for a single GPU.
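The override described in the commit above can be sketched roughly as follows. This is a hypothetical usage example: the attribute name `rocmGpuTargets`, the value format, and the `llamaPackages.llama-cpp` path are assumptions inferred from the commit message, not verified against package.nix.

```nix
# Hypothetical sketch: restrict the ROCm build to a single GPU architecture
# so only that target is compiled, cutting compile time.
# `rocmGpuTargets` and the package path are assumptions; check package.nix
# in this directory for the actual parameter name and expected type.
{
  my-llama = llamaPackages.llama-cpp.override {
    rocmGpuTargets = [ "gfx1100" ]; # build only for one GPU target
  };
}
```

With `.override`, only the named parameter changes; all other arguments to the package function keep their defaults.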
File                   Last commit message                                        Date
apps.nix               examples : remove finetune and train-text-from-scratch ()  2024-07-25 10:39:04 +02:00
devshells.nix          build(nix): Package gguf-py ()                             2024-09-02 14:21:01 +03:00
docker.nix             nix: init singularity and docker images ()                 2024-02-22 11:44:10 -08:00
jetson-support.nix     flake.nix: expose full scope in legacyPackages             2023-12-31 13:14:58 -08:00
nixpkgs-instances.nix  build(nix): Package gguf-py ()                             2024-09-02 14:21:01 +03:00
package-gguf-py.nix    build(nix): Package gguf-py ()                             2024-09-02 14:21:01 +03:00
package.nix            nix: allow to override rocm gpu targets ()                 2024-12-14 10:17:36 -08:00
python-scripts.nix     server : replace behave with pytest ()                     2024-11-26 16:20:18 +01:00
scope.nix              build(nix): Package gguf-py ()                             2024-09-02 14:21:01 +03:00
sif.nix                build(nix): Introduce flake.formatter for nix fmt ()       2024-03-01 15:18:26 -08:00