Home
Welcome to the llama.cpp wiki!
Install
ArchLinux
yay -S llama-cpp
yay -S llama-cpp-cuda
yay -S llama-cpp-opencl
Nix
nix run github:ggerganov/llama.cpp
nix run 'github:ggerganov/llama.cpp#opencl'
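Flags placed after -- are forwarded to the llama.cpp program itself, so a quick sanity check might look like the following (a minimal sketch; the model path is only a placeholder):
nix run github:ggerganov/llama.cpp -- --help
nix run github:ggerganov/llama.cpp -- -m ./models/7B/ggml-model-q4_0.bin -p "Hello"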
NixOS
{ config, pkgs, ... }:
{
  nixpkgs.config.packageOverrides = pkgs: {
    llama-cpp = (
      builtins.getFlake "github:ggerganov/llama.cpp"
    ).packages.${builtins.currentSystem}.default;
  };
  environment.systemPackages = with pkgs; [ llama-cpp ];
}
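After adding this to your configuration, rebuild and switch as usual (standard NixOS workflow, nothing llama.cpp-specific):
sudo nixos-rebuild switch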
Android Termux
Wait for https://github.com/termux/termux-packages/pull/17457 to be merged, then:
apt install llama-cpp
Windows MSYS2
pacman -S llama-cpp
Debian (Ubuntu)
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...
cmake --build build
cd build
cpack -G DEB
dpkg -i *.deb
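Here -D... is a placeholder for whatever CMake options you want to pass at configure time. For illustration only, a CUDA-enabled release configure at the time of writing might look like this (LLAMA_CUBLAS being the CUDA toggle then in use):
cmake -Bbuild -DCMAKE_BUILD_TYPE=Release -DLLAMA_CUBLAS=ON
The same placeholder applies to the RPM build below.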
Red Hat
git clone --depth=1 https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -Bbuild -D...
cmake --build build
cd build
cpack -G RPM
rpm -i *.rpm
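With either package installed, a minimal smoke test is to run the main example binary against a model (a sketch only: the binary was named main at the time of writing, and the model path is a placeholder you must adjust):
main -m ./models/7B/ggml-model-q4_0.bin -p "Building a website can be done in 10 simple steps:" -n 64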
Users Guide
Useful information for users that doesn't fit into the README.
- Home
- Feature Matrix
- GGML Tips & Tricks
- Chat Templating
- Metadata Override
- HuggingFace Model Card Metadata Interoperability Consideration
Technical Details
Information useful for maintainers and developers that does not fit into code comments.
GitHub Actions Main Branch Status
Click on a badge to jump to its workflow. This gives a general view of all the actions, so we can notice more quickly if, and where, main-branch automation is broken.