From c780e75305dba1f67691a8dc0e8bc8425838a452 Mon Sep 17 00:00:00 2001
From: Jeximo
Date: Tue, 7 May 2024 21:26:43 -0300
Subject: [PATCH] Further tidy on Android instructions README.md (#7077)

* Further tidy on Android instructions README.md

Fixed some logic when following readme direction

* Clean up redundent information

A new user arriving will see simple directions on llama.cpp homepage

* corrected puncuation

Period after cmake, colon after termux

* re-word for clarity

method seems to be more correct, instead of alternative in this context

* Organized required packages per build type

building llama.cpp with NDK on a pc doesn't require installing clang, cmake, git, or wget in termux.

* README.md corrected title

* fix trailing whitespace

---
 README.md | 47 +++++++++++++++++++++--------------------------
 1 file changed, 21 insertions(+), 26 deletions(-)

diff --git a/README.md b/README.md
index 75fc10a15..1c960b8c1 100644
--- a/README.md
+++ b/README.md
@@ -936,17 +936,25 @@ If your issue is with model generation quality, then please at least scan the fo
 
 ### Android
 
+#### Build on Android using Termux
+[Termux](https://github.com/termux/termux-app#installation) is a method to execute `llama.cpp` on an Android device (no root required).
+```
+apt update && apt upgrade -y
+apt install git make cmake
+```
+
+It's recommended to move your model inside the `~/` directory for best performance:
+```
+cd storage/downloads
+mv model.gguf ~/
+```
+
+[Get the code](https://github.com/ggerganov/llama.cpp#get-the-code) & [follow the Linux build instructions](https://github.com/ggerganov/llama.cpp#build) to build `llama.cpp`.
+
 #### Building the Project using Android NDK
-You can easily run `llama.cpp` on Android device with [termux](https://termux.dev/).
-
-First, install the essential packages for termux:
-```
-pkg install clang wget git cmake
-```
-Second, obtain the [Android NDK](https://developer.android.com/ndk) and then build with CMake:
-
-You can execute the following commands on your computer to avoid downloading the NDK to your mobile. Of course, you can also do this in Termux.
+Obtain the [Android NDK](https://developer.android.com/ndk) and then build with CMake.
+Execute the following commands on your computer to avoid downloading the NDK to your mobile. Alternatively, you can also do this in Termux:
 ```
 $ mkdir build-android
 $ cd build-android
@@ -954,7 +962,9 @@ $ export NDK=
 $ cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod ..
 $ make
 ```
-Install [termux](https://termux.dev/) on your device and run `termux-setup-storage` to get access to your SD card.
+
+Install [termux](https://github.com/termux/termux-app#installation) on your device and run `termux-setup-storage` to get access to your SD card (if Android 11+ then run the command twice).
+
 Finally, copy these built `llama` binaries and the model file to your device storage.
 Because the file permissions in the Android sdcard cannot be changed, you can copy the executable files to the `/data/data/com.termux/files/home/bin` path, and then execute the following commands in Termux to add executable permission:
 (Assumed that you have pushed the built executable files to the /sdcard/llama.cpp/bin path using `adb push`)
@@ -976,25 +986,10 @@ $cd /data/data/com.termux/files/home/bin
 $./main -m ../model/llama-2-7b-chat.Q4_K_M.gguf -n 128 -cml
 ```
 
-Here is a demo of an interactive session running on Pixel 5 phone:
+Here's a demo of an interactive session running on Pixel 5 phone:
 
 https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4
 
-#### Build on Android using Termux
-[Termux](https://github.com/termux/termux-app#installation) is an alternative to execute `llama.cpp` on an Android device (no root required).
-```
-apt update && apt upgrade -y
-apt install git
-```
-
-It's recommended to move your model inside the `~/` directory for best performance:
-```
-cd storage/downloads
-mv model.gguf ~/
-```
-
-[Follow the Linux build instructions](https://github.com/ggerganov/llama.cpp#build) to build `llama.cpp`.
-
 ### Docker
 
 #### Prerequisites
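Taken together, the reorganized Termux section boils down to the flow below. This is a minimal sketch, not part of the patch: it assumes the default `make` build still produces a `./main` binary (as the README's own run example implies) and uses `model.gguf` as a placeholder filename.

```
# Minimal sketch of the on-device Termux path (not part of the patch).
# Assumptions: the default `make` build produces ./main, and model.gguf is a
# placeholder for whatever model file you actually downloaded.

apt update && apt upgrade -y
apt install git make cmake

# Get the code and build it directly on the device
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Grant storage access (run it twice on Android 11+), then move the model into ~/
termux-setup-storage
mv ~/storage/downloads/model.gguf ~/

# Run an interactive chat session
./main -m ~/model.gguf -n 128 -cml
```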
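The NDK route keeps the same shape after the patch: cross-compile on a computer, push the results to the device, then make them executable inside Termux. The exact copy and chmod commands live in an unchanged part of the README, so the following is a hedged sketch; `$NDK`, the `bin/` output location, and the `/sdcard/llama.cpp` layout are assumptions you may need to adjust.

```
# Hedged sketch of the cross-compile path (not part of the patch).
# Assumptions: CMake places the binary in build-android/bin/, the model filename
# matches the README's run example, and storage access is already granted in Termux.

# On your computer: configure and build arm64 binaries with the NDK toolchain
mkdir build-android && cd build-android
export NDK=/path/to/android-ndk        # placeholder: point this at your NDK install
cmake -DCMAKE_TOOLCHAIN_FILE=$NDK/build/cmake/android.toolchain.cmake \
      -DANDROID_ABI=arm64-v8a -DANDROID_PLATFORM=android-23 \
      -DCMAKE_C_FLAGS=-march=armv8.4a+dotprod ..
make

# Push the binary and a model to the device
adb shell mkdir -p /sdcard/llama.cpp/bin /sdcard/llama.cpp/model
adb push bin/main /sdcard/llama.cpp/bin/
adb push llama-2-7b-chat.Q4_K_M.gguf /sdcard/llama.cpp/model/

# In Termux on the device: sdcard files cannot be made executable, so copy the
# binary (and model) into Termux's home, add the execute bit, and run
mkdir -p ~/bin ~/model
cp /sdcard/llama.cpp/bin/main ~/bin/
cp /sdcard/llama.cpp/model/llama-2-7b-chat.Q4_K_M.gguf ~/model/
cd ~/bin
chmod +x ./main
./main -m ../model/llama-2-7b-chat.Q4_K_M.gguf -n 128 -cml
```

Here `~` is Termux's home, `/data/data/com.termux/files/home`, so `~/bin` matches the path the README references.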