llama.cpp/examples/batched.swift

This is a Swift clone of `examples/batched`.

$ make
$ ./llama-batched-swift MODEL_PATH [PROMPT] [PARALLEL]
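
For orientation, below is a minimal sketch of the batched-decoding flow this example implements, written against the llama.cpp C API as exposed through the package's `llama` target. Function and field names (`llama_model_load_from_file`, `llama_model_get_vocab`, `llama_init_from_model`, vocab-first `llama_tokenize`, the `llama_batch` fields) follow the upstream API at the time of writing and may differ between versions; sampling and per-sequence generation are omitted, so treat this as an illustration rather than the example itself. The full implementation lives under `Sources/`.

```swift
// Minimal sketch: load a model, tokenize the prompt, and decode it once into
// sequence 0 of a batch. The real example additionally copies the prompt's KV
// cache to the other sequences and samples one token per sequence per decode.
import llama

guard CommandLine.arguments.count >= 2 else {
    fatalError("usage: llama-batched-swift MODEL_PATH [PROMPT] [PARALLEL]")
}
let modelPath = CommandLine.arguments[1]
let prompt    = CommandLine.arguments.count > 2 ? CommandLine.arguments[2] : "Hello my name is"
let nParallel = CommandLine.arguments.count > 3 ? Int32(CommandLine.arguments[3]) ?? 1 : 1

llama_backend_init()
defer { llama_backend_free() }

// Load the model and get its vocabulary (the vocab-based API).
guard let model = llama_model_load_from_file(modelPath, llama_model_default_params()) else {
    fatalError("failed to load model at \(modelPath)")
}
defer { llama_model_free(model) }
guard let vocab = llama_model_get_vocab(model) else {
    fatalError("failed to get model vocab")
}

// Tokenize the prompt; the buffer is sized generously for a short prompt.
var tokens = [llama_token](repeating: 0, count: prompt.utf8.count + 16)
let nTokens = llama_tokenize(vocab, prompt, Int32(prompt.utf8.count),
                             &tokens, Int32(tokens.count), true, true)
guard nTokens > 0 else { fatalError("tokenization failed") }
tokens = Array(tokens.prefix(Int(nTokens)))

// Create a context large enough for the prompt plus some generated tokens
// in each of the n_parallel sequences.
var ctxParams = llama_context_default_params()
ctxParams.n_ctx = UInt32(tokens.count + 32) * UInt32(nParallel)
guard let context = llama_init_from_model(model, ctxParams) else {
    fatalError("failed to create context")
}
defer { llama_free(context) }

// Put the whole prompt into sequence 0 of a batch and decode it once.
var batch = llama_batch_init(Int32(tokens.count), 0, nParallel)
defer { llama_batch_free(batch) }

batch.n_tokens = Int32(tokens.count)
for (i, token) in tokens.enumerated() {
    batch.token[i]      = token
    batch.pos[i]        = llama_pos(i)
    batch.n_seq_id[i]   = 1
    batch.seq_id[i]![0] = 0
    batch.logits[i]     = 0   // no logits needed for prompt tokens...
}
batch.logits[Int(batch.n_tokens) - 1] = 1  // ...except the last, which seeds sampling

guard llama_decode(context, batch) == 0 else {
    fatalError("llama_decode failed")
}
// From here the real example shares sequence 0's KV cache with the other
// sequences and generates tokens for all of them in parallel.
```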