llama.cpp/examples/embedding

This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.

Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

Unix-based systems (Linux, macOS, etc.):

./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null

Windows:

embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null

The above command will output space-separated float values.
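To use the resulting vector in another program, the space-separated floats can simply be parsed from standard input. As a rough sketch (not part of llama.cpp; the helper name parse_embedding is made up for illustration), a small C++ program like the following reads the embedding and prints its dimensionality and L2 norm:

// parse_embedding.cpp - hypothetical helper, not part of the llama.cpp tree.
// Reads the space-separated floats printed by ./embedding from stdin and
// prints the number of dimensions and the L2 norm of the vector.
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> v;
    float x;
    while (std::cin >> x) {
        v.push_back(x);            // collect each embedding component
    }
    double norm = 0.0;
    for (float f : v) {
        norm += (double) f * f;    // accumulate squared components
    }
    std::cout << "dimensions: " << v.size() << "\n";
    std::cout << "l2 norm:    " << std::sqrt(norm) << "\n";
    return 0;
}

It could be used by piping the embedding output into it, e.g.:

./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null | ./parse_embedding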