
llama.cpp/example/embedding

This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
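
If you have not built the example yet, one way to do so from the repository root is with the bundled Makefile (a minimal sketch under the assumption that you are using the Make build; a CMake build also works, but places the binary under a build directory):

make embedding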

Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

Unix-based systems (Linux, macOS, etc.):

./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null

Windows:

embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null

The above command will output the embedding vector as space-separated float values.
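
Because the vector is printed as plain text, it is easy to post-process with standard tools. The sketch below is not part of the example itself, and it assumes the vector is the only thing written to stdout: it saves the embeddings of two prompts to files, counts the dimensions, and computes their cosine similarity with awk:

# Embed two prompts (hypothetical model path, as above):
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null > a.txt
./embedding -m ./path/to/model --log-disable -p "Hello there!" 2>/dev/null > b.txt

# Number of dimensions in the embedding vector:
wc -w < a.txt

# Cosine similarity of the two embeddings:
awk 'NR==FNR { for (i = 1; i <= NF; i++) a[i] = $i; n = NF; next }
     { for (i = 1; i <= n; i++) { dot += a[i]*$i; na += a[i]*a[i]; nb += $i*$i } }
     END { print dot / (sqrt(na) * sqrt(nb)) }' a.txt b.txt

A similarity close to 1 means the model embeds the two texts near each other; the exact values depend on the model you use.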