# llama.cpp/example/embedding
This example demonstrates how to generate a high-dimensional embedding vector for a given text with llama.cpp.
## Quick Start

To get started right away, run the following command, making sure to use the correct path for the model you have:

### Unix-based systems (Linux, macOS, etc.):

```bash
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null
```

### Windows:

```powershell
embedding.exe -m ./path/to/model --log-disable -p "Hello World!" 2>$null
```
The above command will output space-separated float values.
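
For illustration, here is a minimal C++ sketch (not part of llama.cpp) that consumes that output: it reads the space-separated floats from stdin and prints the vector dimension and its L2 norm. The program name `parse_embedding` used below is hypothetical.

```cpp
// parse_embedding.cpp -- minimal sketch, assuming the embedding example's
// output (space-separated floats) is piped to stdin.
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> embedding;
    float value;
    // Read every whitespace-separated float from stdin.
    while (std::cin >> value) {
        embedding.push_back(value);
    }

    // Compute the L2 norm of the embedding vector.
    double norm = 0.0;
    for (float v : embedding) {
        norm += static_cast<double>(v) * v;
    }
    norm = std::sqrt(norm);

    std::cout << "dimensions: " << embedding.size() << "\n";
    std::cout << "L2 norm:    " << norm << "\n";
    return 0;
}
```

Example usage (Unix-like shells):

```bash
./embedding -m ./path/to/model --log-disable -p "Hello World!" 2>/dev/null | ./parse_embedding
```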