# llama.cpp/example/infill

This example shows how to use infill mode with Code Llama models that support it.

Currently, the 7B and 13B models support infill mode.

Infill supports most of the options available in the main example.

For further information, have a look at the main README.md in llama.cpp/example/main/README.md.

## Common Options

In this section, we cover the most commonly used options for running the `infill` program with the LLaMA models:

- `-m FNAME, --model FNAME`: Specify the path to the LLaMA model file (e.g., `models/7B/ggml-model.bin`).
- `-i, --interactive`: Run the program in interactive mode, allowing you to provide input directly and receive real-time responses.
- `-n N, --n-predict N`: Set the number of tokens to predict when generating text. Adjusting this value can influence the length of the generated text.
- `-c N, --ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
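
For example, combining these options for an interactive session might look like the following sketch. The model path is a placeholder (it assumes the CodeLlama file downloaded in the example at the end of this README), and the values are illustrative rather than recommended defaults:

```bash
# Sketch: interactive infill with a 2048-token context and at most 64 predicted
# tokens per completion. The model path is a placeholder; point it at any
# infill-capable GGUF model you have downloaded.
./llama-infill -m models/codellama-13b.Q5_K_S.gguf -c 2048 -n 64 -i
```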
## Input Prompts

The `infill` program provides several ways to interact with the LLaMA models using input prompts:

- `--in-prefix PROMPT_BEFORE_CURSOR`: Provide the prefix directly as a command-line option.
- `--in-suffix PROMPT_AFTER_CURSOR`: Provide the suffix directly as a command-line option.
- `--interactive-first`: Run the program in interactive mode and wait for input right away. (More on this below.)
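
As a sketch, the prefix and suffix can be passed directly on the command line like this (the code snippet is arbitrary and the model path is a placeholder):

```bash
# Sketch: ask the model to fill in the code between the prefix and the suffix.
# The model path is a placeholder; use any infill-capable GGUF model.
./llama-infill -m models/codellama-13b.Q5_K_S.gguf -n 32 \
    --in-prefix "def add(a, b): return " \
    --in-suffix "  # adds two numbers"
```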
## Interaction

The `infill` program offers a seamless way to interact with LLaMA models, allowing users to receive real-time infill suggestions. Interactive mode can be triggered using `--interactive` or `--interactive-first`.

### Interaction Options

- `-i, --interactive`: Run the program in interactive mode, allowing users to get real-time code suggestions from the model.
- `--interactive-first`: Run the program in interactive mode and immediately wait for user input before starting the text generation.
- `--color`: Enable colorized output to visually distinguish between prompts, user input, and generated text.
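
A minimal sketch of starting an interactive session with colorized output (again, the model path is a placeholder):

```bash
# Sketch: wait for user input immediately and colorize output so prompts,
# user input, and generated text are easy to tell apart.
# The model path is a placeholder for your own infill-capable model.
./llama-infill -m models/codellama-13b.Q5_K_S.gguf --interactive-first --color
```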
### Example

Download a model that supports infill, for example CodeLlama:

```console
scripts/hf.sh --repo TheBloke/CodeLlama-13B-GGUF --file codellama-13b.Q5_K_S.gguf --outdir models
```

```bash
./llama-infill -t 10 -ngl 0 -m models/codellama-13b.Q5_K_S.gguf -c 4096 --temp 0.7 --repeat_penalty 1.1 -n 20 --in-prefix "def helloworld():\n print(\"hell" --in-suffix "\n print(\"goodbye world\")\n "
```
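
Here `--in-prefix` and `--in-suffix` describe an incomplete `helloworld()` function, and the model is asked to predict at most 20 tokens (`-n 20`) to fill in the gap between them.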