Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-10-29 22:20:15 +01:00
Minor style changes
This commit is contained in:
parent dc6a845b85
commit 1daf4dd712
@@ -178,13 +178,15 @@ If you want a more ChatGPT-like experience, you can run in interactive mode by p
In this mode, you can always interrupt generation by pressing Ctrl+C and entering one or more lines of text, which will be converted into tokens and appended to the current context. You can also specify a *reverse prompt* with the parameter `-r "reverse prompt string"`: whenever the exact tokens of the reverse prompt string are encountered in the generation, control is handed back to the user for input. A typical use is a prompt that makes LLaMA emulate a chat between multiple users, say Alice and Bob, combined with `-r "Alice:"`.
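For instance, a minimal sketch of such an invocation could look like the following (the model path is only an assumption; point `-m` at whatever quantized model file you actually have):

```bash
# Run interactively (-i) with colored output; whenever the model emits the
# exact tokens "Alice:", generation pauses and control returns to the user.
./main -m ./models/7B/ggml-model-q4_0.bin --color -i -r "Alice:"
```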
Here is an example few-shot interaction, invoked with the command
```bash
# default arguments using 7B model
./chat.sh

# custom arguments using 13B model
./main -m ./models/13B/ggml-model-q4_0.bin -n 256 --repeat_penalty 1.0 --color -i -r "User:" -f prompts/chat-with-bob.txt
```
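The file passed with `-f` supplies the few-shot transcript that sets up the chat, and its last line is typically the same string passed to `-r`, so generation stops exactly where the user is expected to type. A hypothetical prompt file in that spirit might look like this (the bundled `prompts/chat-with-bob.txt` may differ in wording):

```bash
# Sketch of a few-shot chat prompt; the file name and dialog text are made up.
# The trailing "User:" line is what -r "User:" matches, handing input back to you.
cat > prompts/chat-with-alice.txt << 'EOF'
Transcript of a dialog where the User interacts with an assistant named Alice.

User: Hello, Alice.
Alice: Hello! How can I help you today?
User:
EOF
```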
Note the use of `--color` to distinguish between user input and generated text.
![image](https://user-images.githubusercontent.com/1991296/224575029-2af3c7dc-5a65-4f64-a6bb-517a532aea38.png)