Conversations
+ New conversation
{{ conv.messages[0].content }}
Conversations are saved to the browser's localStorage
llama.cpp
Download
Delete
auto
{{ messages.length === 0 ? 'Send a message to start' : '' }}
Send
Stop
Settings
Settings below are saved in the browser's localStorage
System Message
Other sampler settings
Penalties settings
Reasoning models
Expand thought process by default when generating messages
Exclude thought process when sending requests to the API (recommended for DeepSeek-R1)
Advanced config
(debug) Import demo conversation
Show tokens per second
Custom JSON config (for more info, refer to the
server documentation
)
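For example, a custom JSON config can override generation parameters sent with each request. A minimal sketch (the exact set of accepted fields is defined by the llama.cpp server; verify names against the server documentation):

```json
{
  "temperature": 0.7,
  "top_k": 40,
  "top_p": 0.95,
  "max_tokens": 512
}
```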
Reset to default
Close
Save
Cancel
Submit
Thinking
Thought Process
Speed: {{ timings.predicted_per_second.toFixed(1) }} t/s
Prompt
- Tokens: {{ timings.prompt_n }}
- Time: {{ timings.prompt_ms }} ms
- Speed: {{ timings.prompt_per_second.toFixed(1) }} t/s
Generation
- Tokens: {{ timings.predicted_n }}
- Time: {{ timings.predicted_ms }} ms
- Speed: {{ timings.predicted_per_second.toFixed(1) }} t/s
✍️ Edit
🔄 Regenerate
📋 Copy
{{ label || configKey }}
{{ configInfo[configKey] || '(no help message available)' }}