Used to generate raw completions starting from your prompt.

## Default tab

This tab contains two main text boxes: Input, where you enter your prompt, and Output, where the model output will appear.

### Input

The number at the lower right of the Input box counts the tokens in the input. It is updated whenever you change the input text, as long as a model is loaded (otherwise there is no tokenizer to count the tokens).

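The count depends entirely on the loaded model's tokenizer, which is why no number is shown without a model. As a rough illustration only, with a toy word-and-punctuation splitter standing in for a real subword tokenizer, counting works along these lines:

```python
import re

def toy_tokenize(text):
    # Toy stand-in for a real subword tokenizer: one token per word or
    # punctuation mark. Real tokenizers usually split differently and
    # produce more tokens than this.
    return re.findall(r"\w+|[^\w\s]", text)

def count_tokens(text):
    # The UI shows this kind of count for the current Input text.
    return len(toy_tokenize(text))

print(count_tokens("Hello, world!"))  # 4 tokens: Hello , world !
```

The same text can therefore yield different counts under different models, since each model ships its own tokenizer.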
Below the Input box, the following buttons can be found:
* **Generate**: starts a new generation.
* **Stop**: stops an ongoing generation as soon as the next token is generated (which can take a while for a slow model).
* **Continue**: starts a new generation taking as input the text in the "Output" box.
In the **Prompt** menu, you can select from some predefined prompts defined under `text-generation-webui/prompts`. The 💾 button saves your current input as a new prompt, the 🗑️ button deletes the selected prompt, and the 🔄 button refreshes the list. If you come up with an interesting prompt for a certain task, you are welcome to submit it to the repository.
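Since prompts are stored as files under that directory, you can also add one by hand and then use the 🔄 button to pick it up. A minimal sketch (the file name and `.txt` extension here are assumptions; check the existing files in your `prompts` folder for the actual convention):

```python
from pathlib import Path

# Assumed location and extension; "translate" is a hypothetical prompt name.
prompts_dir = Path("text-generation-webui/prompts")
prompts_dir.mkdir(parents=True, exist_ok=True)  # only for this self-contained sketch
(prompts_dir / "translate.txt").write_text("Translate the following to French:\n")
```

After refreshing, the new entry should appear in the **Prompt** menu alongside the predefined ones.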

### Output

Five tabs can be found:

* **Raw**: where the raw text generated by the model appears.
* **Markdown**: contains a "Render" button that you can click at any time to render the current output as Markdown. This is particularly useful for models that generate LaTeX equations, like GALACTICA.
* **HTML**: displays the output in an HTML style that is meant to be easier to read. Its style is defined under `text-generation-webui/css/html_readable_style.css`.
* **Logits**: when you click on "Get next token probabilities", this tab displays the 50 most likely next tokens and their probabilities based on your current input. If "Use samplers" is checked, the probabilities will be the ones after the sampling parameters in the "Parameters" > "Generation" tab are applied. Otherwise, they will be the raw probabilities generated by the model.
* **Tokens**: allows you to tokenize your prompt and see the ID numbers of the individual tokens.
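The "raw probabilities" case in the Logits tab amounts to a plain softmax over the model's output logits, followed by taking the top 50 entries. A minimal sketch of that computation (the function name and toy logits are illustrative, not part of the webui's code):

```python
import math

def top_k_probs(logits, k=50, temperature=1.0):
    # Softmax over the logits (max subtraction keeps exp() numerically stable),
    # then the k most likely token ids paired with their probabilities.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [(i, probs[i]) for i in ranked[:k]]

# With three candidate tokens, the highest logit gets the highest probability:
print(top_k_probs([2.0, 1.0, 0.0], k=2))
```

With "Use samplers" checked, the distribution is instead reshaped by the active sampling parameters (temperature, top-p, and so on) before the top tokens are taken, so the displayed probabilities can differ substantially from the raw ones.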

## Notebook tab

Works exactly like the Default tab, except that the output appears in the same text box as the input.
It contains the following additional button:
* **Regenerate**: uses your previous input for generation while discarding the last output.