diff --git a/docs/Generation-parameters.md b/docs/Generation-parameters.md
deleted file mode 100644
index 44774216..00000000
--- a/docs/Generation-parameters.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Generation parameters
-
-For a description of the generation parameters provided by the transformers library, see this link: https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig
-
-### llama.cpp
-
-llama.cpp only uses the following parameters:
-
-* temperature
-* top_p
-* top_k
-* repetition_penalty
-* tfs
-* mirostat_mode
-* mirostat_tau
-* mirostat_eta
-
-### ExLlama
-
-ExLlama only uses the following parameters:
-
-* temperature
-* top_p
-* top_k
-* repetition_penalty
-* repetition_penalty_range
-* typical_p
-
-### RWKV
-
-RWKV only uses the following parameters when loaded through the old .pth weights:
-
-* temperature
-* top_p
-* top_k
diff --git a/docs/README.md b/docs/README.md
index 972f8c44..6ab8d213 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -8,7 +8,6 @@
 * [Docker](Docker.md)
 * [ExLlama](ExLlama.md)
 * [Extensions](Extensions.md)
-* [Generation parameters](Generation-parameters.md)
 * [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md)
 * [LLaMA model](LLaMA-model.md)
 * [llama.cpp](llama.cpp.md)
diff --git a/server.py b/server.py
index 67b630a6..dc368642 100644
--- a/server.py
+++ b/server.py
@@ -357,8 +357,6 @@ def create_settings_menus(default_preset):
         with gr.Accordion("Learn more", open=False):
             gr.Markdown("""
-            Not all parameters are used by all loaders. See [this page](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Generation-parameters.md) for details.
-
             For a technical description of the parameters, the [transformers documentation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) is a good reference.

             The best presets, according to the [Preset Arena](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md) experiment, are: