text-generation-webui/modules (latest commit: 2025-01-05 05:47:00 -08:00)
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| grammar/ | Let grammar escape backslashes (#5865) | 2024-05-19 20:26:09 -03:00 |
| AutoGPTQ_loader.py | Backend cleanup (#6025) | 2024-05-21 13:32:02 -03:00 |
| block_requests.py | UI: update a link | 2024-12-18 06:28:14 -08:00 |
| cache_utils.py | Fix StreamingLLM when content is removed from the beginning of the prompt | 2024-03-14 09:18:54 -07:00 |
| callbacks.py | Make responses start faster by removing unnecessary cleanup calls (#6625) | 2025-01-01 18:33:38 -03:00 |
| chat.py | UI: add a "Search chats" input field | 2025-01-02 18:46:40 -08:00 |
| deepspeed_parameters.py | Fix typo in deepspeed_parameters.py (#3222) | 2023-07-24 11:17:28 -03:00 |
| evaluate.py | Perplexity evaluation: print to terminal after calculation is finished | 2024-02-28 19:58:21 -08:00 |
| exllamav2_hf.py | UI: Set cache_type to fp16 by default | 2024-12-17 19:44:20 -08:00 |
| exllamav2.py | Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF) | 2025-01-04 16:25:06 -08:00 |
| extensions.py | Move update_wizard_windows.sh to update_wizard_windows.bat (oops) | 2024-03-04 19:26:24 -08:00 |
| github.py | Fix several typos in the codebase (#6151) | 2024-06-22 21:40:25 -03:00 |
| gradio_hijack.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00 |
| html_generator.py | UI: reduce the size of CSS sent to the UI during streaming | 2025-01-04 14:09:36 -08:00 |
| llama_cpp_python_hijack.py | Fix locally compiled llama-cpp-python failing to import | 2024-10-14 13:24:13 -07:00 |
| llamacpp_hf.py | UI: Set cache_type to fp16 by default | 2024-12-17 19:44:20 -08:00 |
| llamacpp_model.py | UI: Set cache_type to fp16 by default | 2024-12-17 19:44:20 -08:00 |
| loaders.py | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00 |
| logging_colors.py | Lint | 2023-12-19 21:36:57 -08:00 |
| logits.py | Fix CUDA error on MPS backend during API request (#6572) | 2025-01-02 00:06:11 -03:00 |
| LoRA.py | Fix CUDA error on MPS backend during API request (#6572) | 2025-01-02 00:06:11 -03:00 |
| metadata_gguf.py | llama.cpp: read instruction template from GGUF metadata (#4975) | 2023-12-18 01:51:58 -03:00 |
| models_settings.py | Remove AutoAWQ as a standalone loader | 2024-07-23 15:31:17 -07:00 |
| models.py | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00 |
| one_click_installer_check.py | Lint | 2023-11-16 18:03:06 -08:00 |
| presets.py | Exclude Top Choices (XTC): A sampler that boosts creativity, breaks writing clichés, and inhibits non-verbatim repetition (#6335) | 2024-09-27 22:50:12 -03:00 |
| prompts.py | Fix "send instruction template to..." buttons (closes #4625) | 2023-11-16 18:16:42 -08:00 |
| relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00 |
| sampler_hijack.py | Connect XTC, DRY, smoothing_factor, and dynatemp to ExLlamaV2 loader (non-HF) | 2025-01-04 16:25:06 -08:00 |
| sane_markdown_lists.py | Sane handling of markdown lists (#6626) | 2025-01-04 15:41:31 -03:00 |
| shared.py | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00 |
| tensorrt_llm.py | Add TensorRT-LLM support (#5715) | 2024-06-24 02:30:03 -03:00 |
| text_generation.py | Add a "Static KV cache" option for transformers | 2025-01-04 17:52:57 -08:00 |
| training.py | Don't import PEFT unless necessary | 2024-09-03 19:40:53 -07:00 |
| ui_chat.py | UI: add a "Search chats" input field | 2025-01-02 18:46:40 -08:00 |
| ui_default.py | Lint | 2024-12-17 20:13:32 -08:00 |
| ui_file_saving.py | Fix the "save preset" event | 2024-10-01 11:20:48 -07:00 |
| ui_model_menu.py | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00 |
| ui_notebook.py | Lint | 2024-12-17 20:13:32 -08:00 |
| ui_parameters.py | Add a "Static KV cache" option for transformers | 2025-01-04 17:52:57 -08:00 |
| ui_session.py | UI: improve the style of code blocks in light theme | 2024-07-20 20:32:57 -07:00 |
| ui.py | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00 |
| utils.py | Optimize the UI (#6251) | 2024-07-21 00:01:42 -03:00 |
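
The most recent commits above (loaders.py, models.py, shared.py, ui_model_menu.py, ui.py) all reference a new `--torch-compile` flag for the transformers loader. As a rough illustration only, not the repository's actual implementation: such a flag typically toggles wrapping the loaded model with PyTorch's `torch.compile`. In this sketch, `model_id` and `use_torch_compile` are made-up names standing in for whatever the real code uses.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical stand-ins; not identifiers from text-generation-webui.
model_id = "facebook/opt-125m"
use_torch_compile = True  # what a --torch-compile CLI flag might set (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

if use_torch_compile:
    # torch.compile (PyTorch 2.x) JIT-compiles the forward pass; attribute
    # access such as .generate is forwarded to the wrapped module.
    model = torch.compile(model)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```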