File | Last commit message | Last commit date
grammar | Better HF grammar implementation (#4953) | 2023-12-17 02:01:23 -03:00
AutoGPTQ_loader.py | AutoGPTQ: Add --disable_exllamav2 flag (Mixtral CPU offloading needs this) | 2023-12-15 06:46:13 -08:00
block_requests.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
cache_utils.py | Fix StreamingLLM when content is removed from the beginning of the prompt | 2024-03-14 09:18:54 -07:00
callbacks.py | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
chat.py | fix handling of prefix with intentional space | 2024-04-13 04:00:24 +00:00
deepspeed_parameters.py | Fix typo in deepspeed_parameters.py (#3222) | 2023-07-24 11:17:28 -03:00
evaluate.py | Perplexity evaluation: print to terminal after calculation is finished | 2024-02-28 19:58:21 -08:00
exllamav2_hf.py | Update cache_4bit documentation (#5649) | 2024-03-07 13:08:21 -03:00
exllamav2.py | Add cache_4bit option for ExLlamaV2 (#5645) | 2024-03-06 23:02:25 -03:00
extensions.py | Move update_wizard_windows.sh to update_wizard_windows.bat (oops) | 2024-03-04 19:26:24 -08:00
github.py | Lint | 2023-09-25 20:31:11 -07:00
GPTQ_loader.py | Improve several log messages | 2023-12-19 20:54:32 -08:00
gradio_hijack.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
html_generator.py | Fix issue #5783 for character images with transparency (#5827) | 2024-04-11 02:23:43 -03:00
llama_cpp_python_hijack.py | Bump llama-cpp-python to 0.2.61 & fix the crash | 2024-04-11 14:15:34 -07:00
llamacpp_hf.py | Fix loading sharded GGUF models through llamacpp_HF | 2024-04-11 14:50:05 -07:00
llamacpp_model.py | llama.cpp: add a progress bar for prompt evaluation | 2024-02-07 21:56:10 -08:00
loaders.py | Remove CTransformers support (#5807) | 2024-04-04 20:23:58 -03:00
logging_colors.py | Lint | 2023-12-19 21:36:57 -08:00
logits.py | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
LoRA.py | Revert "Remove non-HF ExLlamaV2 loader (#5431)" | 2024-02-06 06:21:36 -08:00
metadata_gguf.py | llama.cpp: read instruction template from GGUF metadata (#4975) | 2023-12-18 01:51:58 -03:00
models_settings.py | Fix loading command-r context length metadata | 2024-04-10 21:39:59 -07:00
models.py | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
monkey_patch_gptq_lora.py | fix lora training with alpaca_lora_4bit (#3853) | 2023-09-11 01:22:20 -03:00
one_click_installer_check.py | Lint | 2023-11-16 18:03:06 -08:00
presets.py | Organize the parameters tab (#5767) | 2024-03-28 16:45:03 -03:00
prompts.py | Fix "send instruction template to..." buttons (closes #4625) | 2023-11-16 18:16:42 -08:00
relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00
RoPE.py | Lint | 2024-01-09 16:27:50 -08:00
sampler_hijack.py | Cubic sampling w/ curve param (#5551) | 2024-03-03 13:22:21 -03:00
shared.py | Add a simple min_p preset, make it the default (#5836) | 2024-04-09 12:50:16 -03:00
text_generation.py | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
training.py | Perplexity evaluation: make UI events more robust (attempt) | 2024-02-22 07:13:22 -08:00
ui_chat.py | UI: Focus on the chat input after starting a new chat | 2024-04-06 12:57:57 -07:00
ui_default.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
ui_file_saving.py | Improve the file saving/deletion menus | 2024-01-09 06:33:47 -08:00
ui_model_menu.py | Add Ascend NPU support (basic) (#5541) | 2024-04-11 18:42:20 -03:00
ui_notebook.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
ui_parameters.py | Add a simple min_p preset, make it the default (#5836) | 2024-04-09 12:50:16 -03:00
ui_session.py | Bump gradio to 4.23 (#5758) | 2024-03-26 16:32:20 -03:00
ui.py | Fix saving of UI defaults to settings.yaml - Fixes #5592 (#5794) | 2024-04-11 18:19:16 -03:00
utils.py | Add a menu for customizing the instruction template for the model (#5521) | 2024-02-16 14:21:17 -03:00