| File | Last commit | Date |
|------|-------------|------|
| AutoGPTQ_loader.py | Add --no_use_cuda_fp16 param for AutoGPTQ | 2023-06-23 12:22:56 -03:00 |
| block_requests.py | Block a cloudfare request | 2023-07-06 22:24:52 -07:00 |
| callbacks.py | Make stop_everything work with non-streamed generation (#2848) | 2023-06-24 11:19:16 -03:00 |
| chat.py | Chat history download creates more detailed file names (#3051) | 2023-07-12 00:10:36 -03:00 |
| deepspeed_parameters.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| evaluate.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| exllama_hf.py | Add Support for Static NTK RoPE scaling for exllama/exllama_hf (#2955) | 2023-07-04 01:13:16 -03:00 |
| exllama.py | Add decode functions to llama.cpp/exllama | 2023-07-07 09:11:30 -07:00 |
| extensions.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| github.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| GPTQ_loader.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| html_generator.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| llama_attn_hijack.py | Prevent unwanted log messages from modules | 2023-05-21 22:42:34 -03:00 |
| llamacpp_model.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| loaders.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| logging_colors.py | Add menus for saving presets/characters/instruction templates/prompts (#2621) | 2023-06-11 12:19:18 -03:00 |
| LoRA.py | Lora fixes for AutoGPTQ (#2818) | 2023-07-09 01:03:43 -03:00 |
| models_settings.py | [Fixed] wbits and groupsize values from model not shown (#2977) | 2023-07-11 23:27:38 -03:00 |
| models.py | Make --model work with argument like models/folder_name | 2023-07-08 10:22:54 -07:00 |
| monkey_patch_gptq_lora.py | Sort some imports | 2023-06-25 01:44:36 -03:00 |
| presets.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00 |
| RWKV.py | Add ExLlama support (#2444) | 2023-06-16 20:35:38 -03:00 |
| sampler_hijack.py | Add repetition penalty range parameter to transformers (#2916) | 2023-06-29 13:40:13 -03:00 |
| shared.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| text_generation.py | Implement sessions + add basic multi-user support (#2991) | 2023-07-04 00:03:30 -03:00 |
| training.py | Add ability to load all text files from a subdirectory for training (#1997) | 2023-07-12 11:44:30 -03:00 |
| ui.py | Add low vram mode on llama cpp (#3076) | 2023-07-12 11:05:13 -03:00 |
| utils.py | Add ability to load all text files from a subdirectory for training (#1997) | 2023-07-12 11:44:30 -03:00 |