| File | Last commit message | Last commit date |
| --- | --- | --- |
| callbacks.py | Remove mutable defaults from function signature. (#1663) | 2023-05-08 22:55:41 -03:00 |
| chat.py | Add chat-instruct mode (#2049) | 2023-05-14 10:43:55 -03:00 |
| deepspeed_parameters.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| evaluate.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| extensions.py | Fix custom_generate_chat_prompt (#1965) | 2023-05-10 11:29:59 -03:00 |
| GPTQ_loader.py | Fix bug in --cpu-memory | 2023-05-12 06:17:07 -03:00 |
| html_generator.py | Add markdown table rendering | 2023-05-10 13:41:23 -03:00 |
| llama_attn_hijack.py | Better warning messages | 2023-05-03 21:43:17 -03:00 |
| llamacpp_model.py | Add in-memory cache support for llama.cpp (#1936) | 2023-05-15 20:19:55 -03:00 |
| logging_colors.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| LoRA.py | fixed LoRA loading issue (#1865) | 2023-05-08 16:21:55 -03:00 |
| models.py | Create get_max_memory_dict() function | 2023-05-15 19:38:27 -03:00 |
| monkey_patch_gptq_lora.py | Better warning messages | 2023-05-03 21:43:17 -03:00 |
| RWKV.py | Style improvements (#1957) | 2023-05-09 22:49:39 -03:00 |
| shared.py | add api port options (#1990) | 2023-05-15 20:44:16 -03:00 |
| text_generation.py | Better handle spaces in LlamaTokenizer | 2023-05-11 17:55:50 -03:00 |
| training.py | Fix Training fails when evaluation dataset is selected (#2099) | 2023-05-16 13:40:19 -03:00 |
| ui.py | Add settings UI for llama.cpp and fixed reloading of llama.cpp models (#2087) | 2023-05-15 19:51:23 -03:00 |
| utils.py | Fix prompt order in the dropdown | 2023-05-10 02:24:09 -03:00 |