baby-llama/ (2023-05-13 15:56:40 +03:00)
    ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)
benchmark/ (2023-05-20 11:06:37 +03:00)
    llama : add llama_init_backend() API (close #1527)
embedding/ (2023-05-20 11:06:37 +03:00)
    llama : add llama_init_backend() API (close #1527)
jeopardy/ (2023-04-28 19:13:33 +03:00)
    examples : add Jeopardy example (#1168)
main/ (2023-05-25 20:18:01 -06:00)
    Some improvements to loading the session with --prompt-cache (#1550)
perplexity/ (2023-05-20 11:06:37 +03:00)
    llama : add llama_init_backend() API (close #1527)
quantize/ (2023-05-20 11:06:37 +03:00)
    llama : add llama_init_backend() API (close #1527)
quantize-stats/ (2023-05-17 22:12:01 +00:00)
    Remove unused n_parts parameter (#1509)
save-load-state/ (2023-05-17 22:12:01 +00:00)
    Remove unused n_parts parameter (#1509)
server/ (2023-05-21 20:51:18 +03:00)
    examples : add server example with REST API (#1443)
alpaca.sh (2023-04-22 09:54:33 +03:00)
    examples : Improve Alpaca Default Repeat Penalty: Better Match Alpaca.cpp Experience (#1107)
chat-13B.bat (2023-03-29 20:21:09 +03:00)
    Create chat-13B.bat (#592)
chat-13B.sh (2023-05-03 20:58:11 +03:00)
    examples : read chat prompts from a template file (#1196)
chat-persistent.sh (2023-05-24 09:16:22 +03:00)
    chat-persistent.sh : use bracket expressions in grep (#1564)
chat.sh (2023-03-25 21:51:41 +02:00)
    If n_predict == -1, generate forever
CMakeLists.txt (2023-05-21 20:51:18 +03:00)
    examples : add server example with REST API (#1443)
common.cpp (2023-05-20 00:40:02 -07:00)
    Fix for mingw (#1462)
common.h (2023-05-19 20:14:51 +03:00)
    minor : fix compile warnings
gpt4all.sh (2023-04-13 16:03:39 +03:00)
    examples : add -n to alpaca and gpt4all scripts (#706)
Miku.sh (2023-05-03 18:26:47 +03:00)
    examples : various prompt and example fixes (#1298)
reason-act.sh (2023-03-29 10:10:24 -05:00)
    add example of re-act pattern (#583)