benchmark - llama : fix compile warnings (2023-05-02 23:09:08 +03:00)
embedding - examples : add llama_init_from_gpt_params() common function (#1290) (2023-05-02 23:39:51 +03:00)
jeopardy - examples : add Jeopardy example (#1168) (2023-04-28 19:13:33 +03:00)
main - main : add option to save full output to session (#1338) (2023-05-10 11:37:14 -04:00)
perplexity - llama : require first token to be BOS (#1303) (2023-05-08 17:41:54 +03:00)
quantize - quantize : make output filename optional, default to ggml-model-<ftype>.bin (#1301) (2023-05-05 00:58:56 +02:00)
quantize-stats - Add git-based build information for better issue tracking (#1232) (2023-05-01 18:23:47 +02:00)
save-load-state - Add git-based build information for better issue tracking (#1232) (2023-05-01 18:23:47 +02:00)
alpaca.sh - examples : improve Alpaca default repeat penalty to better match alpaca.cpp experience (#1107) (2023-04-22 09:54:33 +03:00)
chat-13B.bat - Create chat-13B.bat (#592) (2023-03-29 20:21:09 +03:00)
chat-13B.sh - examples : read chat prompts from a template file (#1196) (2023-05-03 20:58:11 +03:00)
chat.sh - If n_predict == -1, generate forever (2023-03-25 21:51:41 +02:00)
CMakeLists.txt - Various fixes to mat_mul benchmark (#1253) (2023-04-30 12:32:37 +00:00)
common.cpp - main : add option to save full output to session (#1338) (2023-05-10 11:37:14 -04:00)
common.h - main : add option to save full output to session (#1338) (2023-05-10 11:37:14 -04:00)
gpt4all.sh - examples : add -n to alpaca and gpt4all scripts (#706) (2023-04-13 16:03:39 +03:00)
Miku.sh - examples : various prompt and example fixes (#1298) (2023-05-03 18:26:47 +03:00)
reason-act.sh - add example of re-act pattern (#583) (2023-03-29 10:10:24 -05:00)