Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-30 16:07:17 +01:00)
42c76d1358
* Introduce ggml_compute_threadpool
  - OpenMP functional: check
  - Vanilla ggml functional: check
  - ggml w/threadpool functional: check
  - OpenMP no regression: no glaring problems
  - Vanilla ggml no regression: no glaring problems
  - ggml w/threadpool no regression: no glaring problems
* Minor fixes
* fixed use-after-release bug
* fixed a harmless race condition
* fix Android build issue
* fix more race conditions
* fix deadlock for cases where cgraph.n_nodes == 1 and fix --poll case
* threadpool: use cpu_get_num_math to set the default number of threadpool threads
  This way we avoid using E-cores and hyperthreaded siblings.
* bench: create fresh threadpool for each test
  For benchmarking it's better to start a fresh pool for each test with the exact number of
  threads needed for that test. Having larger pools is suboptimal (causes more load, etc).
* atomics: always use stdatomics with clang and use relaxed memory order when polling in ggml_barrier
  This also removes sched_yield() calls from ggml_barrier() to match OpenMP behavior.
* threadpool: make polling the default to match openmp behavior
  All command line args now allow for setting poll to 0 (false).
* threadpool: do not wake up threads in an already paused threadpool
* fix potential race condition in check_for_work
* threadpool: do not create two threadpools if their params are identical
* threadpool: reduce pause/resume/wakeup overhead in common cases
  We now start the threadpool in the paused state only if we have two. The resume is now
  implicit (i.e. new work), which allows for reduced locking and context-switch overhead.
* threadpool: add support for hybrid polling
  The poll params (--poll, ...) now specify the "polling level", i.e. how aggressively we poll
  before waiting on the cond.var. poll=0 means no polling, 1 means poll for 128K rounds then
  wait, 2 for 256K rounds, and so on. The default value of 50 (i.e. 50x128K rounds) seems like
  a decent default across modern platforms. We can tune this further as things evolve.
* threadpool: reduce the number of barriers required
  New work is now indicated with an atomic counter that is incremented for each new graph that
  needs to be computed. This removes the need for an extra barrier for clearing the "new_work"
  flag and removes the special case for trivial graphs.
* threadpool: remove special-casing for disposable threadpools
  With the efficient hybrid polling there is no need to make disposable pools any different.
  This simplifies the overall logic and reduces branching. Include n_threads in the debug print
  for disposable threadpools. Declare the pause and stop flags as atomic_bool; this doesn't
  actually generate any memory barriers and simply informs the thread sanitizer that these
  flags can be written and read by different threads without locking.
* threadpool: do not clear barrier counters between graph computes (fixes race with small graphs)
  This fixes the race condition with very small graphs where the main thread happens to start
  a new graph while the workers are just about to exit from barriers.
* threadpool: use relaxed order for chunk sync
  A full memory barrier is overkill for this since each thread works on a different chunk.
* threadpool: remove abort_callback from threadpool state
* threadpool: better naming for thread/cpumask related functions
* threadpool: consistent use of int type for n_threads params
* threadpool: add support for ggml_threadpool_params_default/init
  Also removes the need for the explicit mask_specified param; an all-zero cpumask means use
  the default (usually inherited) CPU affinity mask.
* threadpool: move typedef into ggml.h
* threadpool: fix apply_priority() function name
* threadpool: fix swift wrapper errors due to n_threads int type cleanup
* threadpool: enable --cpu-mask and other threadpool related options only if threadpool is enabled
* threadpool: replace checks for compute_thread ret code with proper status check
* threadpool: simplify threadpool init logic and fix main thread affinity application
  Most of the init code is now exactly the same between threadpool and openmp.
* threadpool: update threadpool resume/pause function names
* threadpool: enable openmp by default for now
* threadpool: don't forget to free workers state when omp is enabled
* threadpool: avoid updating process priority on the platforms that do not require it
  On Windows we need to change the overall process priority class in order to set thread
  priorities, but on Linux, Mac, etc. we do not need to touch the overall process settings.
* threadpool: update calling thread prio and affinity only at start/resume
  This avoids extra syscalls for each graph_compute().
* llama-bench: turn threadpool params into vectors, add output headers, etc.
* llama-bench: add support for cool-off between tests (--delay)
  This helps for long running tests on platforms that are thermally limited (phones, laptops, etc).
  --delay (disabled by default) introduces a sleep for N seconds before starting each test.
* threadpool: move process priority setting into the apps (bench and cli)
  This avoids changing the overall process priority on Windows for the apps that use
  ggml/llama.cpp directly.
* threadpool: move all pause/resume logic into ggml
* threadpool: further api cleanup and prep for future refactoring
  All threadpool related functions and structs use the ggml_threadpool prefix.
* threadpool: minor indent fixes
* threadpool: improve setpriority error message
* Update examples/llama-bench/llama-bench.cpp
  Co-authored-by: slaren <slarengh@gmail.com>
* threadpool: fix indent in set_threadpool call
* use int32_t for n_thread type in public llama.cpp API
* threadpool: use _new and _free instead of _create and _release
* fix two more public APIs to use int32_t for n_threads
* build: set _GNU_SOURCE for Android

---------

Co-authored-by: Max Krasnyansky <quic_maxk@quicinc.com>
Co-authored-by: fmz <quic_fzaghlou@quic.com>
Co-authored-by: Max Krasnyansky <max.krasnyansky@gmail.com>
Co-authored-by: slaren <slarengh@gmail.com>
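For orientation, here is a minimal sketch of how the threadpool API introduced by this commit is wired up. It is distilled from main() in the file below (only calls that appear there are used), with error handling omitted; the wrapper function itself is purely illustrative and not part of the file.

// Illustrative sketch only: condenses the threadpool setup/teardown performed in main() below.
#include "common.h"
#include "llama.h"

static void run_with_threadpool(llama_context * ctx, gpt_params & params) {
    // derive threadpool parameters (thread count, cpumask, priority, polling) from the CLI params
    struct ggml_threadpool_params tpp       = ggml_threadpool_params_from_cpu_params(params.cpuparams);
    struct ggml_threadpool_params tpp_batch = ggml_threadpool_params_from_cpu_params(params.cpuparams_batch);

    // a second (batch) pool is created only when its params differ from the generation pool
    struct ggml_threadpool * threadpool_batch = NULL;
    if (!ggml_threadpool_params_match(&tpp, &tpp_batch)) {
        threadpool_batch = ggml_threadpool_new(&tpp_batch);
        tpp.paused = true; // start the non-batch pool paused; per the commit notes it resumes implicitly on new work
    }
    struct ggml_threadpool * threadpool = ggml_threadpool_new(&tpp);

    // the attached pools are reused for every graph compute instead of spawning threads each time
    llama_attach_threadpool(ctx, threadpool, threadpool_batch);

    // ... run llama_decode(ctx, ...) as usual ...

    ggml_threadpool_free(threadpool);
    ggml_threadpool_free(threadpool_batch);
}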
1035 lines
41 KiB
C++
#include "common.h"

#include "console.h"
#include "llama.h"

#include <cassert>
#include <cinttypes>
#include <cmath>
#include <cstdio>
#include <cstring>
#include <ctime>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

#if defined (__unix__) || (defined (__APPLE__) && defined (__MACH__))
#include <signal.h>
#include <unistd.h>
#elif defined (_WIN32)
#define WIN32_LEAN_AND_MEAN
#ifndef NOMINMAX
#define NOMINMAX
#endif
#include <windows.h>
#include <signal.h>
#endif

#if defined(_MSC_VER)
#pragma warning(disable: 4244 4267) // possible loss of data
#endif

static llama_context ** g_ctx;
static llama_model ** g_model;
static gpt_params * g_params;
static std::vector<llama_token> * g_input_tokens;
static std::ostringstream * g_output_ss;
static std::vector<llama_token> * g_output_tokens;
static bool is_interacting = false;
static bool need_insert_eot = false;

static bool file_exists(const std::string & path) {
    std::ifstream f(path.c_str());
    return f.good();
}

static bool file_is_empty(const std::string & path) {
    std::ifstream f;
    f.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    f.open(path.c_str(), std::ios::in | std::ios::binary | std::ios::ate);
    return f.tellg() == 0;
}

static void write_logfile(
    const llama_context * ctx, const gpt_params & params, const llama_model * model,
    const std::vector<llama_token> & input_tokens, const std::string & output,
    const std::vector<llama_token> & output_tokens
) {
    if (params.logdir.empty()) {
        return;
    }

    const std::string timestamp = string_get_sortable_timestamp();

    const bool success = fs_create_directory_with_parents(params.logdir);
    if (!success) {
        fprintf(stderr, "%s: warning: failed to create logdir %s, cannot write logfile\n",
                __func__, params.logdir.c_str());
        return;
    }

    const std::string logfile_path = params.logdir + timestamp + ".yml";
    FILE * logfile = fopen(logfile_path.c_str(), "w");

    if (logfile == NULL) {
        fprintf(stderr, "%s: failed to open logfile %s\n", __func__, logfile_path.c_str());
        return;
    }

    fprintf(logfile, "binary: main\n");
    char model_desc[128];
    llama_model_desc(model, model_desc, sizeof(model_desc));
    yaml_dump_non_result_info(logfile, params, ctx, timestamp, input_tokens, model_desc);

    fprintf(logfile, "\n");
    fprintf(logfile, "######################\n");
    fprintf(logfile, "# Generation Results #\n");
    fprintf(logfile, "######################\n");
    fprintf(logfile, "\n");

    yaml_dump_string_multiline(logfile, "output", output.c_str());
    yaml_dump_vector_int(logfile, "output_tokens", output_tokens);

    llama_dump_timing_info_yaml(logfile, ctx);
    fclose(logfile);
}

#if defined (__unix__) || (defined (__APPLE__) && defined (__MACH__)) || defined (_WIN32)
static void sigint_handler(int signo) {
    if (signo == SIGINT) {
        if (!is_interacting && g_params->interactive) {
            is_interacting = true;
            need_insert_eot = true;
        } else {
            console::cleanup();
            printf("\n");
            llama_print_timings(*g_ctx);
            write_logfile(*g_ctx, *g_params, *g_model, *g_input_tokens, g_output_ss->str(), *g_output_tokens);
            _exit(130);
        }
    }
}
#endif

static void llama_log_callback_logTee(ggml_log_level level, const char * text, void * user_data) {
    (void) level;
    (void) user_data;
    LOG_TEE("%s", text);
}

static std::string chat_add_and_format(struct llama_model * model, std::vector<llama_chat_msg> & chat_msgs, std::string role, std::string content) {
    llama_chat_msg new_msg{role, content};
    auto formatted = llama_chat_format_single(
        model, g_params->chat_template, chat_msgs, new_msg, role == "user");
    chat_msgs.push_back({role, content});
    LOG("formatted: %s\n", formatted.c_str());
    return formatted;
}

int main(int argc, char ** argv) {
    gpt_params params;
    g_params = &params;

    if (!gpt_params_parse(argc, argv, params)) {
        gpt_params_print_usage(argc, argv, params);
        return 1;
    }

    llama_sampling_params & sparams = params.sparams;

#ifndef LOG_DISABLE_LOGS
    log_set_target(log_filename_generator("main", "log"));
    LOG_TEE("Log start\n");
    log_dump_cmdline(argc, argv);
    llama_log_set(llama_log_callback_logTee, nullptr);
#endif // LOG_DISABLE_LOGS

    // TODO: Dump params ?
    //LOG("Params perplexity: %s\n", LOG_TOSTR(params.perplexity));

    // save choice to use color for later
    // (note for later: this is a slightly awkward choice)
    console::init(params.simple_io, params.use_color);
    atexit([]() { console::cleanup(); });

    if (params.logits_all) {
        printf("\n************\n");
        printf("%s: please use the 'perplexity' tool for perplexity calculations\n", __func__);
        printf("************\n\n");

        return 0;
    }

    if (params.embedding) {
        printf("\n************\n");
        printf("%s: please use the 'embedding' tool for embedding calculations\n", __func__);
        printf("************\n\n");

        return 0;
    }

    if (params.n_ctx != 0 && params.n_ctx < 8) {
        LOG_TEE("%s: warning: minimum context size is 8, using minimum size.\n", __func__);
        params.n_ctx = 8;
    }

    if (params.rope_freq_base != 0.0) {
        LOG_TEE("%s: warning: changing RoPE frequency base to %g.\n", __func__, params.rope_freq_base);
    }

    if (params.rope_freq_scale != 0.0) {
        LOG_TEE("%s: warning: scaling RoPE frequency by %g.\n", __func__, params.rope_freq_scale);
    }

    LOG_TEE("%s: build = %d (%s)\n", __func__, LLAMA_BUILD_NUMBER, LLAMA_COMMIT);
    LOG_TEE("%s: built with %s for %s\n", __func__, LLAMA_COMPILER, LLAMA_BUILD_TARGET);

    if (params.seed == LLAMA_DEFAULT_SEED) {
        params.seed = time(NULL);
    }

    LOG_TEE("%s: seed = %u\n", __func__, params.seed);

    std::mt19937 rng(params.seed);

    LOG("%s: llama backend init\n", __func__);
    llama_backend_init();
    llama_numa_init(params.numa);

    llama_model * model;
    llama_context * ctx;
    llama_context * ctx_guidance = NULL;
    std::vector<llama_chat_msg> chat_msgs;
    g_model = &model;
    g_ctx = &ctx;

    // load the model and apply lora adapter, if any
    LOG("%s: load the model and apply lora adapter, if any\n", __func__);
    llama_init_result llama_init = llama_init_from_gpt_params(params);

    model = llama_init.model;
    ctx = llama_init.context;
    if (sparams.cfg_scale > 1.f) {
        struct llama_context_params lparams = llama_context_params_from_gpt_params(params);
        ctx_guidance = llama_new_context_with_model(model, lparams);
    }

    if (model == NULL) {
        LOG_TEE("%s: error: unable to load model\n", __func__);
        return 1;
    }

    LOG("%s: llama threadpool init = n_threads = %d\n",
        __func__,
        (int) params.cpuparams.n_threads
    );
    struct ggml_threadpool_params tpp_batch =
            ggml_threadpool_params_from_cpu_params(params.cpuparams_batch);
    struct ggml_threadpool_params tpp =
            ggml_threadpool_params_from_cpu_params(params.cpuparams);

    set_process_priority(params.cpuparams.priority);

    struct ggml_threadpool * threadpool_batch = NULL;
    if (!ggml_threadpool_params_match(&tpp, &tpp_batch)) {
        threadpool_batch = ggml_threadpool_new(&tpp_batch);
        if (!threadpool_batch) {
            LOG_TEE("%s: batch threadpool create failed : n_threads %d\n", __func__, tpp_batch.n_threads);
            exit(1);
        }

        // Start the non-batch threadpool in the paused state
        tpp.paused = true;
    }

    struct ggml_threadpool * threadpool = ggml_threadpool_new(&tpp);
    if (!threadpool) {
        LOG_TEE("%s: threadpool create failed : n_threads %d\n", __func__, tpp.n_threads);
        exit(1);
    }

    llama_attach_threadpool(ctx, threadpool, threadpool_batch);
    if (ctx_guidance) {
        llama_attach_threadpool(ctx_guidance, threadpool, threadpool_batch);
    }

    const int n_ctx_train = llama_n_ctx_train(model);
    const int n_ctx = llama_n_ctx(ctx);
    LOG("n_ctx: %d\n", n_ctx);

    if (n_ctx > n_ctx_train) {
        LOG_TEE("%s: warning: model was trained on only %d context tokens (%d specified)\n",
                __func__, n_ctx_train, n_ctx);
    }

    // print chat template example in conversation mode
    if (params.conversation) {
        if (params.enable_chat_template) {
            LOG_TEE("%s: chat template example: %s\n", __func__, llama_chat_format_example(model, params.chat_template).c_str());
        } else {
            LOG_TEE("%s: in-suffix/prefix is specified, chat template will be disabled\n", __func__);
        }
    }

    // print system information
    {
        LOG_TEE("\n");
        LOG_TEE("%s\n", gpt_params_get_system_info(params).c_str());
    }

    std::string path_session = params.path_prompt_cache;
    std::vector<llama_token> session_tokens;

    if (!path_session.empty()) {
        LOG_TEE("%s: attempting to load saved session from '%s'\n", __func__, path_session.c_str());
        if (!file_exists(path_session)) {
            LOG_TEE("%s: session file does not exist, will create.\n", __func__);
        } else if (file_is_empty(path_session)) {
            LOG_TEE("%s: The session file is empty. A new session will be initialized.\n", __func__);
        } else {
            // The file exists and is not empty
            session_tokens.resize(n_ctx);
            size_t n_token_count_out = 0;
            if (!llama_state_load_file(ctx, path_session.c_str(), session_tokens.data(), session_tokens.capacity(), &n_token_count_out)) {
                LOG_TEE("%s: error: failed to load session file '%s'\n", __func__, path_session.c_str());
                return 1;
            }
            session_tokens.resize(n_token_count_out);
            LOG_TEE("%s: loaded a session with prompt size of %d tokens\n", __func__, (int)session_tokens.size());
        }
    }

    const bool add_bos = llama_add_bos_token(model);
    if (!llama_model_has_encoder(model)) {
        GGML_ASSERT(!llama_add_eos_token(model));
    }
    LOG("add_bos: %d\n", add_bos);

    std::vector<llama_token> embd_inp;

    {
        auto prompt = (params.conversation && params.enable_chat_template && !params.prompt.empty())
            ? chat_add_and_format(model, chat_msgs, "system", params.prompt) // format the system prompt in conversation mode
            : params.prompt;
        if (params.interactive_first || !params.prompt.empty() || session_tokens.empty()) {
            LOG("tokenize the prompt\n");
            embd_inp = ::llama_tokenize(ctx, prompt, true, true);
        } else {
            LOG("use session tokens\n");
            embd_inp = session_tokens;
        }

        LOG("prompt: \"%s\"\n", log_tostr(prompt));
        LOG("tokens: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd_inp).c_str());
    }

    // Should not run without any tokens
    if (embd_inp.empty()) {
        if (add_bos) {
            embd_inp.push_back(llama_token_bos(model));
            LOG("embd_inp was considered empty and bos was added: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd_inp).c_str());
        } else {
            LOG_TEE("error: input is empty\n");
            return -1;
        }
    }

    // Tokenize negative prompt
    std::vector<llama_token> guidance_inp;
    int guidance_offset = 0;
    int original_prompt_len = 0;
    if (ctx_guidance) {
        LOG("cfg_negative_prompt: \"%s\"\n", log_tostr(sparams.cfg_negative_prompt));

        guidance_inp = ::llama_tokenize(ctx_guidance, sparams.cfg_negative_prompt, true, true);
        LOG("guidance_inp tokenized: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx_guidance, guidance_inp).c_str());

        std::vector<llama_token> original_inp = ::llama_tokenize(ctx, params.prompt, true, true);
        LOG("original_inp tokenized: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, original_inp).c_str());

        original_prompt_len = original_inp.size();
        guidance_offset = (int)guidance_inp.size() - original_prompt_len;
        LOG("original_prompt_len: %s", log_tostr(original_prompt_len));
        LOG("guidance_offset: %s", log_tostr(guidance_offset));
    }

    if ((int) embd_inp.size() > n_ctx - 4) {
        LOG_TEE("%s: error: prompt is too long (%d tokens, max %d)\n", __func__, (int) embd_inp.size(), n_ctx - 4);
        return 1;
    }

    // debug message about similarity of saved session, if applicable
    size_t n_matching_session_tokens = 0;
    if (!session_tokens.empty()) {
        for (llama_token id : session_tokens) {
            if (n_matching_session_tokens >= embd_inp.size() || id != embd_inp[n_matching_session_tokens]) {
                break;
            }
            n_matching_session_tokens++;
        }
        if (params.prompt.empty() && n_matching_session_tokens == embd_inp.size()) {
            LOG_TEE("%s: using full prompt from session file\n", __func__);
        } else if (n_matching_session_tokens >= embd_inp.size()) {
            LOG_TEE("%s: session file has exact match for prompt!\n", __func__);
        } else if (n_matching_session_tokens < (embd_inp.size() / 2)) {
            LOG_TEE("%s: warning: session file has low similarity to prompt (%zu / %zu tokens); will mostly be reevaluated\n",
                    __func__, n_matching_session_tokens, embd_inp.size());
        } else {
            LOG_TEE("%s: session file matches %zu / %zu tokens of prompt\n",
                    __func__, n_matching_session_tokens, embd_inp.size());
        }

        // remove any "future" tokens that we might have inherited from the previous session
        llama_kv_cache_seq_rm(ctx, -1, n_matching_session_tokens, -1);
    }

    LOGLN(
            "recalculate the cached logits (check): embd_inp.empty() %s, n_matching_session_tokens %zu, embd_inp.size() %zu, session_tokens.size() %zu, embd_inp.size() %zu",
            log_tostr(embd_inp.empty()), n_matching_session_tokens, embd_inp.size(), session_tokens.size(), embd_inp.size());

    // if we will use the cache for the full prompt without reaching the end of the cache, force
    // reevaluation of the last token to recalculate the cached logits
    if (!embd_inp.empty() && n_matching_session_tokens == embd_inp.size() && session_tokens.size() > embd_inp.size()) {
        LOGLN("recalculate the cached logits (do): session_tokens.resize( %zu )", embd_inp.size() - 1);

        session_tokens.resize(embd_inp.size() - 1);
    }

    // number of tokens to keep when resetting context
    if (params.n_keep < 0 || params.n_keep > (int) embd_inp.size()) {
        params.n_keep = (int)embd_inp.size();
    } else {
        params.n_keep += add_bos; // always keep the BOS token
    }

    if (params.conversation) {
        params.interactive_first = true;
    }

    // enable interactive mode if interactive start is specified
    if (params.interactive_first) {
        params.interactive = true;
    }

    if (params.verbose_prompt) {
        LOG_TEE("\n");
        LOG_TEE("%s: prompt: '%s'\n", __func__, params.prompt.c_str());
        LOG_TEE("%s: number of tokens in prompt = %zu\n", __func__, embd_inp.size());
        for (int i = 0; i < (int) embd_inp.size(); i++) {
            LOG_TEE("%6d -> '%s'\n", embd_inp[i], llama_token_to_piece(ctx, embd_inp[i]).c_str());
        }

        if (ctx_guidance) {
            LOG_TEE("\n");
            LOG_TEE("%s: negative prompt: '%s'\n", __func__, sparams.cfg_negative_prompt.c_str());
            LOG_TEE("%s: number of tokens in negative prompt = %zu\n", __func__, guidance_inp.size());
            for (int i = 0; i < (int) guidance_inp.size(); i++) {
                LOG_TEE("%6d -> '%s'\n", guidance_inp[i], llama_token_to_piece(ctx, guidance_inp[i]).c_str());
            }
        }

        if (params.n_keep > add_bos) {
            LOG_TEE("%s: static prompt based on n_keep: '", __func__);
            for (int i = 0; i < params.n_keep; i++) {
                LOG_TEE("%s", llama_token_to_piece(ctx, embd_inp[i]).c_str());
            }
            LOG_TEE("'\n");
        }
        LOG_TEE("\n");
    }

    // ctrl+C handling
    {
#if defined (__unix__) || (defined (__APPLE__) && defined (__MACH__))
        struct sigaction sigint_action;
        sigint_action.sa_handler = sigint_handler;
        sigemptyset (&sigint_action.sa_mask);
        sigint_action.sa_flags = 0;
        sigaction(SIGINT, &sigint_action, NULL);
#elif defined (_WIN32)
        auto console_ctrl_handler = +[](DWORD ctrl_type) -> BOOL {
            return (ctrl_type == CTRL_C_EVENT) ? (sigint_handler(SIGINT), true) : false;
        };
        SetConsoleCtrlHandler(reinterpret_cast<PHANDLER_ROUTINE>(console_ctrl_handler), true);
#endif
    }

    if (params.interactive) {
        LOG_TEE("%s: interactive mode on.\n", __func__);

        if (!params.antiprompt.empty()) {
            for (const auto & antiprompt : params.antiprompt) {
                LOG_TEE("Reverse prompt: '%s'\n", antiprompt.c_str());
                if (params.verbose_prompt) {
                    auto tmp = ::llama_tokenize(ctx, antiprompt, false, true);
                    for (int i = 0; i < (int) tmp.size(); i++) {
                        LOG_TEE("%6d -> '%s'\n", tmp[i], llama_token_to_piece(ctx, tmp[i]).c_str());
                    }
                }
            }
        }

        if (params.input_prefix_bos) {
            LOG_TEE("Input prefix with BOS\n");
        }

        if (!params.input_prefix.empty()) {
            LOG_TEE("Input prefix: '%s'\n", params.input_prefix.c_str());
            if (params.verbose_prompt) {
                auto tmp = ::llama_tokenize(ctx, params.input_prefix, true, true);
                for (int i = 0; i < (int) tmp.size(); i++) {
                    LOG_TEE("%6d -> '%s'\n", tmp[i], llama_token_to_piece(ctx, tmp[i]).c_str());
                }
            }
        }

        if (!params.input_suffix.empty()) {
            LOG_TEE("Input suffix: '%s'\n", params.input_suffix.c_str());
            if (params.verbose_prompt) {
                auto tmp = ::llama_tokenize(ctx, params.input_suffix, false, true);
                for (int i = 0; i < (int) tmp.size(); i++) {
                    LOG_TEE("%6d -> '%s'\n", tmp[i], llama_token_to_piece(ctx, tmp[i]).c_str());
                }
            }
        }
    }
    LOG_TEE("sampling: \n%s\n", llama_sampling_print(sparams).c_str());
    LOG_TEE("sampling order: \n%s\n", llama_sampling_order_print(sparams).c_str());
    LOG_TEE("generate: n_ctx = %d, n_batch = %d, n_predict = %d, n_keep = %d\n", n_ctx, params.n_batch, params.n_predict, params.n_keep);

    // group-attention state
    // number of grouped KV tokens so far (used only if params.grp_attn_n > 1)
    int ga_i = 0;

    const int ga_n = params.grp_attn_n;
    const int ga_w = params.grp_attn_w;

    if (ga_n != 1) {
        GGML_ASSERT(ga_n > 0 && "grp_attn_n must be positive"); // NOLINT
        GGML_ASSERT(ga_w % ga_n == 0 && "grp_attn_w must be a multiple of grp_attn_n"); // NOLINT
        //GGML_ASSERT(n_ctx_train % ga_w == 0 && "n_ctx_train must be a multiple of grp_attn_w"); // NOLINT
        //GGML_ASSERT(n_ctx >= n_ctx_train * ga_n && "n_ctx must be at least n_ctx_train * grp_attn_n"); // NOLINT
        LOG_TEE("self-extend: n_ctx_train = %d, grp_attn_n = %d, grp_attn_w = %d\n", n_ctx_train, ga_n, ga_w);
    }
    LOG_TEE("\n\n");

    if (params.interactive) {
        const char * control_message;
        if (params.multiline_input) {
            control_message = " - To return control to the AI, end your input with '\\'.\n"
                              " - To return control without starting a new line, end your input with '/'.\n";
        } else {
            control_message = " - Press Return to return control to the AI.\n"
                              " - To return control without starting a new line, end your input with '/'.\n"
                              " - If you want to submit another line, end your input with '\\'.\n";
        }
        LOG_TEE("== Running in interactive mode. ==\n");
#if defined (__unix__) || (defined (__APPLE__) && defined (__MACH__)) || defined (_WIN32)
        LOG_TEE( " - Press Ctrl+C to interject at any time.\n");
#endif
        LOG_TEE( "%s\n", control_message);

        is_interacting = params.interactive_first;
    }

    bool is_antiprompt = false;
    bool input_echo = true;
    bool display = true;
    bool need_to_save_session = !path_session.empty() && n_matching_session_tokens < embd_inp.size();

    int n_past = 0;
    int n_remain = params.n_predict;
    int n_consumed = 0;
    int n_session_consumed = 0;
    int n_past_guidance = 0;

    std::vector<int> input_tokens; g_input_tokens = &input_tokens;
    std::vector<int> output_tokens; g_output_tokens = &output_tokens;
    std::ostringstream output_ss; g_output_ss = &output_ss;
    std::ostringstream assistant_ss; // for storing current assistant message, used in conversation mode

    // the first thing we will do is to output the prompt, so set color accordingly
    console::set_display(console::prompt);
    display = params.display_prompt;

    std::vector<llama_token> embd;
    std::vector<llama_token> embd_guidance;

    // tokenized antiprompts
    std::vector<std::vector<llama_token>> antiprompt_ids;

    antiprompt_ids.reserve(params.antiprompt.size());
    for (const std::string & antiprompt : params.antiprompt) {
        antiprompt_ids.emplace_back(::llama_tokenize(ctx, antiprompt, false, true));
    }

    struct llama_sampling_context * ctx_sampling = llama_sampling_init(sparams);
    if (!ctx_sampling) {
        fprintf(stderr, "%s: failed to initialize sampling subsystem\n", __func__);
        exit(1);
    }

    if (llama_model_has_encoder(model)) {
        int enc_input_size = embd_inp.size();
        llama_token * enc_input_buf = embd_inp.data();

        if (llama_encode(ctx, llama_batch_get_one(enc_input_buf, enc_input_size, 0, 0))) {
            LOG_TEE("%s : failed to eval\n", __func__);
            return 1;
        }

        llama_token decoder_start_token_id = llama_model_decoder_start_token(model);
        if (decoder_start_token_id == -1) {
            decoder_start_token_id = llama_token_bos(model);
        }

        embd_inp.clear();
        embd_inp.push_back(decoder_start_token_id);
    }

    while ((n_remain != 0 && !is_antiprompt) || params.interactive) {
        // predict
        if (!embd.empty()) {
            // Note: (n_ctx - 4) here is to match the logic for commandline prompt handling via
            // --prompt or --file which uses the same value.
            int max_embd_size = n_ctx - 4;

            // Ensure the input doesn't exceed the context size by truncating embd if necessary.
            if ((int) embd.size() > max_embd_size) {
                const int skipped_tokens = (int) embd.size() - max_embd_size;
                embd.resize(max_embd_size);

                console::set_display(console::error);
                printf("<<input too long: skipped %d token%s>>", skipped_tokens, skipped_tokens != 1 ? "s" : "");
                console::set_display(console::reset);
                fflush(stdout);
            }

            if (ga_n == 1) {
                // infinite text generation via context shifting
                // if we run out of context:
                // - take the n_keep first tokens from the original prompt (via n_past)
                // - take half of the last (n_ctx - n_keep) tokens and recompute the logits in batches
                if (n_past + (int) embd.size() + std::max<int>(0, guidance_offset) >= n_ctx) {
                    if (params.n_predict == -2) {
                        LOG_TEE("\n\n%s: context full and n_predict == -%d => stopping\n", __func__, params.n_predict);
                        break;
                    }

                    const int n_left = n_past - params.n_keep;
                    const int n_discard = n_left/2;

                    LOG("context full, swapping: n_past = %d, n_left = %d, n_ctx = %d, n_keep = %d, n_discard = %d\n",
                        n_past, n_left, n_ctx, params.n_keep, n_discard);

                    llama_kv_cache_seq_rm (ctx, 0, params.n_keep , params.n_keep + n_discard);
                    llama_kv_cache_seq_add(ctx, 0, params.n_keep + n_discard, n_past, -n_discard);

                    n_past -= n_discard;

                    if (ctx_guidance) {
                        n_past_guidance -= n_discard;
                    }

                    LOG("after swap: n_past = %d, n_past_guidance = %d\n", n_past, n_past_guidance);

                    LOG("embd: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd).c_str());

                    LOG("clear session path\n");
                    path_session.clear();
                }
            } else {
                // context extension via Self-Extend
                while (n_past >= ga_i + ga_w) {
                    const int ib = (ga_n*ga_i)/ga_w;
                    const int bd = (ga_w/ga_n)*(ga_n - 1);
                    const int dd = (ga_w/ga_n) - ib*bd - ga_w;

                    LOG("\n");
                    LOG("shift: [%6d, %6d] + %6d -> [%6d, %6d]\n", ga_i, n_past, ib*bd, ga_i + ib*bd, n_past + ib*bd);
                    LOG("div: [%6d, %6d] / %6d -> [%6d, %6d]\n", ga_i + ib*bd, ga_i + ib*bd + ga_w, ga_n, (ga_i + ib*bd)/ga_n, (ga_i + ib*bd + ga_w)/ga_n);
                    LOG("shift: [%6d, %6d] + %6d -> [%6d, %6d]\n", ga_i + ib*bd + ga_w, n_past + ib*bd, dd, ga_i + ib*bd + ga_w + dd, n_past + ib*bd + dd);

                    llama_kv_cache_seq_add(ctx, 0, ga_i, n_past, ib*bd);
                    llama_kv_cache_seq_div(ctx, 0, ga_i + ib*bd, ga_i + ib*bd + ga_w, ga_n);
                    llama_kv_cache_seq_add(ctx, 0, ga_i + ib*bd + ga_w, n_past + ib*bd, dd);

                    n_past -= bd;

                    ga_i += ga_w/ga_n;

                    LOG("\nn_past_old = %d, n_past = %d, ga_i = %d\n\n", n_past + bd, n_past, ga_i);
                }
            }

            // try to reuse a matching prefix from the loaded session instead of re-eval (via n_past)
            if (n_session_consumed < (int) session_tokens.size()) {
                size_t i = 0;
                for ( ; i < embd.size(); i++) {
                    if (embd[i] != session_tokens[n_session_consumed]) {
                        session_tokens.resize(n_session_consumed);
                        break;
                    }

                    n_past++;
                    n_session_consumed++;

                    if (n_session_consumed >= (int) session_tokens.size()) {
                        ++i;
                        break;
                    }
                }
                if (i > 0) {
                    embd.erase(embd.begin(), embd.begin() + i);
                }
            }

            // evaluate tokens in batches
            // embd is typically prepared beforehand to fit within a batch, but not always
            if (ctx_guidance) {
                int input_size = 0;
                llama_token * input_buf = NULL;

                if (n_past_guidance < (int) guidance_inp.size()) {
                    // Guidance context should have the same data with these modifications:
                    //
                    // * Replace the initial prompt
                    // * Shift everything by guidance_offset
                    embd_guidance = guidance_inp;
                    if (embd.begin() + original_prompt_len < embd.end()) {
                        embd_guidance.insert(
                            embd_guidance.end(),
                            embd.begin() + original_prompt_len,
                            embd.end()
                        );
                    }

                    input_buf = embd_guidance.data();
                    input_size = embd_guidance.size();

                    LOG("guidance context: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd_guidance).c_str());
                } else {
                    input_buf = embd.data();
                    input_size = embd.size();
                }

                for (int i = 0; i < input_size; i += params.n_batch) {
                    int n_eval = std::min(input_size - i, params.n_batch);
                    if (llama_decode(ctx_guidance, llama_batch_get_one(input_buf + i, n_eval, n_past_guidance, 0))) {
                        LOG_TEE("%s : failed to eval\n", __func__);
                        return 1;
                    }

                    n_past_guidance += n_eval;
                }
            }

            for (int i = 0; i < (int) embd.size(); i += params.n_batch) {
                int n_eval = (int) embd.size() - i;
                if (n_eval > params.n_batch) {
                    n_eval = params.n_batch;
                }

                LOG("eval: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, embd).c_str());

                if (llama_decode(ctx, llama_batch_get_one(&embd[i], n_eval, n_past, 0))) {
                    LOG_TEE("%s : failed to eval\n", __func__);
                    return 1;
                }

                n_past += n_eval;

                LOG("n_past = %d\n", n_past);
                // Display total tokens alongside total time
                if (params.n_print > 0 && n_past % params.n_print == 0) {
                    LOG_TEE("\n\033[31mTokens consumed so far = %d / %d \033[0m\n", n_past, n_ctx);
                }
            }

            if (!embd.empty() && !path_session.empty()) {
                session_tokens.insert(session_tokens.end(), embd.begin(), embd.end());
                n_session_consumed = session_tokens.size();
            }
        }

        embd.clear();
        embd_guidance.clear();

        if ((int) embd_inp.size() <= n_consumed && !is_interacting) {
            // optionally save the session on first sample (for faster prompt loading next time)
            if (!path_session.empty() && need_to_save_session && !params.prompt_cache_ro) {
                need_to_save_session = false;
                llama_state_save_file(ctx, path_session.c_str(), session_tokens.data(), session_tokens.size());

                LOG("saved session to %s\n", path_session.c_str());
            }

            const llama_token id = llama_sampling_sample(ctx_sampling, ctx, ctx_guidance);

            llama_sampling_accept(ctx_sampling, ctx, id, /* apply_grammar= */ true);

            LOG("last: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, ctx_sampling->prev).c_str());

            embd.push_back(id);

            // echo this to console
            input_echo = true;

            // decrement remaining sampling budget
            --n_remain;

            LOG("n_remain: %d\n", n_remain);
        } else {
            // some user input remains from prompt or interaction, forward it to processing
            LOG("embd_inp.size(): %d, n_consumed: %d\n", (int) embd_inp.size(), n_consumed);
            while ((int) embd_inp.size() > n_consumed) {
                embd.push_back(embd_inp[n_consumed]);

                // push the prompt in the sampling context in order to apply repetition penalties later
                // for the prompt, we don't apply grammar rules
                llama_sampling_accept(ctx_sampling, ctx, embd_inp[n_consumed], /* apply_grammar= */ false);

                ++n_consumed;
                if ((int) embd.size() >= params.n_batch) {
                    break;
                }
            }
        }

        // display text
        if (input_echo && display) {
            for (auto id : embd) {
                const std::string token_str = llama_token_to_piece(ctx, id, params.special);

                // Console/Stream Output
                fprintf(stdout, "%s", token_str.c_str());

                // Record Displayed Tokens To Log
                // Note: Generated tokens are created one by one hence this check
                if (embd.size() > 1) {
                    // Incoming Requested Tokens
                    input_tokens.push_back(id);
                } else {
                    // Outgoing Generated Tokens
                    output_tokens.push_back(id);
                    output_ss << token_str;
                }

                fflush(stdout);
            }
        }

        // reset color to default if there is no pending user input
        if (input_echo && (int) embd_inp.size() == n_consumed) {
            console::set_display(console::reset);
            display = true;
        }

        // if not currently processing queued inputs;
        if ((int) embd_inp.size() <= n_consumed) {
            // check for reverse prompt in the last n_prev tokens
            if (!params.antiprompt.empty()) {
                const int n_prev = 32;
                const std::string last_output = llama_sampling_prev_str(ctx_sampling, ctx, n_prev);

                is_antiprompt = false;
                // Check if each of the reverse prompts appears at the end of the output.
                // If we're not running interactively, the reverse prompt might be tokenized with some following characters
                // so we'll compensate for that by widening the search window a bit.
                for (std::string & antiprompt : params.antiprompt) {
                    size_t extra_padding = params.interactive ? 0 : 2;
                    size_t search_start_pos = last_output.length() > static_cast<size_t>(antiprompt.length() + extra_padding)
                        ? last_output.length() - static_cast<size_t>(antiprompt.length() + extra_padding)
                        : 0;

                    if (last_output.find(antiprompt, search_start_pos) != std::string::npos) {
                        if (params.interactive) {
                            is_interacting = true;
                        }
                        is_antiprompt = true;
                        break;
                    }
                }

                // check for reverse prompt using special tokens
                llama_token last_token = llama_sampling_last(ctx_sampling);
                for (std::vector<llama_token> ids : antiprompt_ids) {
                    if (ids.size() == 1 && last_token == ids[0]) {
                        if (params.interactive) {
                            is_interacting = true;
                        }
                        is_antiprompt = true;
                        break;
                    }
                }

                if (is_antiprompt) {
                    LOG("found antiprompt: %s\n", last_output.c_str());
                }
            }

            // deal with end of generation tokens in interactive mode
            if (llama_token_is_eog(model, llama_sampling_last(ctx_sampling))) {
                LOG("found an EOG token\n");

                if (params.interactive) {
                    if (!params.antiprompt.empty()) {
                        // tokenize and inject first reverse prompt
                        const auto first_antiprompt = ::llama_tokenize(ctx, params.antiprompt.front(), false, true);
                        embd_inp.insert(embd_inp.end(), first_antiprompt.begin(), first_antiprompt.end());
                        is_antiprompt = true;
                    }

                    if (params.enable_chat_template) {
                        chat_add_and_format(model, chat_msgs, "assistant", assistant_ss.str());
                    }
                    is_interacting = true;
                    printf("\n");
                }
            }

            // if current token is not EOG, we add it to current assistant message
            if (params.conversation) {
                auto id = llama_sampling_last(ctx_sampling);
                assistant_ss << llama_token_to_piece(ctx, id, false);
            }

            if (n_past > 0 && is_interacting) {
                LOG("waiting for user input\n");

                if (params.conversation) {
                    printf("\n> ");
                }

                if (params.input_prefix_bos) {
                    LOG("adding input prefix BOS token\n");
                    embd_inp.push_back(llama_token_bos(model));
                }

                std::string buffer;
                if (!params.input_prefix.empty() && !params.conversation) {
                    LOG("appending input prefix: '%s'\n", params.input_prefix.c_str());
                    printf("%s", params.input_prefix.c_str());
                }

                // color user input only
                console::set_display(console::user_input);
                display = params.display_prompt;

                std::string line;
                bool another_line = true;
                do {
                    another_line = console::readline(line, params.multiline_input);
                    buffer += line;
                } while (another_line);

                // done taking input, reset color
                console::set_display(console::reset);
                display = true;

                // Add tokens to embd only if the input buffer is non-empty
                // Entering an empty line lets the user pass control back
                if (buffer.length() > 1) {
                    // append input suffix if any
                    if (!params.input_suffix.empty() && !params.conversation) {
                        LOG("appending input suffix: '%s'\n", params.input_suffix.c_str());
                        printf("%s", params.input_suffix.c_str());
                    }

                    LOG("buffer: '%s'\n", buffer.c_str());

                    const size_t original_size = embd_inp.size();

                    if (params.escape) {
                        string_process_escapes(buffer);
                    }

                    bool format_chat = params.conversation && params.enable_chat_template;
                    std::string user_inp = format_chat
                        ? chat_add_and_format(model, chat_msgs, "user", std::move(buffer))
                        : std::move(buffer);
                    // TODO: one inconvenience of the current chat template implementation is that we can't distinguish between user input and special tokens (prefix/postfix)
                    const auto line_pfx = ::llama_tokenize(ctx, params.input_prefix, false, true);
                    const auto line_inp = ::llama_tokenize(ctx, user_inp, false, format_chat);
                    const auto line_sfx = ::llama_tokenize(ctx, params.input_suffix, false, true);

                    LOG("input tokens: %s\n", LOG_TOKENS_TOSTR_PRETTY(ctx, line_inp).c_str());

                    // if the user stops generation mid-way, we must add EOT to finish the model's last response
                    if (need_insert_eot && format_chat) {
                        llama_token eot = llama_token_eot(model);
                        embd_inp.push_back(eot == -1 ? llama_token_eos(model) : eot);
                        need_insert_eot = false;
                    }

                    embd_inp.insert(embd_inp.end(), line_pfx.begin(), line_pfx.end());
                    embd_inp.insert(embd_inp.end(), line_inp.begin(), line_inp.end());
                    embd_inp.insert(embd_inp.end(), line_sfx.begin(), line_sfx.end());

                    for (size_t i = original_size; i < embd_inp.size(); ++i) {
                        const llama_token token = embd_inp[i];
                        output_tokens.push_back(token);
                        output_ss << llama_token_to_piece(ctx, token);
                    }

                    // reset assistant message
                    assistant_ss.str("");

                    n_remain -= line_inp.size();
                    LOG("n_remain: %d\n", n_remain);
                } else {
                    LOG("empty line, passing control back\n");
                }

                input_echo = false; // do not echo this again
            }

            if (n_past > 0) {
                if (is_interacting) {
                    llama_sampling_reset(ctx_sampling);
                }
                is_interacting = false;
            }
        }

        // end of generation
        if (!embd.empty() && llama_token_is_eog(model, embd.back()) && !(params.interactive)) {
            LOG_TEE(" [end of text]\n");
            break;
        }

        // In interactive mode, respect the maximum number of tokens and drop back to user input when reached.
        // We skip this logic when n_predict == -1 (infinite) or -2 (stop at context size).
        if (params.interactive && n_remain <= 0 && params.n_predict >= 0) {
            n_remain = params.n_predict;
            is_interacting = true;
        }
    }

    if (!path_session.empty() && params.prompt_cache_all && !params.prompt_cache_ro) {
        LOG_TEE("\n%s: saving final output to session file '%s'\n", __func__, path_session.c_str());
        llama_state_save_file(ctx, path_session.c_str(), session_tokens.data(), session_tokens.size());
    }

    llama_print_timings(ctx);
    write_logfile(ctx, params, model, input_tokens, output_ss.str(), output_tokens);

    if (ctx_guidance) { llama_free(ctx_guidance); }
    llama_free(ctx);
    llama_free_model(model);

    llama_sampling_free(ctx_sampling);
    llama_backend_free();

    ggml_threadpool_free(threadpool);
    ggml_threadpool_free(threadpool_batch);

#ifndef LOG_DISABLE_LOGS
    LOG_TEE("Log end\n");
#endif // LOG_DISABLE_LOGS

    return 0;
}