Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-12-24 13:28:50 +01:00)
Commit 6381d4e110
* gguf : first API pass
* gguf : read header + meta data
* gguf : read tensor info
* gguf : initial model loading - not tested
* gguf : add gguf_get_tensor_name()
* gguf : do not support passing existing ggml_context to gguf_init
* gguf : simplify gguf_get_val
* gguf : gguf.c is now part of ggml.c
* gguf : read / write sample models
* gguf : add comments
* refactor : reduce code duplication and better API (#2415)
* gguf : expose the gguf_type enum through the API for now
* gguf : add array support
* gguf.py : some code style changes
* convert.py : start a new simplified implementation by removing old stuff
* convert.py : remove GGML vocab + other obsolete stuff
* GGUF : write tensor (#2426)
* WIP: Write tensor
* GGUF : Support writing tensors in Python
* refactor : rm unused import and upd todos
* fix : fix errors upd writing example
* rm example.gguf
* gitignore *.gguf
* undo formatting
* gguf : add gguf_find_key (#2438)
* gguf.cpp : find key example
* ggml.h : add gguf_find_key
* ggml.c : add gguf_find_key
* gguf : fix writing tensors
* gguf : do not hardcode tensor names to read
* gguf : write sample tensors to read
* gguf : add tokenization constants
* quick and dirty conversion example
* gguf : fix writing gguf arrays
* gguf : write tensors one by one and code reuse
* gguf : fix writing gguf arrays
* gguf : write tensors one by one
* gguf : write tensors one by one
* gguf : write tokenizer data
* gguf : upd gguf conversion script
* Update convert-llama-h5-to-gguf.py
* gguf : handle already encoded string
* ggml.h : get array str and f32
* ggml.c : get arr str and f32
* gguf.py : support any type
* Update convert-llama-h5-to-gguf.py
* gguf : fix set is not subscriptable
* gguf : update convert-llama-h5-to-gguf.py
* constants.py : add layer norm eps
* gguf.py : add layer norm eps and merges
* ggml.h : increase GGML_MAX_NAME to 64
* ggml.c : add gguf_get_arr_n
* Update convert-llama-h5-to-gguf.py
* add gptneox gguf example
* Makefile : add gptneox gguf example
* Update convert-llama-h5-to-gguf.py
* add gptneox gguf example
* Update convert-llama-h5-to-gguf.py
* Update convert-gptneox-h5-to-gguf.py
* Update convert-gptneox-h5-to-gguf.py
* Update convert-llama-h5-to-gguf.py
* gguf : support custom alignment value
* gguf : fix typo in function call
* gguf : mmap tensor data example
* fix : update convert-llama-h5-to-gguf.py
* Update convert-llama-h5-to-gguf.py
* convert-gptneox-h5-to-gguf.py : Special tokens
* gptneox-main.cpp : special tokens
* Update gptneox-main.cpp
* constants.py : special tokens
* gguf.py : accumulate kv and tensor info data + special tokens
* convert-gptneox-h5-to-gguf.py : accumulate kv and ti + special tokens
* gguf : gguf counterpart of llama-util.h
* gguf-util.h : update note
* convert-llama-h5-to-gguf.py : accumulate kv / ti + special tokens
* convert-llama-h5-to-gguf.py : special tokens
* Delete gptneox-common.cpp
* Delete gptneox-common.h
* convert-gptneox-h5-to-gguf.py : gpt2bpe tokenizer
* gptneox-main.cpp : gpt2 bpe tokenizer
* gpt2 bpe tokenizer (handles merges and unicode)
* Makefile : remove gptneox-common
* gguf.py : bytesarray for gpt2bpe tokenizer
* cmpnct_gpt2bpe.hpp : comments
* gguf.py : use custom alignment if present
* gguf : minor stuff
* Update gptneox-main.cpp
* map tensor names
* convert-gptneox-h5-to-gguf.py : map tensor names
* convert-llama-h5-to-gguf.py : map tensor names
* gptneox-main.cpp : map tensor names
* gguf : start implementing libllama in GGUF (WIP)
* gguf : start implementing libllama in GGUF (WIP)
* rm binary committed by mistake
* upd .gitignore
* gguf : calculate n_mult
* gguf : inference with 7B model working (WIP)
* gguf : rm deprecated function
* gguf : start implementing gguf_file_saver (WIP)
* gguf : start implementing gguf_file_saver (WIP)
* gguf : start implementing gguf_file_saver (WIP)
* gguf : add gguf_get_kv_type
* gguf : add gguf_get_kv_type
* gguf : write metadata in gguf_file_saver (WIP)
* gguf : write metadata in gguf_file_saver (WIP)
* gguf : write metadata in gguf_file_saver
* gguf : rm references to old file formats
* gguf : shorter name for member variable
* gguf : rm redundant method
* gguf : get rid of n_mult, read n_ff from file
* Update gguf_tensor_map.py
* Update gptneox-main.cpp
* gguf : rm references to old file magics
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : start implementing quantization (WIP)
* gguf : quantization is working
* gguf : proper closing of file
* gguf.py : no need to convert tensors twice
* convert-gptneox-h5-to-gguf.py : no need to convert tensors twice
* convert-llama-h5-to-gguf.py : no need to convert tensors twice
* convert-gptneox-h5-to-gguf.py : simplify nbytes
* convert-llama-h5-to-gguf.py : simplify nbytes
* gptneox-main.cpp : n_layer --> n_block
* constants.py : n_layer --> n_block
* gguf.py : n_layer --> n_block
* convert-gptneox-h5-to-gguf.py : n_layer --> n_block
* convert-llama-h5-to-gguf.py : n_layer --> n_block
* gptneox-main.cpp : n_layer --> n_block
* Update gguf_tensor_map.py
* convert-gptneox-h5-to-gguf.py : load model in parts to save memory
* convert-llama-h5-to-gguf.py : load model in parts to save memory
* convert : write more metadata for LLaMA
* convert : rm quantization version
* convert-gptneox-h5-to-gguf.py : add file_type key
* gptneox-main.cpp : add file_type key
* fix conflicts
* gguf : add todos and comments
* convert-gptneox-h5-to-gguf.py : tensor name map changes
* Create gguf_namemap.py : tensor name map changes
* Delete gguf_tensor_map.py
* gptneox-main.cpp : tensor name map changes
* convert-llama-h5-to-gguf.py : fixes
* gguf.py : don't add empty strings
* simple : minor style changes
* gguf : use UNIX line ending
* Create convert-llama-7b-pth-to-gguf.py
* llama : sync gguf-llama.cpp with latest llama.cpp (#2608)
* llama : sync gguf-llama.cpp with latest llama.cpp
* minor : indentation + assert
* llama : refactor gguf_buffer and gguf_ctx_buffer
* llama : minor
* gitignore : add gptneox-main
* llama : tokenizer fixes (#2549)
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* convert : update convert-new.py with tokenizer fixes (#2614)
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* llama : sync gguf-llama with llama (#2613)
* llama : sync gguf-llama with llama
* tests : fix build + warnings (test-tokenizer-1 still fails)
* tests : fix wstring_convert
* convert : fix layer names
* llama : sync gguf-llama.cpp
* convert : update HF converter to new tokenizer voodoo magics
* llama : update tokenizer style
* convert-llama-h5-to-gguf.py : add token types
* constants.py : add token types
* gguf.py : add token types
* convert-llama-7b-pth-to-gguf.py : add token types
* gguf-llama.cpp : fix n_head_kv
* convert-llama-h5-to-gguf.py : add 70b gqa support
* gguf.py : add tensor data layout
* convert-llama-h5-to-gguf.py : add tensor data layout
* convert-llama-7b-pth-to-gguf.py : add tensor data layout
* gptneox-main.cpp : add tensor data layout
* convert-llama-h5-to-gguf.py : clarify the reverse permute
* llama : refactor model loading code (#2620)
* llama : style formatting + remove helper methods
* llama : fix quantization using gguf tool
* llama : simplify gguf_file_saver
* llama : fix method names
* llama : simplify write_header()
* llama : no need to pass full file loader to the file saver - just gguf_ctx
* llama : gguf_file_saver write I32
* llama : refactor tensor names (#2622)
* gguf: update tensor names searched in quantization
* gguf : define tensor names as constants
* gguf : initial write API (not tested yet)
* gguf : write to file API (not tested)
* gguf : initial write API ready + example
* gguf : fix header write
* gguf : fixes + simplify example + add ggml_nbytes_pad()
* gguf : minor
* llama : replace gguf_file_saver with new gguf write API
* gguf : streaming support when writing files
* gguf : remove obsolete write methods
* gguf : remove obsolete gguf_get_arr_xxx API
* llama : simplify gguf_file_loader
* llama : move hparams and vocab from gguf_file_loader to llama_model_loader
* llama : merge gguf-util.h in llama.cpp
* llama : reorder definitions in .cpp to match .h
* llama : minor simplifications
* llama : refactor llama_model_loader (WIP)
wip : remove ggml_ctx from llama_model_loader
wip : merge gguf_file_loader in llama_model_loader
* llama : fix shape prints
* llama : fix Windows build + fix norm_rms_eps key
* llama : throw error on missing KV pairs in model meta data
* llama : improve printing + log meta data
* llama : switch print order of meta data
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
* gguf : deduplicate (#2629)
* gguf : better type names
* dedup : CPU + Metal is working
* ggml : fix warnings about unused results
* llama.cpp : fix line feed and compiler warning
* llama : fix strncpy warning + note token_to_str does not write null
* llama : restore the original load/save session implementation
Will migrate this to GGUF in the future
* convert-llama-h5-to-gguf.py : support alt ctx param name
* ggml : assert when using ggml_mul with non-F32 src1
* examples : dedup simple
---------
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
* gguf.py : merge all files in gguf.py
* convert-new.py : pick #2427 for HF 70B support
* examples/gguf : no need to keep q option for quantization any more
* llama.cpp : print actual model size
* llama.cpp : use ggml_elements()
* convert-new.py : output gguf (#2635)
* convert-new.py : output gguf (WIP)
* convert-new.py : add gguf key-value pairs
* llama : add hparams.ctx_train + no longer print ftype
* convert-new.py : minor fixes
* convert-new.py : vocab-only option should work now
* llama : fix tokenizer to use llama_char_to_byte
* tests : add new ggml-vocab-llama.gguf
* convert-new.py : tensor name mapping
* convert-new.py : add map for skipping tensor serialization
* convert-new.py : convert script now works
* gguf.py : pick some of the refactoring from #2644
* convert-new.py : minor fixes
* convert.py : update to support GGUF output
* Revert "ci : disable CI temporary to not waste energy"
This reverts commit 7e82d25f40.
* convert.py : n_head_kv optional and .gguf file extension
* convert.py : better always have n_head_kv and default it to n_head
* llama : sync with recent PRs on master
* editorconfig : ignore models folder
ggml-ci
* ci : update ".bin" to ".gguf" extension
ggml-ci
* llama : fix llama_model_loader memory leak
* gptneox : move as a WIP example
* llama : fix lambda capture
ggml-ci
* ggml : fix bug in gguf_set_kv
ggml-ci
* common.h : .bin --> .gguf
* quantize-stats.cpp : .bin --> .gguf
* convert.py : fix HF tensor permuting / unpacking
ggml-ci
* llama.cpp : typo
* llama : throw error if gguf fails to init from file
ggml-ci
* llama : fix tensor name grepping during quantization
ggml-ci
* gguf.py : write tensors in a single pass (#2644)
* gguf : single pass for writing tensors + refactoring writer
* gguf : single pass for writing tensors + refactoring writer
* gguf : single pass for writing tensors + refactoring writer
* gguf : style fixes in simple conversion script
* gguf : refactor gptneox conversion script
* gguf : rename h5 to hf (for HuggingFace)
* gguf : refactor pth to gguf conversion script
* gguf : rm file_type key and method
* gguf.py : fix vertical alignment
* gguf.py : indentation
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* convert-gptneox-hf-to-gguf.py : fixes
* gguf.py : gptneox mapping
* convert-llama-hf-to-gguf.py : fixes
* convert-llama-7b-pth-to-gguf.py : fixes
* ggml.h : reverse GGUF_MAGIC
* gguf.py : reverse GGUF_MAGIC
* test-tokenizer-0.cpp : fix warning
* llama.cpp : print kv general.name
* llama.cpp : get special token kv and linefeed token id
* llama : print number of tensors per type + print arch + style
* tests : update vocab file with new magic
* editorconfig : fix whitespaces
* llama : re-order functions
* llama : remove C++ API + reorganize common source in /common dir
* llama : minor API updates
* llama : avoid hardcoded special tokens
* llama : fix MPI build
ggml-ci
* llama : introduce enum llama_vocab_type + remove hardcoded string constants
* convert-falcon-hf-to-gguf.py : falcon HF --> gguf conversion, not tested
* falcon-main.cpp : falcon inference example
* convert-falcon-hf-to-gguf.py : remove extra kv
* convert-gptneox-hf-to-gguf.py : remove extra kv
* convert-llama-7b-pth-to-gguf.py : remove extra kv
* convert-llama-hf-to-gguf.py : remove extra kv
* gguf.py : fix for falcon 40b
* falcon-main.cpp : fix for falcon 40b
* convert-falcon-hf-to-gguf.py : update ref
* convert-falcon-hf-to-gguf.py : add tensor data layout
* cmpnct_gpt2bpe.hpp : fixes
* falcon-main.cpp : fixes
* gptneox-main.cpp : fixes
* cmpnct_gpt2bpe.hpp : remove non-general stuff
* Update examples/server/README.md
Co-authored-by: slaren <slarengh@gmail.com>
* cmpnct_gpt2bpe.hpp : cleanup
* convert-llama-hf-to-gguf.py : special tokens
* convert-llama-7b-pth-to-gguf.py : special tokens
* convert-permute-debug.py : permute debug print
* convert-permute-debug-master.py : permute debug for master
* convert-permute-debug.py : change permute type of attn_q
* convert.py : 70b model working (change attn_q permute)
* Delete convert-permute-debug-master.py
* Delete convert-permute-debug.py
* convert-llama-hf-to-gguf.py : fix attn_q permute
* gguf.py : fix rope scale kv
* convert-llama-hf-to-gguf.py : rope scale and added tokens
* convert-llama-7b-pth-to-gguf.py : rope scale and added tokens
* llama.cpp : use rope scale kv
* convert-llama-7b-pth-to-gguf.py : rope scale fix
* convert-llama-hf-to-gguf.py : rope scale fix
* py : fix whitespace
* gguf : add Python script to convert GGMLv3 LLaMA models to GGUF (#2682)
* First pass at converting GGMLv3 LLaMA models to GGUF
* Cleanups, better output during conversion
* Fix vocab space conversion logic
* More vocab conversion fixes
* Add description to converted GGUF files
* Improve help text, expand warning
* Allow specifying name and description for output GGUF
* Allow overriding vocab and hyperparams from original model metadata
* Use correct params override var name
* Fix wrong type size for Q8_K
Better handling of original style metadata
* Set default value for gguf add_tensor raw_shape KW arg
* llama : improve token type support (#2668)
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* Improved tokenizer test
But does it work on MacOS?
* Improve token type support
- Added @klosax code to convert.py
- Improved token type support in vocabulary
* Exclude platform dependent tests
* More sentencepiece compatibility by eliminating magic numbers
* Restored accidentally removed comment
* llama : add API for token type
ggml-ci
* tests : use new tokenizer type API (#2692)
* Merge tokenizer fixes into the gguf branch.
* Add test vocabularies
* Adapt convert-new.py (and fix a clang-cl compiler error on windows)
* Improved tokenizer test
But does it work on MacOS?
* Improve token type support
- Added @klosax code to convert.py
- Improved token type support in vocabulary
* Exclude platform dependent tests
* More sentencepiece compatibility by eliminating magic numbers
* Restored accidentally removed comment
* Improve commentary
* Use token type API in test-tokenizer-1.cpp
* py : cosmetics
* readme : add notice about new file format
ggml-ci
---------
Co-authored-by: M. Yusuf Sarıgöz <yusufsarigoz@gmail.com>
Co-authored-by: klosax <131523366+klosax@users.noreply.github.com>
Co-authored-by: goerch <jhr.walter@t-online.de>
Co-authored-by: slaren <slarengh@gmail.com>
Co-authored-by: Kerfuffle <44031344+KerfuffleV2@users.noreply.github.com>
llama.h (506 lines, 23 KiB, C++)
#ifndef LLAMA_H
#define LLAMA_H

#include "ggml.h"
#ifdef GGML_USE_CUBLAS
#include "ggml-cuda.h"
#define LLAMA_MAX_DEVICES GGML_CUDA_MAX_DEVICES
#else
#define LLAMA_MAX_DEVICES 1
#endif // GGML_USE_CUBLAS
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>

#ifdef LLAMA_SHARED
#    if defined(_WIN32) && !defined(__MINGW32__)
#        ifdef LLAMA_BUILD
#            define LLAMA_API __declspec(dllexport)
#        else
#            define LLAMA_API __declspec(dllimport)
#        endif
#    else
#        define LLAMA_API __attribute__ ((visibility ("default")))
#    endif
#else
#    define LLAMA_API
#endif

#ifdef __GNUC__
#    define DEPRECATED(func, hint) func __attribute__((deprecated(hint)))
#elif defined(_MSC_VER)
#    define DEPRECATED(func, hint) __declspec(deprecated(hint)) func
#else
#    define DEPRECATED(func, hint) func
#endif

#define LLAMA_DEFAULT_SEED 0xFFFFFFFF

#define LLAMA_FILE_MAGIC_GGSN 0x6767736eu // 'ggsn'

#define LLAMA_SESSION_MAGIC   LLAMA_FILE_MAGIC_GGSN
#define LLAMA_SESSION_VERSION 1

#if defined(GGML_USE_CUBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_METAL)
// Defined when llama.cpp is compiled with support for offloading model layers to GPU.
#define LLAMA_SUPPORTS_GPU_OFFLOAD
#endif

#ifdef __cplusplus
extern "C" {
#endif

    //
    // C interface
    //
    // TODO: show sample usage
    //

    struct llama_model;
    struct llama_context;

    typedef int llama_token;

    enum llama_log_level {
        LLAMA_LOG_LEVEL_ERROR = 2,
        LLAMA_LOG_LEVEL_WARN  = 3,
        LLAMA_LOG_LEVEL_INFO  = 4
    };

    enum llama_vocab_type {
        LLAMA_VOCAB_TYPE_SPM = 0, // SentencePiece
        LLAMA_VOCAB_TYPE_BPE = 1, // Byte Pair Encoding
    };

    enum llama_token_type {
        LLAMA_TOKEN_TYPE_UNDEFINED    = 0,
        LLAMA_TOKEN_TYPE_NORMAL       = 1,
        LLAMA_TOKEN_TYPE_UNKNOWN      = 2,
        LLAMA_TOKEN_TYPE_CONTROL      = 3,
        LLAMA_TOKEN_TYPE_USER_DEFINED = 4,
        LLAMA_TOKEN_TYPE_UNUSED       = 5,
        LLAMA_TOKEN_TYPE_BYTE         = 6,
    };

    // model file types
    enum llama_ftype {
        LLAMA_FTYPE_ALL_F32              = 0,
        LLAMA_FTYPE_MOSTLY_F16           = 1,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q4_0          = 2,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q4_1          = 3,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4,  // tok_embeddings.weight and output.weight are F16
        // LLAMA_FTYPE_MOSTLY_Q4_2       = 5,  // support has been removed
        // LLAMA_FTYPE_MOSTLY_Q4_3       = 6,  // support has been removed
        LLAMA_FTYPE_MOSTLY_Q8_0          = 7,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q5_0          = 8,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q5_1          = 9,  // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q2_K          = 10, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q3_K_S        = 11, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q3_K_M        = 12, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q3_K_L        = 13, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q4_K_S        = 14, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q4_K_M        = 15, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q5_K_S        = 16, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q5_K_M        = 17, // except 1d tensors
        LLAMA_FTYPE_MOSTLY_Q6_K          = 18, // except 1d tensors
    };

    typedef struct llama_token_data {
        llama_token id; // token id
        float logit;    // log-odds of the token
        float p;        // probability of the token
    } llama_token_data;

    typedef struct llama_token_data_array {
        llama_token_data * data;
        size_t size;
        bool sorted;
    } llama_token_data_array;

    typedef void (*llama_progress_callback)(float progress, void *ctx);

    struct llama_context_params {
        uint32_t seed;         // RNG seed, -1 for random
        int32_t  n_ctx;        // text context
        int32_t  n_batch;      // prompt processing batch size
        int32_t  n_gpu_layers; // number of layers to store in VRAM
        int32_t  main_gpu;     // the GPU that is used for scratch and small tensors

        const float * tensor_split; // how to split layers across multiple GPUs (size: LLAMA_MAX_DEVICES)

        // ref: https://github.com/ggerganov/llama.cpp/pull/2054
        float rope_freq_base;  // RoPE base frequency
        float rope_freq_scale; // RoPE frequency scaling factor

        // called with a progress value between 0 and 1, pass NULL to disable
        llama_progress_callback progress_callback;
        // context pointer passed to the progress callback
        void * progress_callback_user_data;

        // Keep the booleans together to avoid misalignment during copy-by-value.
        bool low_vram;   // if true, reduce VRAM usage at the cost of performance
        bool mul_mat_q;  // if true, use experimental mul_mat_q kernels
        bool f16_kv;     // use fp16 for KV cache
        bool logits_all; // the llama_eval() call computes all logits, not just the last one
        bool vocab_only; // only load the vocabulary, no weights
        bool use_mmap;   // use mmap if possible
        bool use_mlock;  // force system to keep model in RAM
        bool embedding;  // embedding mode only
    };

    // Signature for logging events
    // Note that text includes the new line character at the end for most events.
    // If your logging mechanism cannot handle that, check if the last character is '\n' and strip it
    // if it exists.
    // It might not exist for progress report where '.' is output repeatedly.
    typedef void (*llama_log_callback)(enum llama_log_level level, const char * text, void * user_data);

    // model quantization parameters
    typedef struct llama_model_quantize_params {
        int nthread;                 // number of threads to use for quantizing, if <=0 will use std::thread::hardware_concurrency()
        enum llama_ftype ftype;      // quantize to this llama_ftype
        bool allow_requantize;       // allow quantizing non-f32/f16 tensors
        bool quantize_output_tensor; // quantize output.weight
    } llama_model_quantize_params;

    // grammar types
    struct llama_grammar;

    // grammar element type
    enum llama_gretype {
        // end of rule definition
        LLAMA_GRETYPE_END            = 0,

        // start of alternate definition for rule
        LLAMA_GRETYPE_ALT            = 1,

        // non-terminal element: reference to rule
        LLAMA_GRETYPE_RULE_REF       = 2,

        // terminal element: character (code point)
        LLAMA_GRETYPE_CHAR           = 3,

        // inverse char(s) ([^a], [^a-b] [^abc])
        LLAMA_GRETYPE_CHAR_NOT       = 4,

        // modifies a preceding LLAMA_GRETYPE_CHAR or LLAMA_GRETYPE_CHAR_ALT to
        // be an inclusive range ([a-z])
        LLAMA_GRETYPE_CHAR_RNG_UPPER = 5,

        // modifies a preceding LLAMA_GRETYPE_CHAR or
        // LLAMA_GRETYPE_CHAR_RNG_UPPER to add an alternate char to match ([ab], [a-zA])
        LLAMA_GRETYPE_CHAR_ALT       = 6,
    };

    typedef struct llama_grammar_element {
        enum llama_gretype type;
        uint32_t           value; // Unicode code point or rule ID
    } llama_grammar_element;
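
    // Example (an illustrative sketch, not part of the API): a one-rule grammar
    // equivalent to `root ::= [0-9] [0-9]`, encoded as llama_grammar_element arrays
    // for llama_grammar_init() declared below. The rule layout is hypothetical; only
    // the element encoding itself is defined by this header.
    //
    //   const llama_grammar_element root_rule[] = {
    //       { LLAMA_GRETYPE_CHAR,           '0' }, // first digit: lower bound of [0-9]
    //       { LLAMA_GRETYPE_CHAR_RNG_UPPER, '9' }, // ...upper bound of the range
    //       { LLAMA_GRETYPE_CHAR,           '0' }, // second digit
    //       { LLAMA_GRETYPE_CHAR_RNG_UPPER, '9' },
    //       { LLAMA_GRETYPE_END,            0   }, // end of rule definition
    //   };
    //   const llama_grammar_element * rules[] = { root_rule };
    //   struct llama_grammar * grammar = llama_grammar_init(rules, 1, 0);
    //   // ... use with llama_sample_grammar() / llama_grammar_accept_token() ...
    //   llama_grammar_free(grammar);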

    // performance timing information
    struct llama_timings {
        double t_start_ms;
        double t_end_ms;
        double t_load_ms;
        double t_sample_ms;
        double t_p_eval_ms;
        double t_eval_ms;

        int32_t n_sample;
        int32_t n_p_eval;
        int32_t n_eval;
    };

    LLAMA_API struct llama_context_params llama_context_default_params(void);
    LLAMA_API struct llama_model_quantize_params llama_model_quantize_default_params(void);

    // Initialize the llama + ggml backend
    // If numa is true, use NUMA optimizations
    // Call once at the start of the program
    LLAMA_API void llama_backend_init(bool numa);

    // Call once at the end of the program - currently only used for MPI
    LLAMA_API void llama_backend_free(void);

    LLAMA_API struct llama_model * llama_load_model_from_file(
            const char * path_model,
            struct llama_context_params params);

    LLAMA_API void llama_free_model(struct llama_model * model);

    LLAMA_API struct llama_context * llama_new_context_with_model(
            struct llama_model * model,
            struct llama_context_params params);

    // Frees all allocated memory
    LLAMA_API void llama_free(struct llama_context * ctx);
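
    // Example (a minimal lifecycle sketch; "model.gguf" and the n_ctx override are
    // placeholder values, error handling omitted):
    //
    //   llama_backend_init(false /* numa */);
    //
    //   struct llama_context_params params = llama_context_default_params();
    //   params.n_ctx = 2048;
    //
    //   struct llama_model   * model = llama_load_model_from_file("model.gguf", params);
    //   struct llama_context * ctx   = llama_new_context_with_model(model, params);
    //
    //   // ... tokenize / eval / sample ...
    //
    //   llama_free(ctx);
    //   llama_free_model(model);
    //   llama_backend_free();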

    LLAMA_API int64_t llama_time_us(void);

    LLAMA_API int  llama_max_devices    (void);
    LLAMA_API bool llama_mmap_supported (void);
    LLAMA_API bool llama_mlock_supported(void);

    LLAMA_API int llama_n_vocab(const struct llama_context * ctx);
    LLAMA_API int llama_n_ctx  (const struct llama_context * ctx);
    LLAMA_API int llama_n_embd (const struct llama_context * ctx);

    LLAMA_API int llama_model_n_vocab(const struct llama_model * model);
    LLAMA_API int llama_model_n_ctx  (const struct llama_model * model);
    LLAMA_API int llama_model_n_embd (const struct llama_model * model);

    // Get a string describing the model type
    LLAMA_API int llama_model_type(const struct llama_model * model, char * buf, size_t buf_size);

    // Returns 0 on success
    LLAMA_API int llama_model_quantize(
            const char * fname_inp,
            const char * fname_out,
            const llama_model_quantize_params * params);
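
    // Example (sketch; file names are placeholders):
    //
    //   llama_model_quantize_params qparams = llama_model_quantize_default_params();
    //   qparams.ftype = LLAMA_FTYPE_MOSTLY_Q4_0;
    //   if (llama_model_quantize("model-f16.gguf", "model-q4_0.gguf", &qparams) != 0) {
    //       // handle error
    //   }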

    // Apply a LoRA adapter to a loaded model
    // path_base_model is the path to a higher quality model to use as a base for
    // the layers modified by the adapter. Can be NULL to use the current loaded model.
    // The model needs to be reloaded before applying a new adapter, otherwise the adapter
    // will be applied on top of the previous one
    // Returns 0 on success
    LLAMA_API DEPRECATED(int llama_apply_lora_from_file(
            struct llama_context * ctx,
            const char * path_lora,
            const char * path_base_model,
            int n_threads),
            "please use llama_model_apply_lora_from_file instead");

    LLAMA_API int llama_model_apply_lora_from_file(
            const struct llama_model * model,
            const char * path_lora,
            const char * path_base_model,
            int n_threads);

    // Returns the number of tokens in the KV cache
    LLAMA_API int llama_get_kv_cache_token_count(const struct llama_context * ctx);

    // Sets the current rng seed.
    LLAMA_API void llama_set_rng_seed(struct llama_context * ctx, uint32_t seed);

    // Returns the maximum size in bytes of the state (rng, logits, embedding
    // and kv_cache) - will often be smaller after compacting tokens
    LLAMA_API size_t llama_get_state_size(const struct llama_context * ctx);

    // Copies the state to the specified destination address.
    // Destination needs to have allocated enough memory.
    // Returns the number of bytes copied
    LLAMA_API size_t llama_copy_state_data(struct llama_context * ctx, uint8_t * dst);

    // Set the state reading from the specified address
    // Returns the number of bytes read
    LLAMA_API size_t llama_set_state_data(struct llama_context * ctx, uint8_t * src);
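
    // Example (sketch of an in-memory state snapshot; assumes malloc/free from <stdlib.h>):
    //
    //   const size_t state_size = llama_get_state_size(ctx);
    //   uint8_t * state = (uint8_t *) malloc(state_size);
    //   llama_copy_state_data(ctx, state);  // returns the number of bytes copied (<= state_size)
    //   // ... later, restore into a context created from the same model:
    //   llama_set_state_data(ctx, state);
    //   free(state);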

    // Save/load session file
    LLAMA_API bool llama_load_session_file(struct llama_context * ctx, const char * path_session, llama_token * tokens_out, size_t n_token_capacity, size_t * n_token_count_out);
    LLAMA_API bool llama_save_session_file(struct llama_context * ctx, const char * path_session, const llama_token * tokens, size_t n_token_count);

    // Run the llama inference to obtain the logits and probabilities for the next token.
    // tokens + n_tokens is the provided batch of new tokens to process
    // n_past is the number of tokens to use from previous eval calls
    // Returns 0 on success
    LLAMA_API int llama_eval(
            struct llama_context * ctx,
            const llama_token * tokens,
            int n_tokens,
            int n_past,
            int n_threads);
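
    // Example (sketch; `tokens` / `n_tokens` would come from llama_tokenize() below and
    // the thread count is a placeholder):
    //
    //   int n_past = 0;
    //   if (llama_eval(ctx, tokens, n_tokens, n_past, 4 /* n_threads */) != 0) {
    //       // handle error
    //   }
    //   n_past += n_tokens; // logits for this batch are now available via llama_get_logits()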

    // Same as llama_eval, but use float matrix input directly.
    LLAMA_API int llama_eval_embd(
            struct llama_context * ctx,
            const float * embd,
            int n_tokens,
            int n_past,
            int n_threads);

    // Export a static computation graph for context of 511 and batch size of 1
    // NOTE: since this functionality is mostly for debugging and demonstration purposes, we hardcode these
    // parameters here to keep things simple
    // IMPORTANT: do not use for anything else other than debugging and testing!
    LLAMA_API int llama_eval_export(struct llama_context * ctx, const char * fname);

    // Token logits obtained from the last call to llama_eval()
    // The logits for the last token are stored in the last row
    // Can be mutated in order to change the probabilities of the next token
    // Rows: n_tokens
    // Cols: n_vocab
    LLAMA_API float * llama_get_logits(struct llama_context * ctx);
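
    // Example (sketch of turning the logits of the last token into a llama_token_data_array
    // for the sampling functions below; assumes malloc from <stdlib.h>):
    //
    //   const int n_vocab = llama_n_vocab(ctx);
    //   const float * logits = llama_get_logits(ctx); // last row when logits_all == false
    //   llama_token_data * cand = (llama_token_data *) malloc(n_vocab * sizeof(llama_token_data));
    //   for (llama_token i = 0; i < n_vocab; i++) {
    //       cand[i].id    = i;
    //       cand[i].logit = logits[i];
    //       cand[i].p     = 0.0f;
    //   }
    //   llama_token_data_array candidates = { cand, (size_t) n_vocab, false };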

    // Get the embeddings for the input
    // shape: [n_embd] (1-dimensional)
    LLAMA_API float * llama_get_embeddings(struct llama_context * ctx);

    //
    // Vocab
    //

    LLAMA_API const char * llama_token_get_text(const struct llama_context * ctx, llama_token token);

    LLAMA_API float llama_token_get_score(const struct llama_context * ctx, llama_token token);

    LLAMA_API enum llama_token_type llama_token_get_type(const struct llama_context * ctx, llama_token token);

    // Special tokens
    LLAMA_API llama_token llama_token_bos(const struct llama_context * ctx); // beginning-of-sentence
    LLAMA_API llama_token llama_token_eos(const struct llama_context * ctx); // end-of-sentence
    LLAMA_API llama_token llama_token_nl (const struct llama_context * ctx); // next-line

    //
    // Tokenization
    //

    // Convert the provided text into tokens.
    // The tokens pointer must be large enough to hold the resulting tokens.
    // Returns the number of tokens on success, no more than n_max_tokens
    // Returns a negative number on failure - the number of tokens that would have been returned
    LLAMA_API int llama_tokenize(
            struct llama_context * ctx,
            const char * text,
            llama_token * tokens,
            int n_max_tokens,
            bool add_bos);
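
    // Example (sketch of a two-pass call based on the negative return value; assumes the
    // implementation tolerates a zero-sized buffer on the first call and malloc from <stdlib.h>):
    //
    //   int n_tokens = llama_tokenize(ctx, text, NULL, 0, true /* add_bos */);
    //   if (n_tokens < 0) {
    //       n_tokens = -n_tokens; // number of tokens required
    //   }
    //   llama_token * tokens = (llama_token *) malloc(n_tokens * sizeof(llama_token));
    //   n_tokens = llama_tokenize(ctx, text, tokens, n_tokens, true);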

    LLAMA_API int llama_tokenize_bpe(
            struct llama_context * ctx,
            const char * text,
            llama_token * tokens,
            int n_max_tokens,
            bool add_bos);

    LLAMA_API int llama_tokenize_with_model(
            const struct llama_model * model,
            const char * text,
            llama_token * tokens,
            int n_max_tokens,
            bool add_bos);

    // Token Id -> String. Uses the vocabulary in the provided context
    // Does not write null terminator to the buffer
    LLAMA_API int llama_token_to_str(
            const struct llama_context * ctx,
            llama_token token,
            char * buf,
            int length);
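
    // Example (sketch; assumes a non-negative return value is the number of bytes written):
    //
    //   char buf[64];
    //   const int n = llama_token_to_str(ctx, token, buf, sizeof(buf) - 1);
    //   if (n >= 0) {
    //       buf[n] = '\0'; // the call itself does not null-terminate
    //   }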

    LLAMA_API int llama_token_to_str_bpe(
            const struct llama_context * ctx,
            llama_token token,
            char * buf,
            int length);

    LLAMA_API int llama_token_to_str_with_model(
            const struct llama_model * model,
            llama_token token,
            char * buf,
            int length);

    //
    // Grammar
    //

    LLAMA_API struct llama_grammar * llama_grammar_init(
            const llama_grammar_element ** rules,
            size_t n_rules,
            size_t start_rule_index);

    LLAMA_API void llama_grammar_free(struct llama_grammar * grammar);

    //
    // Sampling functions
    //

    /// @details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
    LLAMA_API void llama_sample_repetition_penalty(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens, size_t last_tokens_size, float penalty);

    /// @details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
    LLAMA_API void llama_sample_frequency_and_presence_penalties(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens, size_t last_tokens_size, float alpha_frequency, float alpha_presence);

    /// @details Apply classifier-free guidance to the logits as described in academic paper "Stay on topic with Classifier-Free Guidance" https://arxiv.org/abs/2306.17806
    /// @param candidates A vector of `llama_token_data` containing the candidate tokens, the logits must be directly extracted from the original generation context without being sorted.
    /// @param guidance_ctx A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
    /// @param scale Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
    LLAMA_API void llama_sample_classifier_free_guidance(
            struct llama_context * ctx,
            llama_token_data_array * candidates,
            struct llama_context * guidance_ctx,
            float scale);

    /// @details Sorts candidate tokens by their logits in descending order and calculates probabilities based on logits.
    LLAMA_API void llama_sample_softmax(struct llama_context * ctx, llama_token_data_array * candidates);

    /// @details Top-K sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
    LLAMA_API void llama_sample_top_k(struct llama_context * ctx, llama_token_data_array * candidates, int k, size_t min_keep);

    /// @details Nucleus sampling described in academic paper "The Curious Case of Neural Text Degeneration" https://arxiv.org/abs/1904.09751
    LLAMA_API void llama_sample_top_p(struct llama_context * ctx, llama_token_data_array * candidates, float p, size_t min_keep);

    /// @details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
    LLAMA_API void llama_sample_tail_free(struct llama_context * ctx, llama_token_data_array * candidates, float z, size_t min_keep);

    /// @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
    LLAMA_API void llama_sample_typical(struct llama_context * ctx, llama_token_data_array * candidates, float p, size_t min_keep);
    LLAMA_API void llama_sample_temperature(struct llama_context * ctx, llama_token_data_array * candidates, float temp);

    /// @details Apply constraints from grammar
    LLAMA_API void llama_sample_grammar(struct llama_context * ctx, llama_token_data_array * candidates, const struct llama_grammar * grammar);

    /// @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
    /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
    /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
    /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
    /// @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm.
    /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
    LLAMA_API llama_token llama_sample_token_mirostat(struct llama_context * ctx, llama_token_data_array * candidates, float tau, float eta, int m, float * mu);

    /// @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
    /// @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
    /// @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
    /// @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates.
    /// @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
    LLAMA_API llama_token llama_sample_token_mirostat_v2(struct llama_context * ctx, llama_token_data_array * candidates, float tau, float eta, float * mu);
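
    // Example (sketch of carrying `mu` across sampling steps; tau/eta are placeholder values):
    //
    //   static float mirostat_mu = 2.0f * 5.0f; // initialized to 2 * tau, per the note above
    //   llama_token id = llama_sample_token_mirostat_v2(ctx, &candidates, 5.0f /* tau */, 0.1f /* eta */, &mirostat_mu);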

    /// @details Selects the token with the highest probability.
    LLAMA_API llama_token llama_sample_token_greedy(struct llama_context * ctx, llama_token_data_array * candidates);

    /// @details Randomly selects a token from the candidates based on their probabilities.
    LLAMA_API llama_token llama_sample_token(struct llama_context * ctx, llama_token_data_array * candidates);
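
    // Example (sketch of a common top-k / top-p / temperature pipeline; the cutoff values
    // are placeholders and `candidates` is a llama_token_data_array as sketched above):
    //
    //   llama_sample_top_k      (ctx, &candidates, 40, 1);
    //   llama_sample_top_p      (ctx, &candidates, 0.95f, 1);
    //   llama_sample_temperature(ctx, &candidates, 0.80f);
    //   llama_token id = llama_sample_token(ctx, &candidates);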

    /// @details Accepts the sampled token into the grammar
    LLAMA_API void llama_grammar_accept_token(struct llama_context * ctx, struct llama_grammar * grammar, llama_token token);

    // Performance information
    LLAMA_API struct llama_timings llama_get_timings(struct llama_context * ctx);
    LLAMA_API void llama_print_timings(struct llama_context * ctx);
    LLAMA_API void llama_reset_timings(struct llama_context * ctx);

    // Print system information
    LLAMA_API const char * llama_print_system_info(void);

    // Set callback for all future logging events.
    // If this is not called, or NULL is supplied, everything is output on stderr.
    LLAMA_API void llama_log_set(llama_log_callback log_callback, void * user_data);
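
    // Example (sketch; assumes fputs from <stdio.h>):
    //
    //   static void my_log_callback(enum llama_log_level level, const char * text, void * user_data) {
    //       (void) level; (void) user_data;
    //       fputs(text, stderr); // `text` usually already ends with '\n' (see the note above)
    //   }
    //
    //   llama_log_set(my_log_callback, NULL);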

#ifdef __cplusplus
}
#endif

// Internal API to be implemented by llama.cpp and used by tests/benchmarks only
#ifdef LLAMA_API_INTERNAL

#include <vector>
#include <string>

struct ggml_tensor;

const std::vector<std::pair<std::string, struct ggml_tensor *>>& llama_internal_get_tensor_map(struct llama_context * ctx);

#endif // LLAMA_API_INTERNAL

#endif // LLAMA_H