Commit Graph

1338 Commits

Author SHA1 Message Date
Georgi Gerganov
6028879f56 parallel : print misses on each request 2023-09-19 23:50:05 +03:00
Georgi Gerganov
eed3fd4234 parallel : count cache misses 2023-09-19 23:47:47 +03:00
Georgi Gerganov
8a9aca37c1 parallel : remove question with short answers 2023-09-19 23:34:30 +03:00
Georgi Gerganov
4b5f3cd6bf parallel : process system prompt once + configurable parameters + llama API 2023-09-19 17:00:42 +03:00
Georgi Gerganov
82e20e9ba0 parallel : remove new line from prompt 2023-09-19 13:54:41 +03:00
Georgi Gerganov
d37081ae5d llama : silence KV cache errors 2023-09-19 13:42:59 +03:00
Georgi Gerganov
16090a5dde parallel : fix sequence termination criteria 2023-09-19 13:29:29 +03:00
Georgi Gerganov
806d397c1a parallel : try smaller batches when the KV cache is fragmented 2023-09-19 13:21:36 +03:00
Georgi Gerganov
ddad227782 llama : fix cell_max logic + rename functions 2023-09-19 13:21:12 +03:00
Georgi Gerganov
36714e16d0 parallel : various improvements 2023-09-19 12:29:37 +03:00
Georgi Gerganov
467e307931 simple : fix token counting 2023-09-19 11:45:33 +03:00
Georgi Gerganov
25bd254089 make : add parallel to build + fix static functions in llama.cpp 2023-09-19 11:37:02 +03:00
slaren
7e2b9974d1 ggml-cuda : update rope implementation for parallel decoding (#3254)
* ggml-cuda : update rope implementation for parallel decoding

* better solution for p0 computation

* fix rope

* simpler rope implementation

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-19 11:31:36 +03:00
Georgi Gerganov
daf4c6d360 llama : fix worst case graph build 2023-09-19 11:05:08 +03:00
Georgi Gerganov
fa0e677820 llama : extend batch API to select which logits to output 2023-09-19 00:24:13 +03:00
Georgi Gerganov
897caccdf4 fixes : speculative KV cache + llama worst-case graph 2023-09-18 22:32:28 +03:00
Georgi Gerganov
466b513851 parallel : disable hot-plug to avoid cache fragmentation 2023-09-18 21:34:20 +03:00
Georgi Gerganov
0161372b9a parallel : example for serving multiple users in parallel 2023-09-18 20:37:28 +03:00
Georgi Gerganov
1f17ea631c speculative : fix KV cache management 2023-09-18 19:01:20 +03:00
Georgi Gerganov
7c1bdd0e8a llama : apply K-cache roping for Falcon and Baichuan 2023-09-18 18:26:05 +03:00
Georgi Gerganov
0cbf3bfef8 llama : add llama_kv_cache_shift_seq + no more context swaps 2023-09-18 18:10:43 +03:00
Georgi Gerganov
86c90e34f5 metal : disable concurrency optimization 2023-09-18 18:00:01 +03:00
Georgi Gerganov
f015b26689 llama : more robust cell_max heuristic + wip shift 2023-09-18 17:15:58 +03:00
Cebtenzzre
8781013ef6 make : restore build-info.h dependency for several targets (#3205) 2023-09-18 10:03:53 -04:00
Georgi Gerganov
4d76d762ef llama : extend llama_kv_cache API 2023-09-18 15:53:03 +03:00
Georgi Gerganov
6952a460b9 llama : add cell_max heuristic for more efficient kv_cache 2023-09-18 15:31:24 +03:00
Georgi Gerganov
9f42e75489 llama : add new llama_decode() API that works with llama_batch 2023-09-18 14:23:52 +03:00
Georgi Gerganov
58bb5110ca Merge branch 'master' into custom-attention-mask 2023-09-18 11:15:18 +03:00
Georgi Gerganov
d29e76937c llama : unified KV cache + batch inference API 2023-09-18 11:08:15 +03:00
Erik Scholz
7ddf185537 ci : switch cudatoolkit install on windows to networked (#3236) 2023-09-18 02:21:47 +02:00
Johannes Gäßler
ee66942d7e CUDA: fix peer access logic (#3231) 2023-09-17 23:35:20 +02:00
Georgi Gerganov
fad56936d4 metal : add rope_f16 kernel + optimize cpy kernels 2023-09-17 23:39:45 +03:00
Georgi Gerganov
1fb033fd85 ggml : ggml_rope now takes a vector with positions instead of n_past 2023-09-17 21:17:10 +03:00
Georgi Gerganov
3b4bab6a38 llama : replace ggml_diag_mask_inf with ggml_add (custom -inf mask) 2023-09-17 19:42:39 +03:00
Georgi Gerganov
c5df72e848 tests : verify that RoPE is "additive" 2023-09-17 17:55:12 +03:00
Johannes Gäßler
111163e246 CUDA: enable peer access between devices (#2470) 2023-09-17 16:37:53 +02:00
slaren
8b428c9bc8 llama.cpp : show model size and BPW on load (#3223) 2023-09-17 14:33:28 +02:00
Johannes Gäßler
578d8c8f5c CUDA: fix scratch malloced on non-main device (#3220) 2023-09-17 14:16:22 +02:00
IsaacDynamo
b541b4f0b1 Enable BUILD_SHARED_LIBS=ON on all Windows builds (#3215) 2023-09-16 19:35:25 +02:00
Vlad
5dbc2b3213 Enable build with CUDA 11.0 (make) (#3132)
* CUDA 11.0 fixes

* Cleaner CUDA/host flags separation

Also renamed GGML_ASSUME into GGML_CUDA_ASSUME
2023-09-16 16:55:43 +02:00
goerch
b08e75baea Fixing the last deviations from sentencepiece indicated by test-tokenizer-1 (#3170)
* Fix for #2721

* Reenable tokenizer test for LLaMa

* Add `console.cpp` dependency

* Fix dependency to `common`

* Fixing wrong fix.

* Make console usage platform specific

Work on compiler warnings.

* Adapting makefile

* Remove trailing whitespace

* Adapting the other parts of the makefile

* Fix typo.

* Fixing the last deviations from sentencepiece indicated by test-tokenizer-1

* Simplify logic

* Add missing change...

* Fix ugly compiler warning

* llama_tokenize should accept strings containing NUL now

* Adding huichen's test case
2023-09-16 13:41:33 +02:00
Cebtenzzre
e6616cf0db examples : add compiler version and target to build info (#2998) 2023-09-15 16:59:49 -04:00
Cebtenzzre
3aefaab9e5 check C++ code with -Wmissing-declarations (#3184) 2023-09-15 15:38:27 -04:00
Cebtenzzre
69eb67e282 fix build numbers by setting fetch-depth=0 (#3197) 2023-09-15 15:18:15 -04:00
Meng Zhang
4fe09dfe66 llama : add support for StarCoder model architectures (#3187)
* add placeholder of starcoder in gguf / llama.cpp

* support convert starcoder weights to gguf

* convert MQA to MHA

* fix ffn_down name

* add LLM_ARCH_STARCODER to llama.cpp

* set head_count_kv = 1

* load starcoder weight

* add max_position_embeddings

* set n_positions to max_position_embeddings

* properly load all starcoder params

* fix head count kv

* fix comments

* fix vram calculation for starcoder

* store mqa directly

* add input embeddings handling

* add TBD

* working on CPU, Metal buggy

* cleanup useless code

* metal : fix out-of-bounds access in soft_max kernels

* llama : make starcoder graph build more consistent with others

* refactor: cleanup comments a bit

* add other starcoder models: 3B, 7B, 15B

* support-mqa-directly

* fix: remove max_position_embeddings, use n_train_ctx

* Update llama.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update llama.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix: switch to space from tab

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2023-09-15 22:02:13 +03:00
Cebtenzzre
80291a1d02 common : do not use GNU zero-length __VA_ARGS__ extension (#3195) 2023-09-15 21:02:01 +03:00
Georgi Gerganov
c6f1491da0 metal : fix bug in soft_max kernels (out-of-bounds access) (#3194) 2023-09-15 20:17:24 +03:00
Cebtenzzre
e3d87a6c36 convert : make ftype optional in simple scripts (#3185) 2023-09-15 12:29:02 -04:00
Georgi Gerganov
8c00b7a6ff sync : ggml (Metal F32 support + reduce ggml-alloc size) (#3192)
* sync : ggml (Metal F32 support + reduce ggml-alloc size)

ggml-ci

* llama-bench : fix ggml_cpu_has_metal() duplicate function

ggml-ci
2023-09-15 19:06:03 +03:00
Engininja2
7e50d34be6 cmake : fix building shared libs for clang (rocm) on windows (#3176) 2023-09-15 15:24:30 +03:00