author | commit | message | date
slaren | 72e7ef4e53 | simple : fixes | 2023-09-26 23:19:36 +02:00
Georgi Gerganov | 8845160058 | simple : add README.md | 2023-09-21 20:10:14 +02:00
Georgi Gerganov | 5a3369d8e8 | llama : llama.h formatting + comments | 2023-09-21 19:51:32 +02:00
Georgi Gerganov | b2debf65f2 | parallel : add disabled experimental batch chunking in powers of two | 2023-09-20 20:14:05 +03:00
Georgi Gerganov | ded9b43cad | parallel : fix cases where the input prompts can overflow the batch | 2023-09-20 19:09:25 +03:00
Georgi Gerganov | ee1d670cc6 | parallel : fix bug (extra BOS) + smaller token_prev array | 2023-09-20 17:32:21 +03:00
slaren | 1be2b8c19b | ggml : revert change to ggml_cpy, add ggml_cont_Nd instead (#3275) [ggml-ci] | 2023-09-20 16:12:51 +03:00
Georgi Gerganov | 2f3a46fccf | train : make KQ_pos memory buffer permanent via dummy scale op | 2023-09-20 14:14:50 +03:00
Georgi Gerganov | 54206962c7 | llama : disable MPI for now [ggml-ci] | 2023-09-20 14:07:29 +03:00
slaren | e04dc51988 | ggml-cuda : add rope f16, restore performance with parallel decoding (#3272) | 2023-09-20 14:00:28 +03:00
    * ggml-cuda : add rope f16, restore performance
    * offload KQ_mask with all models
    * fix rope shift
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Georgi Gerganov | db0fc2da06 | simple : improve comments + free batch | 2023-09-20 13:54:20 +03:00
Georgi Gerganov | b377bf2266 | simple : add parallel decoding support | 2023-09-20 13:06:34 +03:00
Georgi Gerganov | addae65fd4 | llama : improve llama_batch API + simplify parallel example | 2023-09-20 11:03:18 +03:00
Georgi Gerganov | a1327c71c6 | parallel : rename hot-plug to continuous-batching | 2023-09-20 09:24:41 +03:00
Georgi Gerganov | e1067efbfa | llama : fix n_kv to never become 0 | 2023-09-20 09:17:05 +03:00
Georgi Gerganov | 7b7472ee26 | parallel : minor | 2023-09-20 00:35:10 +03:00
Georgi Gerganov | 6028879f56 | parallel : print misses on each request | 2023-09-19 23:50:05 +03:00
Georgi Gerganov | eed3fd4234 | parallel : count cache misses | 2023-09-19 23:47:47 +03:00
Georgi Gerganov | 8a9aca37c1 | parallel : remove question with short answers | 2023-09-19 23:34:30 +03:00
Georgi Gerganov | 4b5f3cd6bf | parallel : process system prompt once + configurable parameters + llama API | 2023-09-19 17:00:42 +03:00
Georgi Gerganov | 82e20e9ba0 | parallel : remove new line from prompt | 2023-09-19 13:54:41 +03:00
Georgi Gerganov | d37081ae5d | llama : silence KV cache errors | 2023-09-19 13:42:59 +03:00
Georgi Gerganov | 16090a5dde | parallel : fix sequence termination criteria | 2023-09-19 13:29:29 +03:00
Georgi Gerganov | 806d397c1a | parallel : try smaller batches when the KV cache is fragmented | 2023-09-19 13:21:36 +03:00
Georgi Gerganov | ddad227782 | llama : fix cell_max logic + rename functions | 2023-09-19 13:21:12 +03:00
Georgi Gerganov | 36714e16d0 | parallel : various improvements | 2023-09-19 12:29:37 +03:00
Georgi Gerganov | 467e307931 | simple : fix token counting | 2023-09-19 11:45:33 +03:00
Georgi Gerganov | 25bd254089 | make : add parallel to build + fix static functions in llama.cpp | 2023-09-19 11:37:02 +03:00
slaren | 7e2b9974d1 | ggml-cuda : update rope implementation for parallel decoding (#3254) | 2023-09-19 11:31:36 +03:00
    * ggml-cuda : update rope implementation for parallel decoding
    * better solution for p0 computation
    * fix rope
    * simpler rope implementation
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Georgi Gerganov | daf4c6d360 | llama : fix worst case graph build | 2023-09-19 11:05:08 +03:00
Georgi Gerganov | fa0e677820 | llama : extend batch API to select which logits to output | 2023-09-19 00:24:13 +03:00
Georgi Gerganov | 897caccdf4 | fixes : speculative KV cache + llama worst-case graph | 2023-09-18 22:32:28 +03:00
Georgi Gerganov | 466b513851 | parallel : disable hot-plug to avoid cache fragmentation | 2023-09-18 21:34:20 +03:00
Georgi Gerganov | 0161372b9a | parallel : example for serving multiple users in parallel | 2023-09-18 20:37:28 +03:00
Georgi Gerganov | 1f17ea631c | speculative : fix KV cache management | 2023-09-18 19:01:20 +03:00
Georgi Gerganov | 7c1bdd0e8a | llama : apply K-cache roping for Falcon and Baichuan | 2023-09-18 18:26:05 +03:00
Georgi Gerganov | 0cbf3bfef8 | llama : add llama_kv_cache_shift_seq + no more context swaps | 2023-09-18 18:10:43 +03:00
Georgi Gerganov | 86c90e34f5 | metal : disable concurrency optimization | 2023-09-18 18:00:01 +03:00
Georgi Gerganov | f015b26689 | llama : more robust cell_max heuristic + wip shift | 2023-09-18 17:15:58 +03:00
Georgi Gerganov | 4d76d762ef | llama : extend llama_kv_cache API | 2023-09-18 15:53:03 +03:00
Georgi Gerganov | 6952a460b9 | llama : add cell_max heuristic for more efficient kv_cache | 2023-09-18 15:31:24 +03:00
Georgi Gerganov | 9f42e75489 | llama : add new llama_decode() API that works with llama_batch | 2023-09-18 14:23:52 +03:00
Georgi Gerganov | 58bb5110ca | Merge branch 'master' into custom-attention-mask | 2023-09-18 11:15:18 +03:00
Georgi Gerganov | d29e76937c | llama : unified KV cache + batch inference API | 2023-09-18 11:08:15 +03:00
Erik Scholz | 7ddf185537 | ci : switch cudatoolkit install on windows to networked (#3236) | 2023-09-18 02:21:47 +02:00
Johannes Gäßler | ee66942d7e | CUDA: fix peer access logic (#3231) | 2023-09-17 23:35:20 +02:00
Georgi Gerganov | fad56936d4 | metal : add rope_f16 kernel + optimize cpy kernels | 2023-09-17 23:39:45 +03:00
Georgi Gerganov | 1fb033fd85 | ggml : ggml_rope now takes a vector with positions instead of n_past | 2023-09-17 21:17:10 +03:00
Georgi Gerganov | 3b4bab6a38 | llama : replace ggml_diag_mask_inf with ggml_add (custom -inf mask) | 2023-09-17 19:42:39 +03:00
Georgi Gerganov | c5df72e848 | tests : verify that RoPE is "additive" | 2023-09-17 17:55:12 +03:00