Commit Graph

3064 Commits

Author SHA1 Message Date
Нияз Гарифзянов
4be04c8965
scripts : add non-interactive server-llm.sh (#5303)
* Update server-llm.sh

Add a --non-interactive flag that allows running the script without asking for permission

* Update scripts/server-llm.sh

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-05 09:43:57 +02:00
chiranko
5d55b0cd82
readme : add CodeShell models to the supported models list (#5330) 2024-02-05 09:41:38 +02:00
AidanBeltonS
4833ac209d
[SYCL] Fix cpy with dims of 3 (#5289)
* Fix cpy with dims of 3

* rm asserts

---------

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-05 12:38:24 +05:30
github-actions[bot]
9392ebd49e flake.lock: Update
Flake lock file updates:

• Updated input 'flake-parts':
    'github:hercules-ci/flake-parts/07f6395285469419cf9d078f59b5b49993198c00' (2024-01-11)
  → 'github:hercules-ci/flake-parts/b253292d9c0a5ead9bc98c4e9a26c6312e27d69f' (2024-02-01)
• Updated input 'flake-parts/nixpkgs-lib':
    'github:NixOS/nixpkgs/b0d36bd0a420ecee3bc916c91886caca87c894e9?dir=lib' (2023-12-30)
  → 'github:NixOS/nixpkgs/97b17f32362e475016f942bbdfda4a4a72a8a652?dir=lib' (2024-01-29)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/ae5c332cbb5827f6b1f02572496b141021de335f' (2024-01-25)
  → 'github:NixOS/nixpkgs/b8b232ae7b8b144397fdb12d20f592e5e7c1a64d' (2024-01-31)
2024-02-04 08:45:35 -08:00
Kawrakow
5ed26e1fc9
Adding some imatrix tools (#5302)
* imatrix: adding --combine and --continue-from

* imatrix: be able to start from a specific chunk

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-04 10:39:58 +02:00
Welby Seely
277fad30c6
cmake : use set() for LLAMA_WIN_VER (#5298)
option() is specifically for booleans.

Fixes #5158
2024-02-03 23:18:51 -05:00
Johannes Gäßler
3c0d25c475
make: add nvcc info print (#5310) 2024-02-03 20:15:13 +01:00
Johannes Gäßler
3cc5ed353c
make: fix nvcc optimization flags for host code (#5309) 2024-02-03 20:14:59 +01:00
Martin Schwaighofer
60ecf099ed add Vulkan support to Nix flake 2024-02-03 13:13:07 -06:00
0cc4m
e920ed393d
Vulkan Intel Fixes, Optimizations and Debugging Flags (#5301)
* Fix Vulkan on Intel ARC

Optimize matmul for Intel ARC

Add Vulkan dequant test

* Add Vulkan debug and validate flags to Make and CMakeLists.txt

* Enable asynchronous transfers in Vulkan backend

* Fix flake8

* Disable Vulkan async backend functions for now

* Also add Vulkan run tests command to Makefile and CMakeLists.txt
2024-02-03 18:15:00 +01:00
Michael Klimenko
52bb63c708
refactor : switch to emplace_back to avoid extra object (#5291) 2024-02-03 13:23:37 +02:00
Jared Van Bortel
1ec3332ade
YaRN : store rope scaling type as int32_t in memory (#5285)
* YaRN : store rope scaling type as int32_t in memory

* llama : store mapped names as const char *
2024-02-03 13:22:06 +02:00
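
For context on the entry above: a plain C enum has no guaranteed width, so holding the value in a fixed int32_t keeps the in-memory layout portable across compilers. A minimal sketch with illustrative names, not llama.cpp's actual structs:

```cpp
#include <cstdint>

enum rope_scaling_type { // as in a C header: no fixed underlying type
    ROPE_SCALING_NONE   = 0,
    ROPE_SCALING_LINEAR = 1,
    ROPE_SCALING_YARN   = 2,
};

struct hparams_sketch {
    int32_t rope_scaling_type_i32 = ROPE_SCALING_NONE; // fixed width in memory
};

// cast back to the enum only at the point of use
static rope_scaling_type get_scaling(const hparams_sketch & hp) {
    return static_cast<rope_scaling_type>(hp.rope_scaling_type_i32);
}
```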
BADR
6a66c5071a
readme : add tenere in the ui tools list (#5284) 2024-02-03 13:20:26 +02:00
AidanBeltonS
a305dba8ff
Fix im2col with 32fp (#5286) 2024-02-03 16:11:37 +08:00
kalomaze
191221178f
perplexity : fix KL divergence calculations on Windows (#5273) 2024-02-02 16:15:30 +02:00
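
For reference, the quantity this entry fixes is the Kullback-Leibler divergence between two models' token distributions, D(P||Q) = sum_i p_i * (log p_i - log q_i). A minimal sketch of the computation, not the perplexity tool's actual code:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// D_KL(P || Q) over two discrete distributions of equal length; entries
// where either probability is zero are skipped for brevity.
static double kl_divergence(const std::vector<double> & p, const std::vector<double> & q) {
    double kl = 0.0;
    for (size_t i = 0; i < p.size(); ++i) {
        if (p[i] > 0.0 && q[i] > 0.0) {
            kl += p[i] * (std::log(p[i]) - std::log(q[i]));
        }
    }
    return kl;
}
```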
Georgi Gerganov
e437b37fd0
scripts : parse wtype in server-llm.sh (#5167)
* scripts : parse wtype in server-llm.sh

* scripts : fix check for wfile
2024-02-02 14:23:40 +02:00
Mirror Azure
2d40085c26
py : add check for '.attn.masked_bias' layers to GPT2model (#5281) 2024-02-02 13:39:09 +02:00
AidanBeltonS
b05102fe8c
Tidy ggml-sycl (#5261)
* Tidy some code in ggml-sycl

* Remove blank space

* Remove std::printf comments

---------

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-02-02 16:39:48 +08:00
Xuan Son Nguyen
6b91b1e0a9
docker : add build for SYCL, Vulkan + update readme (#5228)
* add vulkan dockerfile

* intel dockerfile: compile sycl by default

* fix vulkan dockerfile

* add docs for vulkan

* docs: sycl build in docker

* docs: remove trailing spaces

* docs: sycl: add docker section

* docs: clarify install vulkan SDK outside docker

* sycl: use intel/oneapi-basekit docker image

* docs: correct TOC

* docs: correct docker image for Intel oneMKL
2024-02-02 09:56:31 +02:00
Meng, Hengyu
e805f0fa99
[SYCL] get MAX_MEM_ALLOC from device property (#5270)
* get max alloc size from device prop

* fix macro typo
2024-02-02 15:54:14 +08:00
Neo Zhang Jianyu
af3ba5d946
[SYCL] update guide of SYCL backend (#5254)
* update guide for make installation, memory, gguf model link, rm todo for windows build

* add vs install requirement

* update for gpu device check

* update help of llama-bench

* fix grammar issues
2024-02-02 15:53:27 +08:00
Ian Bull
e1e721094d
llama : fix memory leak in llama_batch_free (#5252)
llama_batch_init allocates memory for a fixed number of tokens. However,
llama_batch_free only frees memory for the number of tokens that were
actually added to the batch.

This change-set uses a null terminated array for the batch seq_id, and
frees all the elements until the nullptr is reached. This change-set
also changes the name of the first parameter from `n_tokens` to
`n_tokens_alloc` to more clearly indicate that this value is the number
of tokens allocated to the batch, not the number of tokens in the batch.
2024-02-02 09:20:13 +02:00
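
A minimal sketch of the null-terminated scheme described above, simplified from the real llama_batch (which also carries token, embd, pos, and logits arrays):

```cpp
#include <cstdint>
#include <cstdlib>

typedef int32_t llama_seq_id;

struct batch_sketch {
    llama_seq_id ** seq_id; // seq_id[i] = sequence ids of token i; last slot is nullptr
};

batch_sketch batch_init_sketch(int32_t n_tokens_alloc, int32_t n_seq_max) {
    batch_sketch b;
    // over-allocate by one pointer so the array can be null terminated
    b.seq_id = (llama_seq_id **) malloc(sizeof(llama_seq_id *) * (n_tokens_alloc + 1));
    for (int32_t i = 0; i < n_tokens_alloc; ++i) {
        b.seq_id[i] = (llama_seq_id *) malloc(sizeof(llama_seq_id) * n_seq_max);
    }
    b.seq_id[n_tokens_alloc] = nullptr; // sentinel consumed by the free path
    return b;
}

void batch_free_sketch(batch_sketch & b) {
    // walk to the sentinel instead of trusting a separately tracked count
    for (int32_t i = 0; b.seq_id[i] != nullptr; ++i) {
        free(b.seq_id[i]);
    }
    free(b.seq_id);
}
```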
Neo Zhang Jianyu
128dcbd3c9
add --no-mmap in llama-bench (#5257)
* add --no-mmap, show sycl backend

* fix conflict

* fix code format, change print for --no-mmap

* ren no_mmap to mmap, show mmap in printer when not the default value

* update guide for mmap

* mv position to reduce model reload
2024-02-01 20:48:53 +01:00
0cc4m
4d0924a890
Vulkan Phi Fix for AMD Proprietary Drivers (#5260)
* Replace tanh to avoid NaN in gelu shader on AMD proprietary driver

* Fix another Vulkan CPY buffer size bug
2024-02-01 19:25:24 +01:00
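
The actual tanh replacement lives in a Vulkan GLSL shader; as a sketch of the general technique, GELU's tanh approximation can be computed with exp instead, since tanh(y) = (1 - e^(-2y)) / (1 + e^(-2y)). The two GELU constants below are ggml's; the clamp bound is an assumption for illustration:

```cpp
#include <cmath>

// GELU tanh-approximation with tanh(y) rewritten via exp, so a driver's
// broken tanh never runs; y is clamped (assumed bound) since tanh saturates.
static float gelu_no_tanh(float x) {
    const float GELU_COEF_A    = 0.044715f;
    const float SQRT_2_OVER_PI = 0.79788456080286535588f;

    float y = SQRT_2_OVER_PI * x * (1.0f + GELU_COEF_A * x * x);
    if (y >  10.0f) y =  10.0f;
    if (y < -10.0f) y = -10.0f;
    const float e = std::exp(-2.0f * y);
    return 0.5f * x * (1.0f + (1.0f - e) / (1.0f + e)); // (1-e)/(1+e) == tanh(y)
}
```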
slaren
8ca511cade
cuda : fix LLAMA_CUDA_F16 (#5262) 2024-02-01 18:30:17 +01:00
Ali Nehzat
d71ac90985
make : generate .a library for static linking (#5205) 2024-02-01 17:18:53 +02:00
Guoteng
ce32060198
llama : support InternLM2 (#5184)
* support InternLM2 inference
* add add_space_prefix KV pair
2024-02-01 11:19:51 +02:00
Eve
1cfb5372cf
Fix broken Vulkan Cmake (properly) (#5230)
* build vulkan as object

* vulkan ci
2024-01-31 20:21:55 +01:00
Georgi Gerganov
d3bac7d584
llama : reorder build_orion() at correct place (#5118) 2024-01-31 18:47:10 +02:00
Georgi Gerganov
5cb04dbc16
llama : remove LLAMA_MAX_DEVICES and LLAMA_SUPPORTS_GPU_OFFLOAD (#5240)
* llama : remove LLAMA_MAX_DEVICES from llama.h

ggml-ci

* Update llama.cpp

Co-authored-by: slaren <slarengh@gmail.com>

* server : remove LLAMA_MAX_DEVICES

ggml-ci

* llama : remove LLAMA_SUPPORTS_GPU_OFFLOAD

ggml-ci

* train : remove LLAMA_SUPPORTS_GPU_OFFLOAD

* readme : add deprecation notice

* readme : change deprecation notice to "remove" and fix url

* llama : remove gpu includes from llama.h

ggml-ci

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-01-31 17:30:17 +02:00
Georgi Gerganov
efb7bdbbd0
metal : add im2col F32 dst support (#5132) 2024-01-31 15:35:41 +02:00
JidongZhang-THU
15606309a0
llava : add MobileVLM support (#5132)
* New Feature:
    1. Sum_Rows:
        fix cuda kernel overflow
        fix block shape error when nrows too big
    2. Im2Col:
        Support Batch in cuda
        Support f32 to f32 both in cpu && cuda
    3. DepthWiseConv:
        Support by Im2Col && MulMat
    4. Pool_2d:
        Support avg pooling in cuda
    5. HardSigmoid:
        Imp in cuda
    6. HardSwish:
        Imp in cuda

* fix tabs instead of spaces

* code clean

* CUDA POOL2D

* ADD POOL2D test case in test-backend-ops.cpp

* code clean

* fix pool2d_kernel

nits

* fix bug in pool2d kernel

* fix avg pooling, count_include_pad

nits

* test-backend-ops : add more pool_2d tests

* cuda : fix warnings and formatting

* ggml : check types in release builds too in pool_2d

* test-backend-ops : remove f16 pool_2d tests

* cuda : more style fixes

* Add assert in ggml_cuda_op_pool2d

* pool2d float padding fallback

* test-backend-ops : add dst_type to im2col

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-01-31 15:10:15 +02:00
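
One of the fixes above concerns count_include_pad in average pooling: whether border windows divide by the full kernel area or only by the in-bounds element count. A scalar CPU reference of the distinction (not the CUDA kernel; assumes pad < k so every window sees at least one element):

```cpp
#include <vector>

std::vector<float> avg_pool2d(const std::vector<float> & src, int W, int H,
                              int k, int stride, int pad, bool count_include_pad) {
    const int OW = (W + 2*pad - k) / stride + 1;
    const int OH = (H + 2*pad - k) / stride + 1;
    std::vector<float> dst(OW * OH, 0.0f);

    for (int oy = 0; oy < OH; ++oy) {
        for (int ox = 0; ox < OW; ++ox) {
            float sum = 0.0f;
            int   cnt = 0; // in-bounds elements only
            for (int ky = 0; ky < k; ++ky) {
                for (int kx = 0; kx < k; ++kx) {
                    const int x = ox*stride + kx - pad;
                    const int y = oy*stride + ky - pad;
                    if (x >= 0 && x < W && y >= 0 && y < H) {
                        sum += src[y*W + x];
                        ++cnt;
                    }
                }
            }
            // count_include_pad: divide by k*k even when part of the window
            // hangs over the padding; otherwise divide by what was summed
            dst[oy*OW + ox] = sum / float(count_include_pad ? k*k : cnt);
        }
    }
    return dst;
}
```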
Neo Zhang Jianyu
b2b9f025e7
format license text, restore Apache license per legal suggestion (#5233) 2024-01-31 18:34:46 +05:30
slaren
dabcc5b471
ggml : limit n_threads to the max n_tasks (#5238) 2024-01-31 13:43:03 +01:00
0cc4m
f8e9140cb4
Vulkan Fixes (#5223)
* Fix Vulkan F16 models

* Fix Vulkan context shift crash

* Add Vulkan to common.cpp dump_non_result_info_yaml function

* Fix bug in Vulkan CPY op

* Fix small matrix multiplication errors on AMD GPUs on Windows or with amdvlk

Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>

---------

Co-authored-by: Engininja2 <139037756+Engininja2@users.noreply.github.com>
2024-01-31 11:44:19 +01:00
Yiming Cui
d62520eb2c
Fix typos of IQ2_XXS and IQ3_XXS in llama.cpp (#5231) 2024-01-30 22:04:21 -05:00
Neo Zhang Jianyu
01684139c3
support SYCL backend windows build (#5208)
* support SYCL backend windows build

* add windows build in CI

* add for win build CI

* correct install oneMKL

* fix install issue

* fix ci

* fix install cmd

* fix install cmd

* fix install cmd

* fix install cmd

* fix install cmd

* fix win build

* fix win build

* fix win build

* restore other CI part

* restore as base

* rm no new line

* fix no new line issue, add -j

* fix grammar issue

* allow to trigger manually, fix format issue

* fix format

* add newline

* fix format

* fix format

* fix format issue

---------

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-01-31 08:08:07 +05:30
Jared Van Bortel
e8dc55d006
kompute : llama-bench support and ggml_cpu_has_kompute() (#5226) 2024-01-30 19:04:37 -05:00
Georgi Gerganov
e0085fdf7c
Revert "server : change deps.sh xxd files to string literals (#5221)"
This reverts commit 4003be0e5f.
2024-01-30 21:19:26 +02:00
Georgi Gerganov
e6f291d158
server : fix context shift (#5195)
* server : fix context shift + simplify self-extend

* server : take system_tokens into account

* server : more n_past fixes

* server : revert n_past_se changes
2024-01-30 20:17:30 +02:00
JohnnyB
4003be0e5f
server : change deps.sh xxd files to string literals (#5221)
* Changed ugly xxd to literals.

HPP files are much more readable as multiline literals than as hex arrays.

* Dashes in literal variable names.

Replace . and - with _ in file names -> variable names.

* Comment on removing xxd.

XXD -> string literals

* XXD to string literals.

Replaced these unreadable headers with string literal versions using new deps.sh.
2024-01-30 20:15:05 +02:00
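
For illustration, the before/after of this change with hypothetical content ("<html>"), since `xxd -i` emits an opaque hex array while a raw string literal keeps the embedded asset readable:

```cpp
// before: what `xxd -i index.html` generates
unsigned char index_html[] = { 0x3c, 0x68, 0x74, 0x6d, 0x6c, 0x3e };
unsigned int  index_html_len = 6;

// after: the string-literal style this change switches to; same bytes,
// but readable and diffable in review
const char index_html_lit[] = R"(<html>)";
```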
Kawrakow
fea4fd4ba7
ggml : fix IQ3_XXS on Metal (#5219)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-01-30 19:15:28 +02:00
Georgi Gerganov
8f8ddfcfad
sync : ggml (#0) 2024-01-30 16:21:57 +02:00
Georgi Gerganov
6fb50ebbf0
gguf : fix comparison (ggml/715)
ggml-ci
2024-01-30 16:20:25 +02:00
John Balis
625a699b54
ggml_cuda_cpy support for 4d tensors and float16->float32 upcasting (ggml/686)
* added cuda float16->float32 upcasting to ggml_cuda_cpy

* added ability to copy 4d tensors with the cuda backend

* added tests for float16->float32 upcast and 4d tensor cuda copies

* added 4d copy test for float32->float16 copy

* applied patch suggested by @iamlemec

* simplify cpy tests

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-01-30 16:20:25 +02:00
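
A scalar CPU reference for the contiguous case of the fp16 to fp32 copy added here (not the CUDA kernel; the half decoder below is a stand-in for ggml's ggml_fp16_to_fp32, with denormals flushed to zero for brevity):

```cpp
#include <cstdint>
#include <cstring>

static float fp16_to_fp32_sketch(uint16_t h) {
    const uint32_t sign = (uint32_t)(h >> 15) << 31;
    const uint32_t exp  = (h >> 10) & 0x1f;
    const uint32_t mant = h & 0x3ff;
    uint32_t bits;
    if (exp == 0) {
        bits = sign;                                       // zero (denormals flushed)
    } else if (exp == 31) {
        bits = sign | 0x7f800000u | (mant << 13);          // inf / nan
    } else {
        bits = sign | ((exp + 112u) << 23) | (mant << 13); // rebias 15 -> 127
    }
    float f;
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}

void cpy_f16_f32_4d(const uint16_t * src, float * dst,
                    int64_t ne0, int64_t ne1, int64_t ne2, int64_t ne3) {
    for (int64_t i3 = 0; i3 < ne3; ++i3)
    for (int64_t i2 = 0; i2 < ne2; ++i2)
    for (int64_t i1 = 0; i1 < ne1; ++i1)
    for (int64_t i0 = 0; i0 < ne0; ++i0) {
        const int64_t i = ((i3*ne2 + i2)*ne1 + i1)*ne0 + i0;
        dst[i] = fp16_to_fp32_sketch(src[i]);
    }
}
```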
Georgi Gerganov
a4b07c057a
gguf : add input validation, prevent integer overflows (ggml/709)
* gguf : add input validation, prevent integer overflows

ggml-ci

* gguf : fix switch default case

* gguf : sanitize info->n_dims and info->type

ggml-ci

* gguf : assert GGUF_TYPE_SIZE access

ggml-ci

* ggml : assert mallocs are successful

ggml-ci

* gguf : prevent integer overflow

* gguf : sanitize tensor info

ggml-ci

* gguf : stricter limit on the number of items

ggml-ci
2024-01-30 16:20:25 +02:00
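
The pattern behind "prevent integer overflow" above is to validate a count-times-size product before it reaches malloc, and to bound the item count so a hostile file cannot demand absurd allocations. A sketch of the idea, not gguf's exact code:

```cpp
#include <cstdlib>
#include <limits>

// returns nullptr instead of letting n * item_size wrap around
static void * malloc_checked(size_t n, size_t item_size, size_t n_max) {
    if (n > n_max) {
        return nullptr; // stricter limit on the number of items
    }
    if (item_size != 0 && n > std::numeric_limits<size_t>::max() / item_size) {
        return nullptr; // n * item_size would overflow size_t
    }
    return malloc(n * item_size); // the commit additionally asserts success
}
```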
Georgi Gerganov
549a1e6cd5
ci : fix yolo URLs + fix metal capture (ggml/712) 2024-01-30 16:20:25 +02:00
Jack Mousseau
5f14ee0b0c
metal : add debug capture backend function (ggml/694)
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-01-30 16:20:25 +02:00
Kawrakow
8e14e3ddb3
Faster AVX2 dot product for IQ2_XS (#5187)
* iq2xs: faster AVX2 dot product

* iq2xs: small AVX2 improvement

* Speed up computing sign bits in AVX2 iq2_xs dot product

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Peter Reid <peter@peterreid.net>
2024-01-30 15:15:07 +02:00
Kawrakow
f4d7e54974
SOTA 3-bit quants (#5196)
* iq3_xxs: quantize/dequantize

RMSE seems a bit high-ish, at about half-way between q2_K and
q3_K, so this needs more checking.

* iq3_xxs: CUDA dequantize works

* iq2_xxs: tuning quantization

* iq3_xxs: starting to look better

PPL on wiki.test.raw
LLaMA-v1-7B: 6.4218
LLaMA-v2-7B: 6.3560
Mistral-7B : 6.0717

This is better than Q3_K_XS, with a 5% reduction in quantized model
size.

* iq3_xxs: CUDA dot product

We have
PP-512: 5891 t/s
TG-128: 143.9 t/s

* iq3_xxs: scalar and AVX2 dot products

* iq3_xxs: ARM_NEON and Metal

Metal performance is decent, ARM_NEON is pathetic

* iq3_xxs: slightly better grid points

* Faster iq3_xxs and iq2_xs dot products on CUDA

* iq3_xxs: add some quant mix

* iq3_xxs: fix failing quantization test

Dot product still fails. Is this real?

* iq3_xxs: hopefully fix ROCm

* iq3_xxs: failing tests

This time the dot product accuracy test did find an actual bug
in the AVX2 implementation.

* Add IQ3_XXS to test-backend-ops

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-01-30 15:14:12 +02:00