llama.cpp/tests

Latest commit: HimariO `ba1cb19cdd`
llama : add Qwen2VL support + multimodal RoPE (#10361)
* Barebone Qwen2VL LLM converter

* Add Qwen2VL cli entrypoint

* [WIP] add qwen2vl arch

* Verify m-rope output

* Add vl-rope/2d-rope support for qwen2vl ViT

* update qwen2vl cli tool

* update 5D tensor op workaround

* [WIP] qwen2vl vision model

* make batch and clip utils compatible with qwen2vl

* [WIP] create inference workflow, gguf convert script bug fix

* correct vision-rope behavior, add the missing last layer back to ViT

* add arg parser to qwen2vl_surgery

* replace variable size array with vector

* cuda-gdb cmake preset

* add fp32 mrope, vision rope kernel

* add fp16 support for qwen2vl and m-rope

* add `GGML_ROPE_TYPE_MROPE`, `GGML_ROPE_TYPE_VISION`

* fix rope op mode switching, outdated func args

* update `llama_hparams`

* update to keep up with upstream changes

* resolve linter, test errors

* add makefile entry, update special image padding token

* add mrope unit test, fix a few compiler warnings

* rename `mrope`-related functions, params

* minor updates on debug util, bug fixes

* add `m-rope` testcase to `test-backend-ops`

* Apply suggestions from code review

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* fix trailing whitespace

* store `llama_hparams.rope_sections` with fixed size array

* update position id tensor size check in GGML_OP_ROPE

* minor updates

* update `ggml_backend_*_supports_op` of unsupported backends

* remove old `rope_section` comparison operator

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-12-14 14:43:46 +02:00
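The commit above introduces `GGML_ROPE_TYPE_MROPE` and stores `llama_hparams.rope_sections` as a fixed-size array. As an illustrative sketch only (this is not the ggml kernel, and the function below is a hypothetical helper), Qwen2VL-style multimodal RoPE can be thought of as splitting the rotary dimensions into sections, with each section rotated by a different position component (e.g. temporal, height, width):

```python
# Hypothetical sketch, not the ggml implementation: M-RoPE assigns each
# slice of the rotary dimensions to one component of a multi-part position.

def mrope_dim_to_position(rope_sections, positions):
    """Map each rotary dimension pair to one position component.

    rope_sections: per-section lengths, e.g. for (temporal, height, width)
    positions:     position ids for one token, same order as the sections
    Returns a list where entry i is the position id used for dim pair i.
    """
    out = []
    for section_len, pos in zip(rope_sections, positions):
        out.extend([pos] * section_len)
    return out

# A text token has identical components, so M-RoPE degenerates to
# standard 1D RoPE; an image patch carries distinct h/w coordinates.
text_tok  = mrope_dim_to_position((2, 3, 3), (5, 5, 5))
img_patch = mrope_dim_to_position((2, 3, 3), (5, 1, 2))
```

Under this sketch, `text_tok` uses position 5 for every rotary pair, while `img_patch` rotates its height/width sections by the patch's 2D coordinates, which is what the `test-backend-ops` m-rope testcase mentioned above would need to exercise.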
| File | Last commit | Date |
| --- | --- | --- |
| .gitignore | tests : gitignore ggml-common.h | 2024-03-09 14:17:11 +02:00 |
| CMakeLists.txt | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797) | 2024-12-12 19:02:49 +01:00 |
| get-model.cpp | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| get-model.h | ci : add model tests + script wrapper (#4586) | 2024-01-26 14:18:00 +02:00 |
| run-json-schema-to-grammar.mjs | server : revamp chat UI with vuejs and daisyui (#10175) | 2024-11-07 17:31:10 -04:00 |
| test-arg-parser.cpp | speculative : refactor and add a simpler example (#10362) | 2024-11-25 09:58:41 +02:00 |
| test-autorelease.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-backend-ops.cpp | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| test-barrier.cpp | ggml : move CPU backend to a separate file (#10144) | 2024-11-03 19:34:08 +01:00 |
| test-c.c | Nomic Vulkan backend (#4456) | 2024-01-29 15:50:50 -05:00 |
| test-chat-template.cpp | llama : add enum for built-in chat templates (#10623) | 2024-12-02 22:10:19 +01:00 |
| test-double-float.cpp | ggml : minor naming changes (#8433) | 2024-07-12 10:46:02 +03:00 |
| test-grammar-integration.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-grammar-parser.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-json-schema-to-grammar.cpp | grammar : fix JSON Schema for string regex with top-level alt. (#9903) | 2024-10-16 19:03:24 +03:00 |
| test-llama-grammar.cpp | llama : refactor sampling v2 (#9294) | 2024-09-07 15:16:19 +03:00 |
| test-log.cpp | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| test-lora-conversion-inference.sh | Fix HF repo commit to clone lora test models (#10649) | 2024-12-04 10:45:48 +01:00 |
| test-model-load-cancel.cpp | ggml : add numa options (#5377) | 2024-02-16 11:31:07 +02:00 |
| test-opt.cpp | ggml : inttypes.h -> cinttypes (#0) | 2024-11-17 08:30:29 +02:00 |
| test-quantize-fns.cpp | tests : fix compile warning | 2024-11-25 15:17:32 +02:00 |
| test-quantize-perf.cpp | ggml : inttypes.h -> cinttypes (#0) | 2024-11-17 08:30:29 +02:00 |
| test-rope.cpp | llama : add Qwen2VL support + multimodal RoPE (#10361) | 2024-12-14 14:43:46 +02:00 |
| test-sampling.cpp | ggml : move AMX to the CPU backend (#10570) | 2024-11-29 21:54:58 +01:00 |
| test-tokenizer-0.cpp | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| test-tokenizer-0.py | py : logging and flake8 suppression refactoring (#7081) | 2024-05-05 08:07:48 +03:00 |
| test-tokenizer-0.sh | tests : fix test-tokenizer-0.sh | 2024-05-28 15:04:09 +03:00 |
| test-tokenizer-1-bpe.cpp | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| test-tokenizer-1-spm.cpp | common : use common_ prefix for common library functions (#9805) | 2024-10-10 22:57:42 +02:00 |
| test-tokenizer-random.py | llama : fix pre-tokenization of non-special added tokens (#8228) | 2024-07-13 23:35:10 -04:00 |