From 076b08649ecc3b0e1c0709c2a086a63eddd1bf32 Mon Sep 17 00:00:00 2001
From: Georgi Gerganov
Date: Wed, 3 Apr 2024 16:11:15 +0300
Subject: [PATCH] readme : update hot topics

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 1207763fc..affd0b814 100644
--- a/README.md
+++ b/README.md
@@ -18,6 +18,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ### Hot topics
 
+- **MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387**
 - Model sharding instructions using `gguf-split` https://github.com/ggerganov/llama.cpp/discussions/6404
 - Fix major bug in Metal batched inference https://github.com/ggerganov/llama.cpp/pull/6225
 - Multi-GPU pipeline parallelizm support https://github.com/ggerganov/llama.cpp/pull/6017