readme : update hot topics about new LoRA functionality
commit 7faa7460f0 (parent 5af8e32238)
@@ -9,6 +9,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
+- [Added LoRA support](https://github.com/ggerganov/llama.cpp/pull/820)
 - [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/discussions/915)
 - [Roadmap Apr 2023](https://github.com/ggerganov/llama.cpp/discussions/784)
 
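The linked PR (#820) is what added LoRA adapter loading to llama.cpp. As a rough illustration only, the sketch below shows how a program might apply an adapter through the C API introduced around that time (`llama_apply_lora_from_file`); the model and adapter paths are placeholders, and the setup calls reflect the API of that era rather than the current one.

```cpp
// Minimal sketch, not part of this commit: load a model and apply a LoRA
// adapter using the llama.cpp C API of this period. Paths are placeholders.
#include "llama.h"

#include <cstdio>

int main() {
    llama_context_params params = llama_context_default_params();

    // Load the base model (placeholder path).
    llama_context * ctx = llama_init_from_file("models/7B/ggml-model-f16.bin", params);
    if (ctx == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Apply the LoRA adapter; the third argument may point to a higher-precision
    // base model, or NULL to patch the currently loaded weights. Last arg: threads.
    if (llama_apply_lora_from_file(ctx, "lora/ggml-adapter-model.bin", NULL, 4) != 0) {
        fprintf(stderr, "failed to apply LoRA adapter\n");
        llama_free(ctx);
        return 1;
    }

    // ... run inference as usual ...

    llama_free(ctx);
    return 0;
}
```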