Update hot topics to mention Alpaca support
commit 160bfb217d
parent c494ed5b94
@@ -7,7 +7,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
-- RMSNorm implementation / fixes: https://github.com/ggerganov/llama.cpp/issues/173
+- [Added Alpaca support](https://github.com/ggerganov/llama.cpp#instruction-mode-with-alpaca)
 - Cache input prompts for faster initialization: https://github.com/ggerganov/llama.cpp/issues/64
 - Create a `llama.cpp` logo: https://github.com/ggerganov/llama.cpp/issues/105
 