Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-27 06:39:25 +01:00
readme : add LLMUnity to UI projects (#9381)

* add LLMUnity to UI projects
* add newline to examples/rpc/README.md to fix editorconfig-checker unit test
parent 54f376d0b9
commit 5ed087573e
README.md:

@@ -163,6 +163,7 @@ Unless otherwise noted these projects are open-source with permissive licensing:
 - [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
 - [AIKit](https://github.com/sozercan/aikit) (MIT)
 - [LARS - The LLM & Advanced Referencing Solution](https://github.com/abgulati/LARS) (AGPL)
+- [LLMUnity](https://github.com/undreamai/LLMUnity) (MIT)

 *(to have a project listed here, it should clearly state that it depends on `llama.cpp`)*
examples/rpc/README.md:

@@ -71,3 +71,4 @@ $ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name
 ```

 This way you can offload model layers to both local and remote devices.
+
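The local/remote offloading mentioned in the rpc README context line above can be sketched roughly as follows. This is a hedged illustration, not part of the commit: the host addresses, port, layer count, and model path are placeholders, and it assumes an `rpc-server` instance is already listening on each remote machine.

```shell
# On each remote machine, start an RPC backend (placeholder port):
#   bin/rpc-server -p 50052

# Locally, point llama-cli at the remote backends with --rpc and request
# GPU offload with -ngl; layers are then split across the local device
# and the listed remote devices.
bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf \
  -p "Hello, my name is" \
  --rpc 192.168.88.10:50052,192.168.88.11:50052 \
  -ngl 99
```

The IPs `192.168.88.10`/`192.168.88.11` are hypothetical; substitute the hosts where your RPC servers actually run.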