Mirror of https://github.com/ggerganov/llama.cpp.git (synced 2024-11-01 07:30:17 +01:00)
1c641e6aac
* `main`/`server`: rename to `llama` / `llama-server` for consistency w/ homebrew
* server: update refs -> llama-server; gitignore llama-server
* server: simplify nix package
* main: update refs -> llama; fix examples/main ref
* main/server: fix targets
* update more names
* Update build.yml
* rm accidentally checked-in bins
* update straggling refs
* Update .gitignore
* Update server-llm.sh
* main: target name -> llama-cli
* Prefix all example bins w/ llama-
* fix main refs
* rename {main->llama}-cmake-pkg binary
* prefix more cmake targets w/ llama-
* add/fix gbnf-validator subfolder to cmake
* sort cmake example subdirs
* rm bin files
* fix llama-lookup-* Makefile rules
* gitignore /llama-*
* rename Dockerfiles
* rename llama|main -> llama-cli; consistent RPM bin prefixes
* fix some missing -cli suffixes
* rename dockerfile w/ llama-cli
* rename(make): llama-baby-llama
* update dockerfile refs
* more llama-cli(.exe)
* fix test-eval-callback
* rename: llama-cli-cmake-pkg(.exe)
* address gbnf-validator unused fread warning (switched to C++ / ifstream)
* add two missing llama- prefixes
* Updating docs for eval-callback binary to use new `llama-` prefix.
* Updating a few lingering doc references for rename of main to llama-cli
* Updating `run-with-preset.py` to use new binary names.
* Updating docs around `perplexity` binary rename.
* Updating documentation references for lookup-merge and export-lora
* Updating two small `main` references missed earlier in the finetune docs.
* Update apps.nix
* update grammar/README.md w/ new llama-* names
* update llama-rpc-server bin name + doc
* Revert "update llama-rpc-server bin name + doc"
This reverts commit e474ef1df4.
* add hot topic notice to README.md
* Update README.md
* Update README.md
* rename gguf-split & quantize bins refs in **/tests.sh
---------
Co-authored-by: HanClinto <hanclinto@gmail.com>
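
In practice, the rename means invoking the `llama-`-prefixed binaries instead of the old bare names. A minimal before/after sketch, assuming a typical model path and flags (illustrative only, not taken from this commit):

    # before: old binary names
    ./main   -m models/7B/model.gguf -p "Hello"     # interactive CLI
    ./server -m models/7B/model.gguf --port 8080    # HTTP server

    # after: consistent llama- prefix (`main` ultimately became llama-cli)
    ./llama-cli    -m models/7B/model.gguf -p "Hello"
    ./llama-server -m models/7B/model.gguf --port 8080

The same prefix is applied to the CMake/Makefile targets and the Dockerfiles listed above, so downstream scripts need only a one-for-one name substitution.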
51 lines | 1.7 KiB | YAML
name: Low Severity Bugs
description: Used to report low severity bugs in llama.cpp (e.g. cosmetic issues, non critical UI glitches)
title: "Bug: "
labels: ["bug-unconfirmed", "low severity"]
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
        Please include information about your system, the steps to reproduce the bug,
        and the version of llama.cpp that you are using.
        If possible, please provide a minimal code example that reproduces the bug.
  - type: textarea
    id: what-happened
    attributes:
      label: What happened?
      description: Also tell us, what did you expect to happen?
      placeholder: Tell us what you see!
    validations:
      required: true
  - type: textarea
    id: version
    attributes:
      label: Name and Version
      description: Which executable and which version of our software are you running? (use `--version` to get a version string)
      placeholder: |
        $./llama-cli --version
        version: 2999 (42b4109e)
        built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
    validations:
      required: true
  - type: dropdown
    id: operating-system
    attributes:
      label: What operating system are you seeing the problem on?
      multiple: true
      options:
        - Linux
        - Mac
        - Windows
        - BSD
        - Other? (Please let us know in description)
    validations:
      required: false
  - type: textarea
    id: logs
    attributes:
      label: Relevant log output
      description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
      render: shell
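
When filling out the Name and Version field above, the version string comes straight from the renamed binary; a short hedged example (output values are illustrative):

    # print the version string the template asks for
    ./llama-cli --version

    # OS details for the dropdown/description (standard POSIX command)
    uname -a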