Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2025-01-25 10:58:56 +01:00
Commit 2e666832e6:

* ic
* migrate my early work
* add the belonging stuff: css, favicon etc.
* de prompts
* chore: update HTML meta tags in index.html file
* add api-key css classes
* some necessary fixes
* add API key CSS classes and update styling in style.css
* clean the code
* move API to the top, rearrange param sliders; update css
* add tooltips to the parameters with comprehensible explanations
* fix FloatField and BoolField tooltips
* fix grammar field width
* use template literals for promptFormats.js
* update const ModelGenerationInfo
* remove ms per token, since it is not relevant for most webui users and use cases
* add phi-3 prompt template
* add phi3 to dropdown
* add css class
* update forgotten css theme
* add user message suffix
* fix chatml & add llama3 format
* fix llama3 prompt template
* more prompt format fixes
* add more common stop tokens
* add missing char
* do not separate with new line or comma
* move prompt style
* add hacky llama2 prompt solution, reduce redundancy in promptFormats.js
* fix toggle state localstorage
* add cmd-r prompt and reduce redundancy
* set default prompt to empty
* move files, clean code
* fix css path
* add a button to the new ui
* move new ui to "/public" due to otherwise problematic CORS behaviour
* include new ui in cpp
* fix wrong link to old ui
* renaming to ensure consistency
* fix typos "prompt-format" -> "prompt-formats"
* use correct indent
* add new ui files to makefile
* fix typo
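The commit above mentions moving the webui's prompt formats (chatml, llama3, phi-3, cmd-r) to template literals in promptFormats.js. As an illustration only, a format entry built with template literals might look like the sketch below; the `promptFormats` object shape and the `template` function are assumptions for this example, not the file's actual API.

```javascript
// Hypothetical sketch of prompt-format entries using template literals.
// The object layout and field names are illustrative, not taken from
// the real promptFormats.js.
const promptFormats = {
  chatml: {
    // ChatML wraps each turn in <|im_start|>/<|im_end|> markers.
    template: ({ system, user }) =>
      `<|im_start|>system\n${system}<|im_end|>\n` +
      `<|im_start|>user\n${user}<|im_end|>\n` +
      `<|im_start|>assistant\n`,
  },
  llama3: {
    // Llama 3 uses header tokens around each role name.
    template: ({ system, user }) =>
      `<|start_header_id|>system<|end_header_id|>\n\n${system}<|eot_id|>` +
      `<|start_header_id|>user<|end_header_id|>\n\n${user}<|eot_id|>` +
      `<|start_header_id|>assistant<|end_header_id|>\n\n`,
  },
};

// Example: render a ChatML prompt for one exchange.
const prompt = promptFormats.chatml.template({
  system: "You are a helpful assistant.",
  user: "Hello!",
});
```

Keeping each format as a small object with a single `template` function is one way to reduce the redundancy the commit message refers to: adding a new format is then a matter of adding one entry rather than duplicating string-assembly logic.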
baby-llama
batched
batched-bench
batched.swift
beam-search
benchmark
convert-llama2c-to-ggml
embedding
eval-callback
export-lora
finetune
gbnf-validator
gguf
gguf-split
gritlm
imatrix
infill
jeopardy
llama-bench
llama.android
llama.swiftui
llava
lookahead
lookup
main
main-cmake-pkg
parallel
passkey
perplexity
quantize
quantize-stats
retrieval
rpc
save-load-state
server
simple
speculative
sycl
tokenize
train-text-from-scratch
alpaca.sh
base-translate.sh
chat-13B.bat
chat-13B.sh
chat-persistent.sh
chat-vicuna.sh
chat.sh
CMakeLists.txt
convert-legacy-llama.py
gpt4all.sh
json_schema_to_grammar.py
json-schema-pydantic-example.py
llama2-13b.sh
llama2.sh
llama.vim
llm.vim
Miku.sh
pydantic_models_to_grammar.py
pydantic-models-to-grammar-examples.py
reason-act.sh
regex-to-grammar.py
server-embd.py
server-llama2-13B.sh
ts-type-to-grammar.sh