Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2024-12-26 06:10:29 +01:00
readme : Remove outdated instructions from README.md (#7914) [no ci]
This commit is contained in:
parent: f578b86b21
commit: a55eb1bf0f
@@ -622,9 +622,6 @@ python3 -m pip install -r requirements.txt
 # convert the model to ggml FP16 format
 python3 convert-hf-to-gguf.py models/mymodel/
 
-# [Optional] for models using BPE tokenizers
-python convert-hf-to-gguf.py models/mymodel/ --vocab-type bpe
-
 # quantize the model to 4-bits (using Q4_K_M method)
 ./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
 
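After this change, the README's convert-and-quantize sequence reduces to the commands below. This is a sketch, not a verbatim copy of the updated README: it assumes a llama.cpp checkout with a Hugging Face model placed under models/mymodel/, and the paths are illustrative.

```shell
# install the Python dependencies for the conversion script
python3 -m pip install -r requirements.txt

# convert the HF model to GGUF FP16 format
# (the separate --vocab-type bpe step for BPE-tokenizer models is no longer needed)
python3 convert-hf-to-gguf.py models/mymodel/

# quantize the FP16 model to 4 bits using the Q4_K_M method
./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
```

The Q4_K_M type in the last argument selects the quantization scheme; the output filename is chosen by convention to mirror it, but any path may be given.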