Mirror of https://github.com/oobabooga/text-generation-webui.git, synced 2024-11-22 08:07:56 +01:00
Update LLaMA-model.md (#2460)

Better approach to converting the LLaMA model

commit 084b006cfe (parent eb2601a8c3)
@@ -30,7 +30,15 @@ pip install protobuf==3.20.1
 2. Use the script below to convert the model in `.pth` format that you, a fellow academic, downloaded using Meta's official link:
 
-### [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py)
+### Convert LLaMA to HuggingFace format
+
+If you already have `transformers` installed:
+
+```
+python -m transformers.models.llama.convert_llama_weights_to_hf --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
+```
+
+Otherwise, download the script [convert_llama_weights_to_hf.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py) and run it directly:
 
 ```
 python convert_llama_weights_to_hf.py --input_dir /path/to/LLaMA --model_size 7B --output_dir /tmp/outputs/llama-7b
 ```
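Either invocation writes a HuggingFace-format model to the `--output_dir`. As a minimal sketch of how one might verify the result (the `looks_like_hf_llama` helper and the `/tmp/outputs/llama-7b` path are illustrative assumptions, not part of the conversion script), a converted directory should contain the HF config and tokenizer files alongside the weight shards:

```python
import os

def looks_like_hf_llama(output_dir):
    """Return True if output_dir contains the core files a
    HuggingFace LLaMA export is expected to produce."""
    # Hypothetical check: these two files are written by the converter
    # in addition to the sharded weight files.
    expected = {"config.json", "tokenizer_config.json"}
    present = set(os.listdir(output_dir)) if os.path.isdir(output_dir) else set()
    return expected.issubset(present)

# Example: point this at your actual --output_dir after conversion.
looks_like_hf_llama("/tmp/outputs/llama-7b")
```

If the check passes, the directory can be passed to `from_pretrained` like any other HuggingFace model path.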