mirror of https://github.com/ggerganov/llama.cpp.git
synced 2024-12-25 13:58:46 +01:00

chore: Fix markdown warnings (#6625)
commit 5c4d767ac0, parent ef21ce4ccb
@@ -8,9 +8,9 @@
 - [Linux](#linux)
 - [Windows](#windows)
 - [Environment Variable](#environment-variable)
-- [Known Issue](#known-issue)
-- [Q&A](#q&a)
-- [Todo](#todo)
+- [Known Issue](#known-issues)
+- [Q&A](#qa)
+- [TODO](#todo)

 ## Background

@@ -54,10 +54,10 @@ It has the similar design of other llama.cpp BLAS-based paths such as *OpenBLAS,

 ## OS

-|OS|Status|Verified|
-|-|-|-|
-|Linux|Support|Ubuntu 22.04, Fedora Silverblue 39|
-|Windows|Support|Windows 11|
+| OS | Status | Verified |
+|---------|---------|------------------------------------|
+| Linux | Support | Ubuntu 22.04, Fedora Silverblue 39 |
+| Windows | Support | Windows 11 |


 ## Hardware
@@ -66,13 +66,13 @@ It has the similar design of other llama.cpp BLAS-based paths such as *OpenBLAS,

 **Verified devices**

-|Intel GPU| Status | Verified Model|
-|-|-|-|
-|Intel Data Center Max Series| Support| Max 1550|
-|Intel Data Center Flex Series| Support| Flex 170|
-|Intel Arc Series| Support| Arc 770, 730M|
-|Intel built-in Arc GPU| Support| built-in Arc GPU in Meteor Lake|
-|Intel iGPU| Support| iGPU in i5-1250P, i7-1260P, i7-1165G7|
+| Intel GPU | Status | Verified Model |
+|-------------------------------|---------|---------------------------------------|
+| Intel Data Center Max Series | Support | Max 1550 |
+| Intel Data Center Flex Series | Support | Flex 170 |
+| Intel Arc Series | Support | Arc 770, 730M |
+| Intel built-in Arc GPU | Support | built-in Arc GPU in Meteor Lake |
+| Intel iGPU | Support | iGPU in i5-1250P, i7-1260P, i7-1165G7 |

 *Notes:*

@@ -89,10 +89,10 @@ The BLAS acceleration on Nvidia GPU through oneAPI can be obtained using the Nvi

 **Verified devices**

-|Nvidia GPU| Status | Verified Model|
-|-|-|-|
-|Ampere Series| Support| A100, A4000|
-|Ampere Series *(Mobile)*| Support| RTX 40 Series|
+| Nvidia GPU | Status | Verified Model |
+|--------------------------|---------|----------------|
+| Ampere Series | Support | A100, A4000 |
+| Ampere Series *(Mobile)* | Support | RTX 40 Series |

 *Notes:*
 - Support for Nvidia targets through oneAPI is currently limited to Linux platforms.
@@ -167,7 +167,7 @@ Platform #0: Intel(R) OpenCL HD Graphics

 - **Nvidia GPU**

-In order to target Nvidia GPUs through SYCL, please make sure the CUDA/CUBLAS native requirements *-found [here](README.md#cublas)-* are installed.
+In order to target Nvidia GPUs through SYCL, please make sure the CUDA/CUBLAS native requirements *-found [here](README.md#cuda)-* are installed.
 Installation can be verified by running the following:
 ```sh
 nvidia-smi
@@ -313,10 +313,10 @@ found 6 SYCL devices:
 | 5| [opencl:acc:0]| Intel(R) FPGA Emulation Device| 1.2| 24|67108864| 64| 67064815616|
 ```

-|Attribute|Note|
-|-|-|
-|compute capability 1.3|Level-zero driver/runtime, recommended |
-|compute capability 3.0|OpenCL driver/runtime, slower than level-zero in most cases|
+| Attribute | Note |
+|------------------------|-------------------------------------------------------------|
+| compute capability 1.3 | Level-zero driver/runtime, recommended |
+| compute capability 3.0 | OpenCL driver/runtime, slower than level-zero in most cases |

 4. Launch inference

@@ -325,10 +325,10 @@ There are two device selection modes:
 - Single device: Use one device target specified by the user.
 - Multiple devices: Automatically select the devices with the same largest Max compute-units.

-|Device selection|Parameter|
-|-|-|
-|Single device|--split-mode none --main-gpu DEVICE_ID |
-|Multiple devices|--split-mode layer (default)|
+| Device selection | Parameter |
+|------------------|----------------------------------------|
+| Single device | --split-mode none --main-gpu DEVICE_ID |
+| Multiple devices | --split-mode layer (default) |

 Examples:

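To make the selection table above concrete, here is a hedged sketch of both modes; the binary path, model file, and `-ngl` value are illustrative assumptions, not part of this diff:

```sh
# Sketch: single-device mode, pinned to SYCL device 0 (paths are placeholders).
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -ngl 33 \
  --split-mode none --main-gpu 0

# Sketch: multi-device mode, layer split across the top-tier devices (the default).
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Hello" -n 32 -ngl 33 \
  --split-mode layer
```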
@@ -486,10 +486,10 @@ found 6 SYCL devices:

 ```

-|Attribute|Note|
-|-|-|
-|compute capability 1.3|Level-zero running time, recommended |
-|compute capability 3.0|OpenCL running time, slower than level-zero in most cases|
+| Attribute | Note |
+|------------------------|-----------------------------------------------------------|
+| compute capability 1.3 | Level-zero running time, recommended |
+| compute capability 3.0 | OpenCL running time, slower than level-zero in most cases |


 4. Launch inference
@@ -499,10 +499,10 @@ There are two device selection modes:
 - Single device: Use one device assigned by user.
 - Multiple devices: Automatically choose the devices with the same biggest Max compute units.

-|Device selection|Parameter|
-|-|-|
-|Single device|--split-mode none --main-gpu DEVICE_ID |
-|Multiple devices|--split-mode layer (default)|
+| Device selection | Parameter |
+|------------------|----------------------------------------|
+| Single device | --split-mode none --main-gpu DEVICE_ID |
+| Multiple devices | --split-mode layer (default) |

 Examples:

@@ -540,20 +540,20 @@ use 1 SYCL GPUs: [0] with Max compute units:512

 #### Build

-|Name|Value|Function|
-|-|-|-|
-|LLAMA_SYCL|ON (mandatory)|Enable build with SYCL code path.|
-|LLAMA_SYCL_TARGET | INTEL *(default)* \| NVIDIA|Set the SYCL target device type.|
-|LLAMA_SYCL_F16|OFF *(default)* \|ON *(optional)*|Enable FP16 build with SYCL code path.|
-|CMAKE_C_COMPILER|icx|Set *icx* compiler for SYCL code path.|
-|CMAKE_CXX_COMPILER|icpx *(Linux)*, icx *(Windows)*|Set `icpx/icx` compiler for SYCL code path.|
+| Name | Value | Function |
+|--------------------|-----------------------------------|---------------------------------------------|
+| LLAMA_SYCL | ON (mandatory) | Enable build with SYCL code path. |
+| LLAMA_SYCL_TARGET | INTEL *(default)* \| NVIDIA | Set the SYCL target device type. |
+| LLAMA_SYCL_F16 | OFF *(default)* \|ON *(optional)* | Enable FP16 build with SYCL code path. |
+| CMAKE_C_COMPILER | icx | Set *icx* compiler for SYCL code path. |
+| CMAKE_CXX_COMPILER | icpx *(Linux)*, icx *(Windows)* | Set `icpx/icx` compiler for SYCL code path. |

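Taken together, a hedged sketch of a Linux SYCL build using the options in this table; the oneAPI `setvars.sh` path assumes a default install and is not part of the diff:

```sh
# Sketch: configure and build the SYCL backend for an Intel target on Linux.
source /opt/intel/oneapi/setvars.sh
cmake -B build -DLLAMA_SYCL=ON -DLLAMA_SYCL_TARGET=INTEL -DLLAMA_SYCL_F16=ON \
      -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
cmake --build build --config Release
```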
 #### Runtime

-|Name|Value|Function|
-|-|-|-|
-|GGML_SYCL_DEBUG|0 (default) or 1|Enable log function by macro: GGML_SYCL_DEBUG|
-|ZES_ENABLE_SYSMAN| 0 (default) or 1|Support to get free memory of GPU by sycl::aspect::ext_intel_free_memory.<br>Recommended to use when --split-mode = layer|
+| Name | Value | Function |
+|-------------------|------------------|---------------------------------------------------------------------------------------------------------------------------|
+| GGML_SYCL_DEBUG | 0 (default) or 1 | Enable log function by macro: GGML_SYCL_DEBUG |
+| ZES_ENABLE_SYSMAN | 0 (default) or 1 | Support to get free memory of GPU by sycl::aspect::ext_intel_free_memory.<br>Recommended to use when --split-mode = layer |

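Both runtime switches are ordinary environment variables set on the launch command; a hedged usage sketch (binary and model path are placeholders):

```sh
# Sketch: enable SYSMAN free-memory queries, as recommended for layer split mode.
ZES_ENABLE_SYSMAN=1 ./build/bin/main -m models/model.gguf -ngl 33 --split-mode layer

# Sketch: turn on the GGML_SYCL_DEBUG logging macro while troubleshooting.
GGML_SYCL_DEBUG=1 ./build/bin/main -m models/model.gguf -ngl 33
```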
 ## Known Issues

@@ -591,6 +591,6 @@ use 1 SYCL GPUs: [0] with Max compute units:512
 ### **GitHub contribution**:
 Please add the **[SYCL]** prefix/tag in issues/PRs titles to help the SYCL-team check/address them without delay.

-## Todo
+## TODO

 - Support row layer split for multiple card runs.

README.md (38 changed lines)
@@ -485,14 +485,14 @@ Building the program with BLAS support may lead to some performance improvements

 The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars) can be used to specify which GPU(s) will be used. The following compilation options are also available to tweak performance:

 | Option | Legal values | Default | Description |
-|--------------------------------|------------------------|---------|-------------|
+|--------------------------------|------------------------|---------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | LLAMA_CUDA_FORCE_DMMV | Boolean | false | Force the use of dequantization + matrix vector multiplication kernels instead of using kernels that do matrix vector multiplication on quantized data. By default the decision is made based on compute capability (MMVQ for 6.1/Pascal/GTX 1000 or higher). Does not affect k-quants. |
 | LLAMA_CUDA_DMMV_X | Positive integer >= 32 | 32 | Number of values in x direction processed by the CUDA dequantization + matrix vector multiplication kernel per iteration. Increasing this value can improve performance on fast GPUs. Power of 2 heavily recommended. Does not affect k-quants. |
 | LLAMA_CUDA_MMV_Y | Positive integer | 1 | Block size in y direction for the CUDA mul mat vec kernels. Increasing this value can improve performance on fast GPUs. Power of 2 recommended. |
 | LLAMA_CUDA_F16 | Boolean | false | If enabled, use half-precision floating point arithmetic for the CUDA dequantization + mul mat vec kernels and for the q4_1 and q5_1 matrix matrix multiplication kernels. Can improve performance on relatively recent GPUs. |
 | LLAMA_CUDA_KQUANTS_ITER | 1 or 2 | 2 | Number of values processed per iteration and per CUDA thread for Q2_K and Q6_K quantization formats. Setting this value to 1 can improve performance for slow GPUs. |
 | LLAMA_CUDA_PEER_MAX_BATCH_SIZE | Positive integer | 128 | Maximum batch size for which to enable peer access between multiple GPUs. Peer access requires either Linux or NVLink. When using NVLink enabling peer access for larger batch sizes is potentially beneficial. |

 - #### hipBLAS

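For orientation: the options in the table above are set at compile time, while `CUDA_VISIBLE_DEVICES` is set at run time. A hedged sketch using the Makefile-based cuBLAS build this README describes elsewhere (model path and flag values are illustrative):

```sh
# Sketch: cuBLAS build overriding two of the tuning options above.
make LLAMA_CUBLAS=1 LLAMA_CUDA_DMMV_X=64 LLAMA_CUDA_MMV_Y=2

# Sketch: restrict inference to the first GPU.
CUDA_VISIBLE_DEVICES=0 ./main -m models/model.gguf -ngl 99 -p "Hello"
```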
@@ -534,11 +534,11 @@ Building the program with BLAS support may lead to some performance improvements
 If your GPU is not officially supported you can use the environment variable [`HSA_OVERRIDE_GFX_VERSION`] set to a similar GPU, for example 10.3.0 on RDNA2 (e.g. gfx1030, gfx1031, or gfx1035) or 11.0.0 on RDNA3.
 The following compilation options are also available to tweak performance (yes, they refer to CUDA, not HIP, because it uses the same code as the cuBLAS version above):

 | Option | Legal values | Default | Description |
-|-------------------------|------------------------|---------|-------------|
+|-------------------------|------------------------|---------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | LLAMA_CUDA_DMMV_X | Positive integer >= 32 | 32 | Number of values in x direction processed by the HIP dequantization + matrix vector multiplication kernel per iteration. Increasing this value can improve performance on fast GPUs. Power of 2 heavily recommended. Does not affect k-quants. |
 | LLAMA_CUDA_MMV_Y | Positive integer | 1 | Block size in y direction for the HIP mul mat vec kernels. Increasing this value can improve performance on fast GPUs. Power of 2 recommended. Does not affect k-quants. |
 | LLAMA_CUDA_KQUANTS_ITER | 1 or 2 | 2 | Number of values processed per iteration and per HIP thread for Q2_K and Q6_K quantization formats. Setting this value to 1 can improve performance for slow GPUs. |

 - #### CLBlast

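A hedged sketch of the `HSA_OVERRIDE_GFX_VERSION` workaround mentioned above; the gfx target and model path are illustrative:

```sh
# Sketch: run on an officially unsupported RDNA2 card (e.g. gfx1031)
# by reporting a supported gfx version to the HIP runtime.
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./main -m models/model.gguf -ngl 99 -p "Hello"
```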
@@ -746,11 +746,11 @@ From the unzipped folder, open a terminal/cmd window here and place a pre-conver
 As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same.

 | Model | Original size | Quantized size (Q4_0) |
-|------:|--------------:|-----------------------:|
+|------:|--------------:|----------------------:|
 | 7B | 13 GB | 3.9 GB |
 | 13B | 24 GB | 7.8 GB |
 | 30B | 60 GB | 19.5 GB |
 | 65B | 120 GB | 38.5 GB |

 ### Quantization

@@ -758,7 +758,7 @@ Several quantization methods are supported. They differ in the resulting model d

 *(outdated)*

 | Model | Measure | F16 | Q4_0 | Q4_1 | Q5_0 | Q5_1 | Q8_0 |
 |------:|--------------|-------:|-------:|-------:|-------:|-------:|-------:|
 | 7B | perplexity | 5.9066 | 6.1565 | 6.0912 | 5.9862 | 5.9481 | 5.9070 |
 | 7B | file size | 13.0G | 3.5G | 3.9G | 4.3G | 4.7G | 6.7G |
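For context on how a Q4_0 file like the 3.9 GB one above is produced, a hedged sketch with the repository's `quantize` tool; the file paths are placeholders:

```sh
# Sketch: convert an F16 GGUF model to Q4_0.
./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0
```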
@@ -49,11 +49,11 @@ If you intend to run multiple models in parallel with shared memory, it is your

 1. Tenant Isolation: Models should run separately with strong isolation methods to prevent unwanted data access. Separating networks is crucial for isolation, as it prevents unauthorized access to data or models and malicious users from sending graphs to execute under another tenant's identity.

-1. Resource Allocation: A denial of service caused by one model can impact the overall system health. Implement safeguards like rate limits, access controls, and health monitoring.
+2. Resource Allocation: A denial of service caused by one model can impact the overall system health. Implement safeguards like rate limits, access controls, and health monitoring.

-1. Model Sharing: In a multitenant model sharing design, tenants and users must understand the security risks of running code provided by others. Since there are no reliable methods to detect malicious models, sandboxing the model execution is the recommended approach to mitigate the risk.
+3. Model Sharing: In a multitenant model sharing design, tenants and users must understand the security risks of running code provided by others. Since there are no reliable methods to detect malicious models, sandboxing the model execution is the recommended approach to mitigate the risk.

-1. Hardware Attacks: GPUs or TPUs can also be attacked. [Researches](https://scholar.google.com/scholar?q=gpu+side+channel) has shown that side channel attacks on GPUs are possible, which can make data leak from other models or processes running on the same system at the same time.
+4. Hardware Attacks: GPUs or TPUs can also be attacked. [Researches](https://scholar.google.com/scholar?q=gpu+side+channel) has shown that side channel attacks on GPUs are possible, which can make data leak from other models or processes running on the same system at the same time.

 ## Reporting a vulnerability

@@ -22,7 +22,7 @@ After building, run: `./llava-cli` to see the usage. For example:

 ## Model conversion

-- Clone `mobileVLM-1.7B` and `clip-vit-large-patch14-336` locally:
+1. Clone `mobileVLM-1.7B` and `clip-vit-large-patch14-336` locally:

 ```sh
 git clone https://huggingface.co/mtgv/MobileVLM-1.7B
@@ -24,7 +24,7 @@ After building, run: `./llava-cli` to see the usage. For example:

 ## LLaVA 1.5

-- Clone a LLaVA and a CLIP model ([available options](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)). For example:
+1. Clone a LLaVA and a CLIP model ([available options](https://github.com/haotian-liu/LLaVA/blob/main/docs/MODEL_ZOO.md)). For example:

 ```sh
 git clone https://huggingface.co/liuhaotian/llava-v1.5-7b
@@ -310,7 +310,7 @@ These options help improve the performance and memory usage of the LLaMA models.

 ### Quantization

-For information about 4-bit quantization, which can significantly improve performance and reduce memory usage, please refer to llama.cpp's primary [README](../../README.md#prepare-data--run).
+For information about 4-bit quantization, which can significantly improve performance and reduce memory usage, please refer to llama.cpp's primary [README](../../README.md#prepare-and-quantize).

 ## Additional Options

@@ -3,19 +3,18 @@
 TODO

 ## Llama 2 70B Scorechart
-Quantization | Model size (GiB) | Perplexity | Delta to fp16
--- | -- | -- | --
-Q4_0 | 36.20 | 3.5550 | 3.61%
-Q4_1 | 40.20 | 3.5125 | 2.37%
-Q5_0 | 44.20 | 3.4744 | 1.26%
-Q2_K | 27.27 | 3.7339 | 8.82%
-Q3_K_S | 27.86 | 3.7019 | 7.89%
-Q3_K_M | 30.83 | 3.5932 | 4.72%
-Q3_K_L | 33.67 | 3.5617 | 3.80%
-Q4_K_S | 36.39 | 3.4852 | 1.57%
-Q4_K_M | 38.54 | 3.4725 | 1.20%
-Q5_K_S | 44.20 | 3.4483 | 0.50%
-Q5_K_M | 45.41 | 3.4451 | 0.40%
-Q6_K | 52.70 | 3.4367 | 0.16%
-fp16 | 128.5 | 3.4313 | -
+| Quantization | Model size (GiB) | Perplexity | Delta to fp16 |
+|--------------|------------------|------------|---------------|
+| Q4_0 | 36.20 | 3.5550 | 3.61% |
+| Q4_1 | 40.20 | 3.5125 | 2.37% |
+| Q5_0 | 44.20 | 3.4744 | 1.26% |
+| Q2_K | 27.27 | 3.7339 | 8.82% |
+| Q3_K_S | 27.86 | 3.7019 | 7.89% |
+| Q3_K_M | 30.83 | 3.5932 | 4.72% |
+| Q3_K_L | 33.67 | 3.5617 | 3.80% |
+| Q4_K_S | 36.39 | 3.4852 | 1.57% |
+| Q4_K_M | 38.54 | 3.4725 | 1.20% |
+| Q5_K_S | 44.20 | 3.4483 | 0.50% |
+| Q5_K_M | 45.41 | 3.4451 | 0.40% |
+| Q6_K | 52.70 | 3.4367 | 0.16% |
+| fp16 | 128.5 | 3.4313 | - |

@@ -4,17 +4,17 @@ TODO

 ## Llama 2 7B

-Quantization | Bits per Weight (BPW)
--- | --
-Q2_K | 3.35
-Q3_K_S | 3.50
-Q3_K_M | 3.91
-Q3_K_L | 4.27
-Q4_K_S | 4.58
-Q4_K_M | 4.84
-Q5_K_S | 5.52
-Q5_K_M | 5.68
-Q6_K | 6.56
+| Quantization | Bits per Weight (BPW) |
+|--------------|-----------------------|
+| Q2_K | 3.35 |
+| Q3_K_S | 3.50 |
+| Q3_K_M | 3.91 |
+| Q3_K_L | 4.27 |
+| Q4_K_S | 4.58 |
+| Q4_K_M | 4.84 |
+| Q5_K_S | 5.52 |
+| Q5_K_M | 5.68 |
+| Q6_K | 6.56 |

 ## Llama 2 13B
 Quantization | Bits per Weight (BPW)
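As a hedged sanity check on the BPW figures above: bits per weight times parameter count, divided by 8, should approximate the model file size. The ~6.74e9 parameter count assumed for Llama 2 7B is not part of this diff:

```sh
# Sketch: cross-check Q4_K_M bits-per-weight against an expected file size.
awk 'BEGIN { params = 6.74e9;                 # assumed Llama 2 7B parameter count
             bpw = 4.84;                      # Q4_K_M, from the table above
             printf "~%.2f GiB\n", bpw * params / 8 / 2^30 }'   # prints ~3.80 GiB
```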