mirror of
https://github.com/oobabooga/text-generation-webui.git
synced 2024-11-22 08:07:56 +01:00
Update RWKV-model.md
This commit is contained in:
parent
cd3618d7fb
commit
b0845ae4e8
@@ -46,7 +46,7 @@ No additional steps are required. Just launch it as you would with any other mod
 python server.py --listen --no-stream --model RWKV-4-Pile-169M-20220807-8023.pth
 ```
 
-### Setting a custom strategy
+#### Setting a custom strategy
 
 It is possible to have very fine control over the offloading and precision for the model with the `--rwkv-strategy` flag. Possible values include:
 
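As an illustration of the `--rwkv-strategy` flag mentioned in the context above, a launch might look like the following. The strategy strings follow the `device dtype [*layers] [-> device dtype]` format described in the `rwkv` package's README; the specific values below are illustrative examples, not taken from this diff:

```shell
# Run all layers on the GPU in fp16 (assumes a CUDA-capable GPU).
python server.py --listen --no-stream \
    --model RWKV-4-Pile-169M-20220807-8023.pth \
    --rwkv-strategy "cuda fp16"

# Mixed offloading: the first 10 layers on the GPU quantized to int8,
# the remaining layers on the CPU in fp32.
python server.py --listen --no-stream \
    --model RWKV-4-Pile-169M-20220807-8023.pth \
    --rwkv-strategy "cuda fp16i8 *10 -> cpu fp32"
```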
@@ -59,6 +59,6 @@ It is possible to have very fine control over the offloading and precision for t
 
 See the README for the PyPI package for more details: https://pypi.org/project/rwkv/
 
-### Compiling the CUDA kernel
+#### Compiling the CUDA kernel
 
 You can compile the CUDA kernel for the model with `--rwkv-cuda-on`. This should improve the performance a lot but I haven't been able to get it to work yet.
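For the `--rwkv-cuda-on` flag discussed in the hunk above, a sketch of a full launch command follows. This assumes the kernel is built at model-load time via the `rwkv` package, which requires a working CUDA toolkit (`nvcc`) and a C++ compiler on the host; the model filename matches the one used elsewhere in this document:

```shell
# Launch with the custom CUDA kernel enabled. Compilation happens when the
# model is loaded, so the first startup is slower; if nvcc or a compatible
# C++ compiler is missing, loading will fail with a build error.
python server.py --listen --no-stream \
    --model RWKV-4-Pile-169M-20220807-8023.pth \
    --rwkv-cuda-on
```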