Minor doc change

oobabooga 2023-08-10 08:38:10 -07:00
parent d6765bebc4
commit 16e2b117b4


@@ -64,7 +64,7 @@ python server.py --autogptq --gpu-memory 3000MiB 6000MiB --model model_name
### Using LoRAs with AutoGPTQ
-Not supported yet.
+Works fine for a single LoRA.
## GPTQ-for-LLaMa
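
For context on the updated line, here is a minimal sketch of launching the web UI with an AutoGPTQ-quantized model and a single LoRA. The `--autogptq` and `--model` flags come from the surrounding documentation; `--lora` is assumed to be the web UI's flag for attaching a LoRA, and `model_name`/`lora_name` are placeholders for your own directories:

```
# Minimal sketch (placeholders): load a GPTQ model via AutoGPTQ with one LoRA attached.
# --lora is assumed here to be the web UI's LoRA flag; adjust the names to your setup.
python server.py --autogptq --model model_name --lora lora_name
```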