From 16e2b117b415074afd2917a72496b776debfcd58 Mon Sep 17 00:00:00 2001
From: oobabooga <112222186+oobabooga@users.noreply.github.com>
Date: Thu, 10 Aug 2023 08:38:10 -0700
Subject: [PATCH] Minor doc change

---
 docs/GPTQ-models-(4-bit-mode).md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/GPTQ-models-(4-bit-mode).md b/docs/GPTQ-models-(4-bit-mode).md
index e8d983eb..b42f4224 100644
--- a/docs/GPTQ-models-(4-bit-mode).md
+++ b/docs/GPTQ-models-(4-bit-mode).md
@@ -64,7 +64,7 @@ python server.py --autogptq --gpu-memory 3000MiB 6000MiB --model model_name
 
 ### Using LoRAs with AutoGPTQ
 
-Not supported yet.
+Works fine for a single LoRA.
 
 ## GPTQ-for-LLaMa
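
A usage sketch of what the updated doc line describes (not part of the patch): loading a single LoRA on top of an AutoGPTQ-quantized model, in the same command style as the hunk header above. The --lora flag is taken from the project's existing CLI, while model_name and lora_name are placeholders, not values confirmed by this diff.

    # hypothetical invocation: AutoGPTQ model plus one LoRA (names are placeholders)
    python server.py --autogptq --model model_name --lora lora_name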