Llamacpp_HF: Fix CFG cache init (#4219)

The llama-cpp-python documentation says that model.context_params should be
passed when a new context is created. The current code uses model.params,
which doesn't exist.

Signed-off-by: kingbri <bdashore3@proton.me>
Author: Brian Dashore, 2023-10-07 18:38:29 -04:00 (committed by GitHub)
Commit: 7743b5e9de, parent: 2a7cb346dd


@@ -48,7 +48,7 @@ class LlamacppHF(PreTrainedModel):
                 'n_tokens': self.model.n_tokens,
                 'input_ids': self.model.input_ids.copy(),
                 'scores': self.model.scores.copy(),
-                'ctx': llama_cpp_lib().llama_new_context_with_model(model.model, model.params)
+                'ctx': llama_cpp_lib().llama_new_context_with_model(model.model, model.context_params)
             }

     def _validate_model_class(self):
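
For context, here is a minimal sketch (not part of the commit) of how a second
llama.cpp context can be created from an already-loaded llama_cpp.Llama instance,
which is what the CFG cache needs. It assumes llama-cpp-python 0.2.x, where the
Llama object stores the loaded model pointer in `.model` and the context
parameter struct in `.context_params`; the model path is a placeholder.

import llama_cpp

# Load a model through the high-level API; this also creates the primary context.
llama = llama_cpp.Llama(model_path="model.gguf", n_ctx=2048, logits_all=True)

# The Llama object exposes `.context_params` (a llama_context_params struct),
# not `.params`, which is why the original call failed.
extra_ctx = llama_cpp.llama_new_context_with_model(llama.model, llama.context_params)

# The extra context must be freed explicitly when no longer needed.
llama_cpp.llama_free(extra_ctx)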