
Commit 07f0f3a

Merge pull request #521 from bretello/main
raise exception when `llama_load_model_from_file` fails
2 parents: d8a3ddb + 8be7d67


llama_cpp/llama_cpp.py

Lines changed: 4 additions & 1 deletion
@@ -423,7 +423,10 @@ def llama_backend_free():
 def llama_load_model_from_file(
     path_model: bytes, params: llama_context_params
 ) -> llama_model_p:
-    return _lib.llama_load_model_from_file(path_model, params)
+    result = _lib.llama_load_model_from_file(path_model, params)
+    if result is None:
+        raise Exception(f"Failed to load model from {path_model}")
+    return result
 
 
 _lib.llama_load_model_from_file.argtypes = [c_char_p, llama_context_params]
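
For context, the sketch below shows how the change surfaces to callers of the low-level binding: a load failure that previously came back as a NULL pointer now raises immediately. It is illustrative only and not part of the commit; llama_context_default_params() is assumed to be the companion helper exposed by the same ctypes bindings module, and the model path is hypothetical.

# Illustrative usage only, not part of the commit. Assumes
# llama_context_default_params() from the same bindings module;
# the model path below is hypothetical.
from llama_cpp import llama_cpp

params = llama_cpp.llama_context_default_params()
try:
    model = llama_cpp.llama_load_model_from_file(
        b"./models/does-not-exist.bin", params
    )
except Exception as err:
    # Before this commit the call returned a NULL pointer and the failure
    # only surfaced later; now it fails fast with a descriptive message.
    print(err)  # Failed to load model from b'./models/does-not-exist.bin'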
