Name and Version
>llama-cli --version
version: 4948 (00d53800)
built with MSVC 19.43.34808.0 for x64
>pip list
Package Version
---------------------------- ---------------
absl-py 0.11.0
accelerate 0.27.2
addict 2.4.0
aggdraw 1.3.18.post0
aiohttp 3.9.5
aiosignal 1.3.1
annotated-types 0.7.0
ansicon 1.89.0
antlr4-python3-runtime 4.9.3
anyio 4.3.0
async-timeout 4.0.3
attrs 23.2.0
beautifulsoup4 4.13.3
blend_modes 2.1.0
blessed 1.20.0
blind-watermark 0.4.4
blobfile 3.0.0
Brotli 1.1.0
bs4 0.0.2
cachetools 5.3.3
certifi 2024.2.2
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
colour-science 0.4.4
comfy_api_simplified 1.1.2
comfy-cli 1.3.5
contourpy 1.2.1
cssutils 2.11.1
cycler 0.12.1
datasets 3.4.1
diffusers 0.29.0
dill 0.3.8
distlib 0.3.8
distro 1.9.0
dnspython 2.6.1
docopt 0.6.2
EbookLib 0.17.1
editor 1.6.6
einops 0.7.0
email_validator 2.1.2
epub2txt 0.1.6
exceptiongroup 1.2.0
fairscale 0.4.13
fastapi 0.111.0
fastapi-cli 0.0.4
filelock 3.13.1
fire 0.7.0
flatbuffers 24.3.25
fontpls 0.1.0
fonttools 4.53.0
frozenlist 1.4.1
fsspec 2024.3.1
future 1.0.0
gdown 5.2.0
gguf 0.6.0
gitdb 4.0.12
GitPython 3.1.44
google-ai-generativelanguage 0.6.4
google-api-core 2.19.0
google-api-python-client 2.133.0
google-auth 2.30.0
google-auth-httplib2 0.2.0
google-generativeai 0.6.0
googleapis-common-protos 1.63.1
grpcio 1.64.1
grpcio-status 1.62.2
h11 0.14.0
httpcore 1.0.5
httplib2 0.22.0
httptools 0.6.1
httpx 0.27.0
huggingface-hub 0.29.3
idna 3.6
image-reward 1.5
imageio 2.34.1
importlib_metadata 7.1.0
inquirer 3.4.0
inquirerpy 0.3.4
intel-openmp 2021.4.0
jax 0.4.29
jaxlib 0.4.29
Jinja2 3.1.2
jinxed 1.3.0
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
kiwisolver 1.4.5
kornia 0.7.2
kornia_rs 0.1.3
lazy_loader 0.4
llama_models 0.1.3
llama_stack 0.1.3
llama_stack_client 0.1.3
llvmlite 0.43.0
lmdb 1.4.1
loguru 0.7.2
logzero 1.7.0
lpips 0.1.4
lxml 5.1.1
Markdown 3.5.2
markdown-it-py 3.0.0
MarkupSafe 2.1.3
matplotlib 3.9.0
mdurl 0.1.2
mediapipe 0.10.14
mergoo 0.0.10
mixpanel 4.10.1
mkl 2021.4.0
ml-dtypes 0.4.0
more-itertools 10.6.0
mpmath 1.3.0
multidict 6.0.5
multiprocess 0.70.16
networkx 3.2.1
numba 0.60.0
numpy 1.24.4
ollama 0.3.0
omegaconf 2.3.0
opencv-contrib-python 4.10.0.84
opencv-python 4.10.0.82
opt-einsum 3.3.0
orca-cli 0.1.0
orjson 3.10.5
packaging 24.0
pandas 2.2.2
pathspec 0.12.1
pdfkit 1.0.0
peft 0.14.0
pfzy 0.3.4
pillow 10.2.0
pip 24.0
pipenv 2023.12.1
platformdirs 4.2.0
prompt_toolkit 3.0.50
proto-plus 1.23.0
protobuf 4.25.3
psd-tools 1.9.33
psutil 5.9.8
py-cpuinfo 9.0.0
pyaml 25.1.0
pyarrow 16.1.0
pyarrow-hotfix 0.6
pyasn1 0.6.0
pyasn1_modules 0.4.0
pycparser 2.22
pycryptodomex 3.21.0
pydantic 2.7.4
pydantic_core 2.18.4
Pygments 2.18.0
PyMatting 1.1.12
pyparsing 3.1.2
pypng 0.20220715.0
PySocks 1.7.1
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.9
pytz 2024.1
PyWavelets 1.6.0
PyYAML 6.0.1
pyzbar 0.1.9
qrcode 7.4.2
questionary 2.1.0
readchar 4.2.1
referencing 0.36.2
regex 2024.4.28
requests 2.32.3
rich 13.9.4
rpds-py 0.23.0
rsa 4.9
ruff 0.9.4
runs 1.2.2
safetensors 0.4.3
scikit-image 0.23.2
scipy 1.13.1
seaborn 0.13.2
segment-anything 1.0
semver 3.0.4
sentencepiece 0.1.99
setuptools 69.1.1
shellingham 1.5.4
six 1.16.0
smmap 5.0.2
sniffio 1.3.1
sounddevice 0.4.7
soupsieve 2.5
starlette 0.37.2
sympy 1.12
tb-nightly 2.18.0a20240611
tbb 2021.12.0
tensorboard-data-server 0.7.2
termcolor 2.5.0
tifffile 2024.5.22
tiktoken 0.9.0
timm 0.6.13
tokenizers 0.21.0
tomli 2.0.1
tomlkit 0.13.2
torch 2.3.1
torchaudio 2.2.1+cu118
torchvision 0.18.1
tqdm 4.67.1
transformers 4.50.0.dev0
typer 0.15.2
typer-config 1.4.0
typing_extensions 4.8.0
tzdata 2024.1
ujson 5.10.0
ultralytics 8.2.35
ultralytics-thop 2.0.0
uritemplate 4.1.1
urllib3 2.2.1
uv 0.5.26
uvicorn 0.30.1
vastai 0.2.2
virtualenv 20.25.1
watchfiles 0.22.0
wcwidth 0.2.13
websocket-client 1.8.0
websockets 12.0
Werkzeug 3.0.3
wget 3.2
win32-setctime 1.1.0
xmod 1.8.1
xxhash 3.4.1
yapf 0.40.2
yarl 1.9.4
zipp 3.19.2
Operating systems
Windows
Which llama.cpp modules do you know to be affected?
Python/Bash scripts
Command line
python C:\tools\llama.cpp2\convert_lora_to_gguf.py .
Problem description & steps to reproduce
I can reproduce the issue as follows:
- I trained a PEFT adapter using Unsloth for the model
google/gemma3-4b-it
- I uploaded the files to https://huggingface.co/molbal/CRA-v1.2-Guided-4B/tree/main (Including adapter_config.json and adapter_model.safetensors)
- I installed the latest transformers via pip
- I pulled the latest llama.cpp version
- I executed the command
python C:\tools\llama.cpp2\convert_lora_to_gguf.py .
within the directory where the adapter files are placed
- I see the following output:
INFO:lora-to-gguf:Loading base model from Hugging Face: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
Traceback (most recent call last):
File "C:\tools\llama.cpp2\convert_lora_to_gguf.py", line 446, in <module>
model_instance = LoraModel(
File "C:\tools\llama.cpp2\convert_lora_to_gguf.py", line 359, in __init__
super().__init__(*args, **kwargs)
File "C:\tools\llama.cpp2\convert_hf_to_gguf.py", line 3388, in __init__
hparams = Model.load_hparams(kwargs["dir_model"])
KeyError: 'dir_model'
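The KeyError can be reduced to a minimal sketch (hypothetical, simplified class names standing in for the real llama.cpp code): the traceback shows the base `__init__` reading `kwargs["dir_model"]`, which raises exactly this KeyError whenever the caller supplies `dir_model` positionally (or not at all), since positional arguments never land in `kwargs`.

```python
# Simplified stand-ins for the classes in the traceback (not the actual
# llama.cpp code). The subclass blindly forwards *args/**kwargs, but the
# base class looks up "dir_model" in kwargs only.
class Model:
    def __init__(self, *args, **kwargs):
        # Raises KeyError: 'dir_model' unless passed as a keyword argument.
        self.dir_model = kwargs["dir_model"]

class LoraModel(Model):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

try:
    LoraModel(".")  # positional argument -> never appears in kwargs
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'dir_model'

LoraModel(dir_model=".")  # keyword argument -> succeeds
```

This suggests a call-site/signature mismatch between `convert_lora_to_gguf.py` and `convert_hf_to_gguf.py` rather than anything wrong with the adapter files themselves.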
Expected outcome:
I expected a gguf model to be written into the same directory.
First Bad Commit
I could never get Gemma3 GGUF conversion working. I tried version 4942 and version 4948 (the latest at the time of reporting this issue).
Relevant log output
INFO:lora-to-gguf:Loading base model from Hugging Face: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
Traceback (most recent call last):
File "C:\tools\llama.cpp2\convert_lora_to_gguf.py", line 446, in <module>
model_instance = LoraModel(
File "C:\tools\llama.cpp2\convert_lora_to_gguf.py", line 359, in __init__
super().__init__(*args, **kwargs)
File "C:\tools\llama.cpp2\convert_hf_to_gguf.py", line 3388, in __init__
hparams = Model.load_hparams(kwargs["dir_model"])
KeyError: 'dir_model'