convert : improve model arch handling #13122

Merged: merged 7 commits into from, Apr 30, 2025

Changes from 1 commit
61 changes: 40 additions & 21 deletions convert_hf_to_gguf.py
@@ -66,8 +66,6 @@ class ModelBase:
part_names: list[str]
is_safetensors: bool
hparams: dict[str, Any]
block_count: int
tensor_map: gguf.TensorNameMap
tensor_names: set[str] | None
gguf_writer: gguf.GGUFWriter
model_name: str | None
@@ -78,6 +76,10 @@ class ModelBase:
# subclasses should define this!
model_arch: gguf.MODEL_ARCH

# subclasses should initialize this!
block_count: int
tensor_map: gguf.TensorNameMap

def __init__(self, dir_model: Path, ftype: gguf.LlamaFileType, fname_out: Path, is_big_endian: bool = False,
use_temp_file: bool = False, eager: bool = False,
metadata_override: Path | None = None, model_name: str | None = None,
@@ -113,8 +115,6 @@ def get_remote_tensors() -> Iterator[tuple[str, Tensor]]:
if not self.is_safetensors:
self.part_names = ModelBase.get_model_part_names(self.dir_model, "pytorch_model", ".bin")
self.hparams = ModelBase.load_hparams(self.dir_model) if hparams is None else hparams
self.block_count = self.find_hparam(["n_layers", "num_hidden_layers", "n_layer", "num_layers"])
self.tensor_map = gguf.get_tensor_name_map(self.model_arch, self.block_count)
self.tensor_names = None
self.metadata_override = metadata_override
self.model_name = model_name
@@ -418,14 +418,7 @@ def get_model_part_names(dir_model: Path, prefix: str, suffix: str) -> list[str]
@staticmethod
def load_hparams(dir_model: Path):
with open(dir_model / "config.json", "r", encoding="utf-8") as f:
hparams = json.load(f)
architectures = hparams.get("architectures")
if "text_config" in hparams:
hparams = {**hparams, **hparams["text_config"]}
if architectures is not None:
# preserve "architectures" from root level config
hparams["architectures"] = architectures
return hparams
return json.load(f)

@classmethod
def register(cls, *names: str) -> Callable[[AnyModel], AnyModel]:
@@ -454,6 +447,16 @@ def from_model_architecture(cls, arch: str, model_type = ModelType.TEXT) -> type


class TextModel(ModelBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

if "text_config" in self.hparams:
# move the text_config to the root level
self.hparams = {**self.hparams, **self.hparams["text_config"]}

self.block_count = self.find_hparam(["n_layers", "num_hidden_layers", "n_layer", "num_layers"])
self.tensor_map = gguf.get_tensor_name_map(self.model_arch, self.block_count)

@classmethod
def __init_subclass__(cls):
# can't use an abstract property, because overriding it without type errors
@@ -1078,8 +1081,12 @@ def __init__(self, *args, **kwargs):
raise TypeError("VisionModel must be subclassed with model_arch = gguf.MODEL_ARCH.CLIP_VISION")

# small hack to correct the number of layers
self.tensor_map = gguf.get_tensor_name_map(gguf.MODEL_ARCH.CLIP_VISION, 128)
self.n_embd_text = self.find_hparam(["hidden_size", "n_embd"])
self.block_count = 512 # vision models are small, this "ought to be enough for anybody"
Collaborator:
"small" with up to 512 layers :)
What is the max you've seen in the wild?

I think this number could be taken from the config, since vision_config seems to contain num_hidden_layers at least for Llama-3.2-11B-Vision. (there's also num_global_layers, I guess the max of the layer counts should be used)

What model doesn't specify how many layers the vision part has?
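The suggestion above could be sketched as a small helper. This is only an illustration of the idea: the key names `num_hidden_layers` and `num_global_layers` are taken from the comment, and the 512 fallback mirrors the patch; the real converter may read the config differently.

```python
def vision_block_count(vision_config: dict, fallback: int = 512) -> int:
    # Some configs (e.g. Llama-3.2-11B-Vision) expose both a local and a
    # global layer count; take the max so the tensor map covers both.
    n = max(vision_config.get("num_hidden_layers", 0),
            vision_config.get("num_global_layers", 0))
    # Fall back to a generous upper bound when the config omits both keys.
    return n if n > 0 else fallback
```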

@ngxson (Collaborator Author), Apr 26, 2025:

I've never seen any model with more than 64 layers, but I'm putting this number here for future-proofing.

We could get this number from the config, but the problem is that many config.json files nowadays are missing it, since the transformers library omits a value when it equals the default. For example, this, this and this, where you cannot find num_hidden_layers in vision_config.

The frustrating thing is that this has started to happen in some text_config too.

One way to fix this could be to use AutoConfig, but this won't work for models not backed by the transformers library. While I'm pretty sure this kind of model is rare nowadays, I can't know for sure whether people are still using them. WDYT?
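A minimal illustration of the omitted-defaults problem described above. The defaults table here is hypothetical (not the real transformers values); the point is only that explicit config values must win over re-supplied defaults.

```python
# Hypothetical per-architecture defaults table (illustrative values only):
# transformers drops a key from config.json when it equals the library
# default, so conversion code has to re-supply it somehow.
VISION_DEFAULTS = {"num_hidden_layers": 24, "num_attention_heads": 16}

def fill_defaults(config: dict, defaults: dict) -> dict:
    # Explicit values in config always take precedence over the defaults.
    return {**defaults, **config}
```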

@compilade (Collaborator), Apr 26, 2025:

> One way to fix this could be to use AutoConfig,

That could be the way to go, since convert_hf_to_gguf.py already has hf in the name and mostly already expects HF-like models which are supported by transformers. I don't know how much it would change how hparams is used in set_gguf_parameters, though.

> but this won't work for models not backed by the transformers library. While I'm pretty sure this kind of model is rare nowadays, I can't know for sure whether people are still using them. WDYT?

I guess if this is a problem, (e.g. for very new architectures), it could always be possible to temporarily define a PreTrainedConfig and use AutoConfig.register.

But! Something which seems important is that AutoConfig uses the model_type field instead of the architectures field, which may change the assumptions in a bunch of places. I'm not sure if it would really be compatible with the idea of using sub-architectures like in this PR.

I guess it's probably fine to keep a high block count, but it makes the tensor map dict bigger than it needs to be.

Collaborator Author:

> I don't know how much it would change how hparams is used in set_gguf_parameters, though.

It shouldn't change much: AutoConfig has a to_dict() function which basically returns the same config.json, but with all hparams pre-filled.

The simple plan is to change load_hparams from open(...) to AutoConfig.from_pretrained(dir_model).to_dict().

@ngxson (Collaborator Author), Apr 26, 2025:

Here is what I mean: 4840d2f

It does work better now; num_hidden_layers is no longer missing. However, for smolvlm, some values are still missing entirely from the to_dict() output, like num_attention_heads or hidden_size. Though I think it's not very important for now. Alternatively, we can get them from the AutoConfig object before to_dict().

self.tensor_map = gguf.get_tensor_name_map(gguf.MODEL_ARCH.CLIP_VISION, self.block_count)

# get n_embd of the text model
text_config = {**self.hparams, **self.hparams["text_config"]}
self.n_embd_text = text_config.get("hidden_size", text_config.get("n_embd", 0))
assert self.n_embd_text > 0, "n_embd not found in hparams"

if "vision_config" not in self.hparams:
@@ -1726,20 +1733,20 @@ def prepare_tensors(self):
"LlamaForCausalLM",
"MistralForCausalLM",
"MixtralForCausalLM",
"Idefics3ForConditionalGeneration",
"SmolVLMForConditionalGeneration",
"VLlama3ForCausalLM",
"LlavaForConditionalGeneration")
class LlamaModel(TextModel):
model_arch = gguf.MODEL_ARCH.LLAMA
undo_permute = True

def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
arch = get_model_architecture(self.dir_model, ModelType.TEXT, self.hparams)
# fix for SmolVLM2, missing `num_attention_heads` in config.json
if self.hparams["architectures"][0] == "SmolVLMForConditionalGeneration":
if arch == "VLlama3ForCausalLM":
self.hparams["num_attention_heads"] = self.hparams.get("num_attention_heads", 32)
# fix for Pixtral, missing `num_attention_heads` in config.json
if self.hparams["architectures"][0] == "LlavaForConditionalGeneration" \
if arch == "LlavaForConditionalGeneration" \
and self.hparams.get("model_type") == "mistral":
self.hparams["num_attention_heads"] = self.hparams.get("num_attention_heads", 32)

@@ -5805,6 +5812,19 @@ def split_str_to_n_bytes(split_str: str) -> int:
return n


def get_model_architecture(dir_model: Path, model_type: ModelType, hparams: Any = None) -> str:
hparams = ModelBase.load_hparams(dir_model) if hparams is None else hparams
text_config = hparams.get("text_config", {})
vision_config = hparams.get("vision_config", {})
arch = hparams["architectures"][0]
# if "architectures" is found in the sub-config, use that instead
if model_type == ModelType.TEXT and text_config.get("architectures") is not None:
arch = text_config["architectures"][0]
elif model_type == ModelType.VISION and vision_config.get("architectures") is not None:
arch = vision_config["architectures"][0]
return arch
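The resolution order implemented by get_model_architecture can be exercised in isolation with a dict-only sketch (file loading omitted; the `{model_type}_config` key pattern is an assumption mirroring the text_config / vision_config names above):

```python
def resolve_arch(hparams: dict, model_type: str) -> str:
    # The root-level "architectures" entry is the default answer.
    arch = hparams["architectures"][0]
    # A sub-config ("text_config" / "vision_config") may override it.
    sub = hparams.get(f"{model_type}_config", {})
    if sub.get("architectures") is not None:
        arch = sub["architectures"][0]
    return arch
```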
Comment on lines +5856 to +5866

@ngxson (Collaborator Author), May 1, 2025:

@compilade In this case, maybe we can patch get_model_architecture to introduce an exception for Mamba? (so that you no longer need to manually add the "architectures" to config.json)

Maybe something like

if hparams.get("ssm_cfg", {}).get("layer") == "Mamba":
    return "MambaForCausalLM"

And since "architectures" is not present in config.json, AutoConfig will throw an error, which will trigger the old method.

@compilade (Collaborator), May 2, 2025:

@ngxson I've tried this, and for some reason it doesn't work; it seems like AutoConfig doesn't fail when there is no architectures field. It still uses wrong default values for what Mamba2 needs.

The error I'm getting suggests hparams.n_groups defaults to 8, and hparams.intermediate_size defaults to 8192, which are the values for Mamba-Codestral-7B-v0.1, not the Mamba2-370m model I'm actually converting.

It's as if AutoConfig can still detect it's Mamba2, but doesn't use the correct values from the config.

Here's what I've tried
diff --git a/convert_hf_to_gguf.py b/convert_hf_to_gguf.py
index 532cc879d..4a3713e4d 100755
--- a/convert_hf_to_gguf.py
+++ b/convert_hf_to_gguf.py
@@ -5968,12 +5968,21 @@ def get_model_architecture(dir_model: Path, model_type: ModelType, hparams: Any
     hparams = ModelBase.load_hparams(dir_model) if hparams is None else hparams
     text_config = hparams.get("text_config", {})
     vision_config = hparams.get("vision_config", {})
-    arch = hparams["architectures"][0]
+    arch = None
+    if (arches := hparams.get("architectures")) is not None and len(arches) > 0:
+        arch = arches[0]
+    elif "ssm_cfg" in hparams:
+        # TODO: more general extra mappings
+        ssm_mapping = {"Mamba": "MambaForCausalLM", "Mamba2": "Mamba2ForCausalLM"}
+        arch = ssm_mapping.get(hparams["ssm_cfg"].get("layer", "Mamba"), None)
+
     # if "architectures" is found in the sub-config, use that instead
     if model_type == ModelType.TEXT and text_config.get("architectures") is not None:
         arch = text_config["architectures"][0]
     elif model_type == ModelType.VISION and vision_config.get("architectures") is not None:
         arch = vision_config["architectures"][0]
+    if arch is None:
+        raise ValueError("Failed to detect model architecture")
     return arch

And then converting https://huggingface.co/state-spaces/mamba2-370m.

(when using #9126)
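The fallback detection from the diff above can be pulled out and tested in isolation (the ssm_mapping values are those in the patch; this is a sketch, not the merged implementation):

```python
def detect_arch(hparams: dict) -> str:
    # Prefer an explicit "architectures" field when present and non-empty.
    if (arches := hparams.get("architectures")) is not None and len(arches) > 0:
        return arches[0]
    # Fall back to ssm_cfg for Mamba-style configs shipped without one.
    if "ssm_cfg" in hparams:
        ssm_mapping = {"Mamba": "MambaForCausalLM", "Mamba2": "Mamba2ForCausalLM"}
        arch = ssm_mapping.get(hparams["ssm_cfg"].get("layer", "Mamba"))
        if arch is not None:
            return arch
    raise ValueError("Failed to detect model architecture")
```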

Collaborator Author:

> It's as if AutoConfig can still detect it's Mamba2, but doesn't use the correct values from the config.

Hmm this is quite magic tbh, I have no idea how AutoConfig works under the hood.

Another solution, though: we can add an argument to load_hparams, say use_raw_config: bool. Then in __init__, you can rewrite it as self.hparams = load_hparams(..., use_raw_config=True).
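The use_raw_config idea could look roughly like this (a sketch under assumptions: the import guard and fallback-on-failure behaviour are mine, not part of the PR):

```python
import json
from pathlib import Path

def load_hparams(dir_model: Path, use_raw_config: bool = False) -> dict:
    if not use_raw_config:
        try:
            # AutoConfig pre-fills library defaults, but requires the
            # transformers package and a recognized model_type; fall back
            # to the raw file when either is unavailable.
            from transformers import AutoConfig
            return AutoConfig.from_pretrained(dir_model).to_dict()
        except Exception:
            pass
    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
        return json.load(f)
```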



def main() -> None:
args = parse_args()

@@ -5857,16 +5877,15 @@ def main() -> None:

logger.info(f"Loading model: {dir_model.name}")

hparams = ModelBase.load_hparams(dir_model)

if args.mmproj:
if "mmproj" not in fname_out.name:
fname_out = ModelBase.add_prefix_to_filename(fname_out, "mmproj-")

with torch.inference_mode():
output_type = ftype_map[args.outtype]
model_architecture = hparams["architectures"][0]
model_type = ModelType.VISION if args.mmproj else ModelType.TEXT
model_architecture = get_model_architecture(dir_model, model_type)
logger.info(f"Model architecture: {model_architecture}")
try:
model_class = ModelBase.from_model_architecture(model_architecture, model_type=model_type)
except NotImplementedError: