Support MiniCPM-2B-128k #6602
Conversation
if (!hparams.tie_lm_head) {
    model.output = ml.create_tensor(ctx_output_split, tn(LLM_TENSOR_OUTPUT, "weight"), {n_embd, n_vocab}, false);
}
We already handle tied tensors a few lines below. Maybe you simply have to remove the if (model.arch != LLM_ARCH_MINICPM) check and this model would work?
Done.
Hello @ggerganov, we are running into a problem while adapting our new model: evaluation results at context lengths below 4k match vLLM, but at context lengths above 4k they are worse than vLLM's. What could be the reason? We evaluate through examples/server, with the following startup command and request parameters:
./server -m MiniCPM-2B-128k/ggml-model-f16.gguf --chat-template chatml --rope-freq-base 4129032.258 --host 0.0.0.0 -c 12000
request data:
data = {
    "stream": False,
    "n_predict": max_token,
    "temperature": 0.3,
    "stop": ["<|im_end|>", "</s>"],
    "repeat_last_n": 256,
    "repeat_penalty": 1.0,
    "top_k": 40,
    "top_p": 0.5,
    "min_p": 0.05,
    "tfs_z": 1,
    "typical_p": 1,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "mirostat": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "grammar": "",
    "n_probs": 0,
    "min_keep": 0,
    "image_data": [],
    "cache_prompt": True,
    "api_key": "",
    "prompt": f"<|im_start|>user{prompt}<|im_end|><|im_start|>assistant\n",
}
vLLM params:
params_dict = {
    "n": 1,
    "best_of": None,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "repetition_penalty": 1.0,
    "temperature": 0.3,
    "top_p": 0.5,
    "top_k": -1,
    "use_beam_search": False,
    "length_penalty": 1.0,
    "early_stopping": False,
    "stop": ["<|im_end|>", "</s>"],  # duplicate "stop": None entry removed; in a dict literal the later value wins anyway
    "stop_token_ids": None,
    "ignore_eos": False,
    "logprobs": None,
    "prompt_logprobs": None,
    "skip_special_tokens": False,
}
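For reference, the request body above is POSTed to the llama.cpp server's /completion endpoint. A minimal sketch of the client side (host and port taken from the ./server command above; 8080 is the server's default port):

import requests

# Send the completion request defined in `data` above to the running server.
resp = requests.post("http://0.0.0.0:8080/completion", json=data, timeout=600)
resp.raise_for_status()
print(resp.json()["content"])  # generated text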
Why do you use --rope-freq-base 4129032.258 when the config specifies 1e6?
https://huggingface.co/openbmb/MiniCPM-2B-128k/blob/main/config.json#L34
Also, this model seems to use some rope scaling:
https://huggingface.co/openbmb/MiniCPM-2B-128k/blob/main/config.json#L25
You need to apply the same thing when starting the server.
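A corrected launch along these lines might look as follows (an untested sketch: the base value comes from the linked config, the remaining flags from the command quoted above):

./server -m MiniCPM-2B-128k/ggml-model-f16.gguf --chat-template chatml --rope-freq-base 1000000 --host 0.0.0.0 -c 12000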
The model currently uses DynamicNTKScalingRotaryEmbedding. How should I pass the corresponding parameters?
It can run up to 64k without NTK scaling.
I think we don't support it. Anyway, isn't 64k context length enough?
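For context, dynamic NTK scaling recomputes the RoPE base on the fly once the sequence exceeds the trained window. A Python sketch of the formula used by Hugging Face's DynamicNTKScalingRotaryEmbedding (llama.cpp only exposes static settings such as --rope-freq-base and --rope-scaling):

def dynamic_ntk_base(base: float, seq_len: int, max_pos: int,
                     factor: float, head_dim: int) -> float:
    # Within the trained context window the base is left unchanged.
    if seq_len <= max_pos:
        return base
    # Beyond it, the base grows with seq_len so the rotary frequencies stretch.
    return base * ((factor * seq_len / max_pos) - (factor - 1)) ** (head_dim / (head_dim - 2))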
OK, thanks for your quick reply.
@@ -1548,6 +1548,8 @@ def set_gguf_parameters(self):
        self.gguf_writer.add_head_count_kv(self.hparams["num_key_value_heads"])
        self.gguf_writer.add_layer_norm_rms_eps(self.hparams["rms_norm_eps"])
        self.gguf_writer.add_file_type(self.ftype)
+       if "tie_lm_head" in self.hparams:
The name should be tie_word_embeddings in Hugging Face's config.
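In code, the reviewer's point is that the diff checks a key that never appears in Hugging Face configs. A sketch of the corrected lookup (the GGUF metadata key below is hypothetical, used only for illustration):

# "tie_word_embeddings" is the standard Hugging Face flag; "tie_lm_head" never matches.
if self.hparams.get("tie_word_embeddings", False):
    self.gguf_writer.add_bool("minicpm.tie_lm_head", True)  # hypothetical key name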
Moved to #6919.
Support https://huggingface.co/openbmb/MiniCPM-2B-128k.