README.md (+3, -2)
```diff
@@ -16,8 +16,9 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ## Hot topics
 
-- **How to use [MTLResidencySet](https://developer.apple.com/documentation/metal/mtlresidencyset?language=objc) to keep the GPU memory active?** https://github.com/ggml-org/llama.cpp/pull/11427
-- **VS Code extension for FIM completions:** https://github.com/ggml-org/llama.vscode
+- **GGML developer experience survey (organized and reviewed by NVIDIA):** [link](https://forms.gle/Gasw3cRgyhNEnrwK9)
+- A new binary `llama-mtmd-cli` is introduced to replace `llava-cli`, `minicpmv-cli` and `gemma3-cli` https://github.com/ggml-org/llama.cpp/pull/13012, `libllava` will be deprecated
+- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
 - Universal [tool call support](./docs/function-calling.md) in `llama-server` https://github.com/ggml-org/llama.cpp/pull/9639
 - Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
```
SECURITY.md (+2, -1)
```diff
@@ -40,7 +40,8 @@ To protect sensitive data from potential leaks or unauthorized access, it is cru
 ### Untrusted environments or networks
 
 If you can't run your models in a secure and isolated environment or if it must be exposed to an untrusted network, make sure to take the following security precautions:
-* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value
+* Do not use the RPC backend, [rpc-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/rpc) and [llama-server](https://github.com/ggml-org/llama.cpp/tree/master/examples/server) functionality (see https://github.com/ggml-org/llama.cpp/pull/13061).
+* Confirm the hash of any downloaded artifact (e.g. pre-trained model weights) matches a known-good value.
 * Encrypt your data if sending it over the network.
```
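The hash check above amounts to computing a SHA-256 digest of the downloaded file and comparing it to a published known-good value. As a minimal standalone sketch (not part of llama.cpp; it assumes OpenSSL's libcrypto is available and is linked with `-lcrypto`):

```cpp
// sha256_check.cpp: minimal sketch of artifact hash verification.
// Not a llama.cpp tool; any equivalent SHA-256 utility serves the same purpose.
#include <openssl/evp.h>
#include <cstdio>
#include <fstream>
#include <string>

// Compute the SHA-256 of a file as a lowercase hex string ("" on I/O error).
static std::string sha256_hex(const std::string & path) {
    std::ifstream in(path, std::ios::binary);
    if (!in) return "";
    EVP_MD_CTX * ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);
    char buf[1 << 16];
    // read() may fail on the final partial chunk while still filling gcount()
    while (in.read(buf, sizeof(buf)) || in.gcount() > 0) {
        EVP_DigestUpdate(ctx, buf, (size_t) in.gcount());
    }
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int len = 0;
    EVP_DigestFinal_ex(ctx, md, &len);
    EVP_MD_CTX_free(ctx);
    std::string hex;
    char tmp[3];
    for (unsigned int i = 0; i < len; i++) {
        snprintf(tmp, sizeof(tmp), "%02x", md[i]);
        hex += tmp;
    }
    return hex;
}

int main(int argc, char ** argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <expected-sha256-hex>\n", argv[0]);
        return 1;
    }
    const std::string actual = sha256_hex(argv[1]);
    const bool ok = !actual.empty() && actual == argv[2];
    printf("%s\n", ok ? "OK" : "MISMATCH: do not load this file");
    return ok ? 0 : 1;
}
```

The point is to reject the artifact on mismatch before it is ever loaded.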
The same changeset also moves this doc comment, which describes how a GGUF file is resolved from a Hugging Face repo tag, unchanged to a new location in the source:

```diff
- * Allow getting the HF file from the HF repo with tag (like ollama), for example:
- * - bartowski/Llama-3.2-3B-Instruct-GGUF:q4
- * - bartowski/Llama-3.2-3B-Instruct-GGUF:Q4_K_M
- * - bartowski/Llama-3.2-3B-Instruct-GGUF:q5_k_s
- * Tag is optional, default to "latest" (meaning it checks for Q4_K_M first, then Q4, then if not found, return the first GGUF file in repo)
- *
- * Return pair of <repo, file> (with "repo" already having tag removed)
- *
- * Note: we use the Ollama-compatible HF API, but not using the blobId. Instead, we use the special "ggufFile" field which returns the value for "hf_file". This is done to be backward-compatible with existing cache files.
+ * Allow getting the HF file from the HF repo with tag (like ollama), for example:
+ * - bartowski/Llama-3.2-3B-Instruct-GGUF:q4
+ * - bartowski/Llama-3.2-3B-Instruct-GGUF:Q4_K_M
+ * - bartowski/Llama-3.2-3B-Instruct-GGUF:q5_k_s
+ * Tag is optional, default to "latest" (meaning it checks for Q4_K_M first, then Q4, then if not found, return the first GGUF file in repo)
+ *
+ * Return pair of <repo, file> (with "repo" already having tag removed)
+ *
+ * Note: we use the Ollama-compatible HF API, but not using the blobId. Instead, we use the special "ggufFile" field which returns the value for "hf_file". This is done to be backward-compatible with existing cache files.
```
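On the client side, the tag handling that comment describes is a string split plus a default. A minimal sketch of that part (hypothetical helper name, not the repo's actual function; the tag-to-file resolution itself happens through the Ollama-compatible HF API noted above):

```cpp
#include <string>
#include <utility>

// Hypothetical helper: split "repo[:tag]" into <repo, tag>, defaulting the
// tag to "latest" as the doc comment above describes. Mapping the tag to a
// concrete GGUF file is then done server-side via the HF API.
static std::pair<std::string, std::string> split_repo_and_tag(const std::string & hf_repo) {
    // Only a ':' after the last '/' can be a tag separator; this keeps a
    // scheme prefix like "https://" from being mistaken for a tag.
    const size_t pos_slash = hf_repo.find_last_of('/');
    const size_t pos_colon = hf_repo.find_last_of(':');
    if (pos_colon != std::string::npos && (pos_slash == std::string::npos || pos_colon > pos_slash)) {
        return { hf_repo.substr(0, pos_colon), hf_repo.substr(pos_colon + 1) };
    }
    return { hf_repo, "latest" };
}
```

For example, `split_repo_and_tag("bartowski/Llama-3.2-3B-Instruct-GGUF:q4")` yields `{"bartowski/Llama-3.2-3B-Instruct-GGUF", "q4"}`, and with no tag the second element is `"latest"`.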
"Hugging Face model repository; quant is optional, case-insensitive, default to Q4_K_M, or falls back to the first file in the repo if Q4_K_M doesn't exist.\n"
2463
+
"mmproj is also downloaded automatically if available. to disable, add --no-mmproj\n"