llama.cpp provides inference of several LLM models in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary could trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama.cpp/src/vocab.cpp (llama_vocab::impl::token_to_piece()) casts a very large size_t token length to int32_t, so the length check (if (length < (int32_t)size)) is bypassed. memcpy is then still called with the original oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potentially code execution. This issue has been patched in version b5662.
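For illustration, the sketch below reproduces the flawed pattern described above in a minimal, self-contained form. It is not the actual llama.cpp code; the function name try_copy_sketch and the surrounding logic are simplified assumptions. The point it demonstrates is that a size_t length greater than INT32_MAX wraps to a negative int32_t, so the bounds check passes while memcpy still receives the full oversized length.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Hypothetical simplification of the vulnerable pattern; the real helper is
// _try_copy inside llama_vocab::impl::token_to_piece(). `size` is the token
// piece length taken from the model file, `length` is the caller's buffer
// capacity.
static int32_t try_copy_sketch(const char * token, size_t size,
                               char * buf, int32_t length) {
    // BUG: if size > INT32_MAX, (int32_t)size is negative, so this check
    // passes even though the buffer is far too small.
    if (length < (int32_t) size) {
        return -(int32_t) size; // "buffer too small" signal to the caller
    }
    memcpy(buf, token, size);   // still uses the full size_t -> overflow
    return (int32_t) size;
}

int main() {
    // Demonstrate only the truncation; the bad memcpy is not executed here.
    size_t huge = (size_t) INT32_MAX + 2;           // attacker-sized length
    printf("(int32_t)huge = %d\n", (int32_t) huge); // negative after truncation
    (void) try_copy_sketch; // shown for reference, deliberately not invoked
    return 0;
}
```

A typical remediation for this class of bug is to compare without the narrowing cast, for example rejecting negative lengths and then comparing as size_t ((size_t) length < size); the exact change shipped in b5662 is in the patch commit linked below.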
References
Link | Resource
---|---
https://github.com/ggml-org/llama.cpp/commit/3cfbbdb44e08fd19429fed6cc85b982a91f0efd5 | Patch
https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8wwf-w4qm-gpqr | Mitigation, Vendor Advisory
Configurations
cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:* (versions before b5662)
History
27 Aug 2025, 13:48
Type | Values Removed | Values Added
---|---|---
CPE | | cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*
Summary | |
References | | https://github.com/ggml-org/llama.cpp/commit/3cfbbdb44e08fd19429fed6cc85b982a91f0efd5 - Patch
References | | https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8wwf-w4qm-gpqr - Mitigation, Vendor Advisory
First Time | | Ggml, Ggml llama.cpp
17 Jun 2025, 20:15
Type | Values Removed | Values Added
---|---|---
New CVE | |
Information
Published : 2025-06-17 20:15
Updated : 2025-08-27 13:48
NVD link : CVE-2025-49847
Mitre link : CVE-2025-49847
CVE.ORG link : CVE-2025-49847
Products Affected
ggml
- llama.cpp