On the llama runner, after the recent GGML bump, a new log line incorrectly reports 0 MiB free. This is a side effect of our earlier patch that removes memory information from the device props. This change adjusts the llama.cpp code to fetch the actual free memory of the active device instead.