# MLX Memory Management

> This package will get consolidated with x/ml/backend/mlx in the future.
## Automatic Tracking
All arrays are automatically tracked when created. On `Eval()`, non-kept arrays are freed.
## API
```go
result := mlx.Matmul(x, w) // arrays automatically tracked
mlx.Eval(result)           // free non-kept, eval result (auto-kept)
```
## Key Functions
- `mlx.Eval(outputs...)` - free non-kept arrays, then evaluate (outputs auto-kept)
- `mlx.AsyncEval(outputs...)` - async version of `Eval` (outputs auto-kept)
- `mlx.Keep(arrays...)` - mark arrays to survive cleanup (for weights, caches)
- `array.Free()` - mark array for cleanup on next `Eval`
## Loop Pattern
```go
for step := 0; step < maxTokens; step++ {
	logits := model.Forward(token, caches)
	oldToken := token
	token = sample(logits)

	// Keep cache state across iterations
	for _, c := range caches {
		mlx.Keep(c.State()...)
	}

	oldToken.Free()      // mark for cleanup
	mlx.AsyncEval(token) // frees old, evals new
}
```
## Notes
- `Eval()` and `AsyncEval()` auto-keep their outputs
- `Free()` marks for cleanup - the actual free happens during the next `Eval`
- Use `Keep()` for weights and cache state that must survive multiple `Eval` cycles
- Arrays created inside compiled closures are managed by MLX, not tracked