Foundry Local
Run AI models locally on your device. Foundry Local provides on-device inference with complete data privacy and no Azure subscription required.
Run Models Locally
Free to Use
Data Privacy
Run Microsoft AI locally with complete control
Foundry Local brings the power of Azure AI to your environment, with flexible deployment options and enterprise-grade security.
On-Device Inference
Run AI models directly on your device, with no cloud dependency and no Azure subscription required
Optimized Performance
Powered by ONNX Runtime with hardware acceleration for CPUs, GPUs, and NPUs
OpenAI-Compatible API
Integrate easily with existing applications using familiar OpenAI API patterns
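Because the local service exposes an OpenAI-compatible endpoint, the standard openai Python client can talk to it directly. The sketch below illustrates the pattern; the base URL (including the port) and the model alias are assumptions, so substitute the endpoint and model reported by your running Foundry Local service.

```python
# Minimal sketch: calling a locally running Foundry Local service through the
# standard OpenAI Python client. The port and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5273/v1",  # assumed local endpoint; use the port your service reports
    api_key="not-needed-locally",         # local inference does not require an Azure key
)

response = client.chat.completions.create(
    model="phi-3.5-mini",  # example alias; substitute a model you have downloaded locally
    messages=[{"role": "user", "content": "Summarize what on-device inference means."}],
)
print(response.choices[0].message.content)
```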
Multi-Language SDKs
Get started quickly with SDKs for Python, JavaScript, C#, and Rust
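As one example of the SDK path, the sketch below uses the Python package's FoundryLocalManager to start the service, load a model, and hand the resulting endpoint to the OpenAI client. It assumes the foundry-local-sdk package and the phi-3.5-mini alias; exact names and behavior may differ by SDK version, so consult the SDK documentation for your language.

```python
# Sketch, assuming the foundry-local-sdk Python package.
import openai
from foundry_local import FoundryLocalManager

# Example model alias for illustration; substitute one available in your local catalog.
alias = "phi-3.5-mini"

# The manager starts the local service if needed and ensures the model is available.
manager = FoundryLocalManager(alias)

# Point the OpenAI client at the locally managed endpoint.
client = openai.OpenAI(base_url=manager.endpoint, api_key=manager.api_key)

response = client.chat.completions.create(
    model=manager.get_model_info(alias).id,
    messages=[{"role": "user", "content": "Why run models on-device?"}],
)
print(response.choices[0].message.content)
```

The same flow applies in the other SDK languages: resolve a model through the local manager, then reuse your existing OpenAI-style client code against the local endpoint.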