Microsoft introduces Windows AI Foundry, a unified platform for local AI development

Microsoft already supports native AI apps on Windows through Windows Copilot Runtime, which offers various AI features via Windows AI APIs and Windows Machine Learning (ML). The models behind Windows Copilot Runtime on Copilot+ PCs run locally and continuously in the background.

At Build 2025, Microsoft is introducing Windows AI Foundry, a unified platform for local AI development on Windows that brings together Windows Copilot Runtime and several new capabilities. Windows AI Foundry will offer ready-to-use AI APIs powered by built-in AI models, tools to customize Windows' built-in models, the ability to bring open-source models from Azure AI Foundry, and an inference runtime that lets developers bring their own models.

App developers depend on a wide range of AI models from various vendors, so Windows AI Foundry will integrate AI models from Azure Foundry Local as well as other model catalogs such as Ollama and NVIDIA NIMs. Microsoft's own Foundry Local model catalog will include optimized AI models that can run across CPUs, GPUs, and NPUs. Developers can use the "winget install Microsoft.FoundryLocal" command to browse, download, and test models based on device compatibility. Once a model is selected, developers can use the Foundry Local SDK to easily integrate Foundry Local into their app, as sketched below.
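As a rough illustration of that workflow, the sketch below assumes Foundry Local serves a downloaded model through an OpenAI-compatible endpoint on the local machine; the endpoint URL and model name are placeholders, not values confirmed by the announcement.

```python
# Hypothetical sketch: calling a model served locally by Foundry Local via an
# OpenAI-compatible endpoint. The URL and model identifier are assumptions
# used only for illustration, not documented values.
from openai import OpenAI

# Point the client at the assumed local Foundry Local service instead of a cloud API.
client = OpenAI(
    base_url="http://localhost:5273/v1",  # assumed local endpoint
    api_key="not-needed-for-local",       # local service; key is a placeholder
)

response = client.chat.completions.create(
    model="phi-3.5-mini",  # assumed name of a locally downloaded model
    messages=[{"role": "user", "content": "Summarize this release note in one sentence."}],
)
print(response.choices[0].message.content)
```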

Windows ML is the built-in AI inferencing runtime in Windows that enables simplified and efficient model deployment across CPUs, GPUs, and NPUs. It is based on DirectML and works on silicon from various vendors, including AMD, Intel, NVIDIA, and Qualcomm. App developers building on Windows ML do not need to worry about future silicon updates, since Windows ML will keep all required dependencies up to date and adapt to new silicon under the hood.
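Windows ML itself is exposed through Windows APIs, but the general pattern of handing a model to a runtime and letting it target the best available hardware can be sketched with ONNX Runtime's DirectML execution provider; the model file and input shape below are placeholders.

```python
# Illustrative only: hardware-accelerated local inference with ONNX Runtime's
# DirectML execution provider, which targets Windows GPUs/NPUs in a similar
# spirit to Windows ML. "model.onnx" and the input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # falls back to CPU if needed
)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed image-sized input

outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```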

Microsoft also announced support for LoRA (Low-Rank Adaptation) for the Phi Silica model. LoRA enables fine-tuning a small subset of the model's parameters with custom data, and this efficient fine-tuning improves performance on certain types of tasks. LoRA is now available in public preview with Windows App SDK 1.8 Experimental 2 on Snapdragon X Series NPUs and will come to Intel and AMD Copilot+ PCs in the coming months.
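The Windows App SDK surfaces LoRA for Phi Silica through its own APIs, but the core idea, training a small set of low-rank adapter weights instead of the full model, can be sketched with the Hugging Face PEFT library; the base model name and hyperparameters below are placeholders, not Phi Silica itself.

```python
# Conceptual LoRA sketch using Hugging Face PEFT, not the Windows App SDK API.
# The base model and hyperparameters are placeholders for illustration.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")  # stand-in for Phi Silica

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the LoRA update
    target_modules=["q_proj", "v_proj"],   # assumed attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapter is trainable
```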

Finally, Microsoft announced new Semantic Search APIs that let developers build AI-powered search experiences over their app data. This AI-powered search can run locally and supports retrieval-augmented generation (RAG). The Semantic Search APIs are available in private preview on all Copilot+ PCs.
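Microsoft has not published the shape of these Semantic Search APIs, but the underlying idea of local semantic retrieval over app data can be sketched with an embedding model and cosine similarity; the embedding model and sample documents below are placeholders.

```python
# Minimal local semantic-search sketch: embed app data, embed the query, and
# rank by cosine similarity. This illustrates the retrieval step of RAG; it is
# not the Windows Semantic Search API. Model and documents are placeholders.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model (assumed choice)

documents = [
    "Invoice from Contoso for March consulting services.",
    "Meeting notes: quarterly planning and hiring goals.",
    "Recipe: slow-cooked vegetable stew.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "Find the consulting invoice"
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(documents[best])  # the retrieved text could then be passed to a local model (RAG)
```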
