Prompty simplifies managing AI prompts by enabling easy testing, templating, and configuration. Paired with Microsoft’s Foundry Local, it lets developers run AI models locally for better performance, privacy, and cost efficiency, streamlining prompt testing and switching between models.

Boost Your AI Development with Prompty and Foundry Local
If you’re diving into AI app development, managing prompts efficiently is key. Microsoft’s latest tools, Prompty and Foundry Local, make this easier than ever. Let’s break down what’s new and why you should care.
What’s New: Foundry Local Makes AI Models Run Locally
At Build ’25, Microsoft unveiled Foundry Local, a game-changer for developers. This tool lets you run AI models directly on your device, no cloud needed. The perks? Faster performance, enhanced privacy, and lower costs.
“Foundry Local offers developers several benefits, including performance, privacy, and cost savings.”
Running models locally means lower latency and more control over sensitive data. It’s perfect for AI apps that demand speed and security.
Why Prompty is a Must-Have for Your AI Prompts
Managing prompts can get messy, especially when tweaking them during development. Prompty steps in as a powerful prompt manager. It stores prompts in separate files, so you can test and update without touching your main code.
Plus, Prompty supports templating. This means your prompts can dynamically adjust based on context or user input. It’s a neat way to keep your AI responses sharp and relevant.
“With Prompty, you store your prompts in separate files, making it easy to test and adjust them without changing your code.”
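To see what that looks like in practice, here’s a minimal sketch of a templated .prompty file. The name, description, and sample question below are illustrative rather than from the article; the {{question}} placeholder is filled in from the sample input, or from your application at run time:

---
# Illustrative example; adjust names and values for your own project
name: SupportReply
description: Drafts a short reply to a customer question
model:
  api: chat
sample:
  question: How do I reset my password?
---
system:
You are a concise, friendly support assistant.

user:
{{question}}

Because the prompt lives in its own file, you can rewrite the system message or the template without touching application code.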
How to Use Prompty with Foundry Local: Step-by-Step
Getting started is straightforward. First, install the Prompty extension for Visual Studio Code, then install Foundry Local itself.
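As a rough sketch, on Windows both installs can be done from the command line. The winget package ID and VS Code extension ID below are assumptions based on current documentation, so verify them if your setup differs:

# Install Foundry Local (Windows; macOS installs via Homebrew instead)
winget install Microsoft.FoundryLocal

# Install the Prompty extension for Visual Studio Code
code --install-extension ms-toolsai.prompty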
Next, launch Foundry Local from the command line with foundry service start, and note the URL it listens on, such as http://localhost:5272.
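On the command line, that step looks roughly like this (the status subcommand is an extra step beyond what the article shows, but it’s a handy way to confirm the endpoint):

# Start the Foundry Local inference service
foundry service start

# Confirm the service is running and note the endpoint it listens on
foundry service status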
Create a new Prompty configuration for Foundry Local. This keeps your settings organized and lets you switch easily between different AI hosts.
In your settings.json file, add Foundry Local’s configuration under prompty.modelConfigurations. For example:
{
  "name": "Phi-4-mini-instruct-generic-gpu",
  "type": "openai",
  "api_key": "local",
  "base_url": "http://localhost:5272/v1"
}
Make sure the URL matches your Foundry Local instance’s port.
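Note that prompty.modelConfigurations holds a list of model entries, so in context the full setting looks something like this (assuming the same values as above):

{
  "prompty.modelConfigurations": [
    {
      "name": "Phi-4-mini-instruct-generic-gpu",
      "type": "openai",
      "api_key": "local",
      "base_url": "http://localhost:5272/v1"
    }
  ]
}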
Testing Your Prompts Made Simple
With the configuration set, open your .prompty file, select the Foundry Local config, and press F5 to test your prompt.
The first run might take a few seconds as the model loads. After that, you’ll see responses directly in your output pane. This quick feedback loop speeds up prompt tuning significantly.
Why This Matters
Combining Prompty with Foundry Local means you get local AI inference with easy prompt management. You can develop smarter, faster, and more private AI apps without juggling multiple tools.
In short, this duo streamlines AI prompt testing and model deployment on your own device.
Final Thoughts
Whether you’re building chatbots, assistants, or other AI-powered apps, Prompty and Foundry Local are worth exploring. They simplify prompt workflows and bring AI models closer to your users.
So, if you want better performance and control, give this setup a try. Your AI projects will thank you.
From the Microsoft Developer Community Blog.