How to use structured outputs with Azure OpenAI in Microsoft Foundry
Structured outputs make a model follow a JSON Schema definition that you provide as part of your inference API call. This contrasts with the older JSON mode feature, which guaranteed that valid JSON would be generated but couldn't enforce strict adherence to the supplied schema.
Structured outputs are recommended for function calling, extracting structured data, and building complex multi-step workflows. You can use Pydantic to define object schemas in Python. Depending on which versions of the OpenAI and Pydantic libraries you're running, you might need to upgrade to a newer version. These examples were tested against openai 1.42.0 and pydantic 2.8.2. If you're new to using Microsoft Entra ID for authentication, see How to configure Azure OpenAI in Microsoft Foundry Models with Microsoft Entra ID authentication. Entity extraction is a powerful tool in natural language processing (NLP), enabling applications to identify and categorize data points such as names, dates, or locations from text.
Azure OpenAI Service now supports structured outputs, making entity extraction more efficient by returning results in user-defined formats such as JSON, XML, or tabular data. This article explores how to implement and optimize entity extraction using Azure OpenAI's structured outputs. Traditional entity extraction requires significant post-processing to organize raw outputs into usable formats; Azure OpenAI simplifies this by allowing developers to define the desired output structure within prompts. Define prompt structure: use prompt engineering to guide the model in returning structured outputs.
For example:

- Custom Named Entity Recognition (NER): Use Azure OpenAI alongside Azure Cognitive Services to build domain-specific NER models. For example, extract technical terms from research papers or compliance-related entities from legal documents.
- Validation and post-processing: Implement additional validation checks in your application to handle edge cases where the model's output might deviate from the expected structure.
This document refers to the Microsoft Foundry (classic) portal. View the Microsoft Foundry (new) documentation to learn about the new portal. The Azure AI Inference beta SDK is deprecated and will be retired on May 30, 2026. Switch to the generally available OpenAI/v1 API with a stable OpenAI SDK.
Follow the migration guide to switch to OpenAI/v1, using the SDK for your preferred programming language. In this article, you explore several examples of extracting different types of entities.
These examples demonstrate how to create an object schema and get a response from the Azure OpenAI model, using Python and the Azure OpenAI structured outputs mode. This article uses one or more AI app templates for examples and guidance. AI app templates are well-maintained, easy-to-deploy reference implementations that provide a high-quality starting point for your AI apps. The sample provides everything you need, including the infrastructure and Python files to set up an Azure OpenAI gpt-4o model deployment.
You can then use it to perform entity extraction with the Azure OpenAI structured outputs mode and the Python OpenAI SDK. This guide demonstrates how to use Azure OpenAI with instructor for structured outputs. Azure OpenAI provides the same powerful models as OpenAI, with enterprise-grade security and compliance features through Microsoft Azure. You can use the same installation as for OpenAI, since the default openai package ships with an AzureOpenAI client. First, install the required dependencies. Next, make sure that you've enabled Azure OpenAI in your Azure account and have a deployment for the model you'd like to use.
Here is a guide to get started. Once you've done so, you'll have an endpoint and an API key with which to configure the client.
This tutorial step shows you how to produce structured output with an agent built on the Azure OpenAI Chat Completion service. Not all agent types support structured output natively; the ChatClientAgent supports it when used with compatible chat clients. For prerequisites and installing NuGet packages, see the Create and run a simple agent step in this tutorial.
This article provides details on the inference REST API endpoints for Azure OpenAI. Managing and interacting with Azure OpenAI models and resources is divided across three primary API surfaces: control plane, data plane authoring, and data plane inference. Each API surface/specification encapsulates a different set of Azure OpenAI capabilities, and each API has its own unique set of preview and stable/generally available (GA) releases.
Preview releases currently tend to follow a monthly cadence.
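For orientation, data-plane inference requests target per-deployment URLs of roughly the following shape. The resource name, deployment name, and API version below are placeholders; this helper only illustrates the URL format, not an actual call:

```python
def chat_completions_url(resource: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI inference routes requests to a named deployment and
    # requires an explicit api-version query parameter.
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = chat_completions_url("my-resource", "gpt-4o", "2024-10-21")
```

The mandatory `api-version` parameter is how the preview and GA release cadences described above surface in practice: each request pins itself to one specific release.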