Metadata-Version: 2.4
Name: azure-ai-projects
Version: 2.1.0
Summary: Microsoft Corporation Azure AI Projects Client Library for Python
Author-email: Microsoft Corporation <azpysdkhelp@microsoft.com>
License-Expression: MIT
Project-URL: repository, https://aka.ms/azsdk/azure-ai-projects-v2/python/code
Keywords: azure,azure sdk
Classifier: Development Status :: 5 - Production/Stable
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: isodate>=0.6.1
Requires-Dist: azure-core>=1.37.0
Requires-Dist: typing-extensions>=4.11
Requires-Dist: azure-identity>=1.15.0
Requires-Dist: openai>=2.8.0
Requires-Dist: azure-storage-blob>=12.15.0
Dynamic: license-file

# Azure AI Projects client library for Python

The AI Projects client library (in preview) is part of the Microsoft Foundry SDK, and provides easy access to
resources in your Microsoft Foundry Project. Use it to:

* **Create and run Agents** using methods on the `.agents` client property.
* **Enhance Agents with specialized tools**:
  * Agent-to-Agent (A2A) (Preview)
  * Azure AI Search
  * Azure Functions
  * Bing Custom Search (Preview)
  * Bing Grounding
  * Browser Automation (Preview)
  * Code Interpreter
  * Computer Use (Preview)
  * File Search
  * Function Tool
  * Image Generation
  * Memory Search (Preview)
  * Microsoft Fabric (Preview)
  * Microsoft SharePoint (Preview)
  * Model Context Protocol (MCP)
  * OpenAPI
  * Web Search
  * Web Search (Preview)
* **Get an OpenAI client** using the `.get_openai_client()` method to run Responses, Conversations, Evaluations and Fine-Tuning operations with your Agent.
* **Manage memory stores (preview)** for Agent conversations, using `.beta.memory_stores` operations.
* **Explore additional evaluation tools (some in preview)** to assess the performance of your generative AI application, using `.evaluation_rules`,
`.beta.evaluation_taxonomies`, `.beta.evaluators`, `.beta.insights`, and `.beta.schedules` operations.
* **Run Red Team scans (preview)** to identify risks associated with your generative AI application, using `.beta.red_teams` operations.
* **Fine-tune** AI Models on your data.
* **Enumerate AI Models** deployed to your Foundry Project using `.deployments` operations.
* **Enumerate connected Azure resources** in your Foundry project using `.connections` operations.
* **Upload documents and create Datasets** to reference them using `.datasets` operations.
* **Create and enumerate Search Indexes** using `.indexes` operations.

The client library uses version `v1` of the Microsoft Foundry [data plane REST APIs](https://aka.ms/azsdk/azure-ai-projects-v2/api-reference-v1).

[Product documentation](https://aka.ms/azsdk/azure-ai-projects-v2/product-doc)
| [Samples][samples]
| [API reference](https://aka.ms/azsdk/azure-ai-projects-v2/python/api-reference)
| [Package (PyPI)](https://aka.ms/azsdk/azure-ai-projects-v2/python/package)
| [SDK source code](https://aka.ms/azsdk/azure-ai-projects-v2/python/code)
| [Release history](https://aka.ms/azsdk/azure-ai-projects-v2/python/release-history)

## Reporting issues

To report an issue with the client library, or request additional features, please open a [GitHub issue here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-projects" in the title or content.

## Getting started

### Prerequisites

* Python 3.9 or later.
* An [Azure subscription][azure_sub].
* A [project in Microsoft Foundry](https://learn.microsoft.com/azure/foundry/how-to/create-projects).
* A Foundry project endpoint URL of the form `https://your-ai-services-account-name.services.ai.azure.com/api/projects/your-project-name`. You can find it on your Microsoft Foundry Project home page. The code below assumes the environment variable `FOUNDRY_PROJECT_ENDPOINT` is defined to hold this value.
* To authenticate using an API key, you will need the "Project API key" shown on your Microsoft Foundry Project home page.
* To authenticate using Entra ID, your application needs an object that implements the [TokenCredential](https://learn.microsoft.com/python/api/azure-core/azure.core.credentials.tokencredential) interface. Code samples here use [DefaultAzureCredential](https://learn.microsoft.com/python/api/azure-identity/azure.identity.defaultazurecredential). To get that working, you will need:
  * An appropriate role assignment. See [Role-based access control in Microsoft Foundry portal](https://learn.microsoft.com/azure/foundry/concepts/rbac-foundry). Role assignment can be done via the "Access Control (IAM)" tab of your Azure AI Project resource in the Azure portal.
  * [Azure CLI](https://learn.microsoft.com/cli/azure/install-azure-cli) installed.
  * To be logged in to your Azure account (run `az login`).

### Install the package

```bash
pip install azure-ai-projects
```

Verify that you have version 2.0.0 or above installed by running:

```bash
pip show azure-ai-projects
```

## Key concepts

### Create and authenticate the client with Entra ID

Entra ID is currently the only authentication method supported by the client.

To construct a synchronous client using a context manager:

```python
import os
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

with (
    DefaultAzureCredential() as credential,
    AIProjectClient(endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"], credential=credential) as project_client,
):
    # Use `project_client` here, for example to call `project_client.get_openai_client()`
    ...
```

To construct an asynchronous client, install the additional package [aiohttp](https://pypi.org/project/aiohttp/):

```bash
pip install aiohttp
```

and run:

```python
import os
import asyncio
from azure.ai.projects.aio import AIProjectClient
from azure.identity.aio import DefaultAzureCredential

async def main() -> None:
    async with (
        DefaultAzureCredential() as credential,
        AIProjectClient(endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"], credential=credential) as project_client,
    ):
        # Use `project_client` here, for example to call `project_client.get_openai_client()`
        ...


asyncio.run(main())
```

## Examples

For comprehensive examples covering Agents, tool usage, evaluation, fine-tuning, datasets, indexes, and more, see:

* **[Microsoft Foundry Agents overview](https://learn.microsoft.com/azure/foundry/agents/overview)** — concepts, setup, and quickstarts.
* **[Runtime components](https://learn.microsoft.com/azure/foundry/agents/concepts/runtime-components?tabs=python)** — deep-dive into agent architecture.
* **[Tool catalog](https://learn.microsoft.com/azure/foundry/agents/concepts/tool-catalog)** — all available tools and agent capabilities.
* **[SDK samples folder][samples]** — fully runnable Python code for synchronous and asynchronous clients covering all operations below.

The sections below cover SDK-specific behaviors (authentication variants, exception handling, logging, tracing) that are not documented in the above Learn pages.

### Performing Responses operations using OpenAI client

Use the `.get_openai_client()` method to obtain an authenticated [OpenAI](https://github.com/openai/openai-python) client and run Responses, Conversations, Evaluations, Files, and Fine-Tuning operations. See the **responses**, **agents**, **evaluations**, **files**, and **finetuning** folders in the [samples][samples] for complete working examples.

The code below assumes the environment variable `FOUNDRY_MODEL_NAME` is defined. This is the deployment name of an AI model in your Foundry Project, shown in the "Build" menu under "Models" (first column of the "Deployments" table).

<!-- SNIPPET:sample_responses_basic.responses -->

```python
with project_client.get_openai_client() as openai_client:
    response = openai_client.responses.create(
        model=os.environ["FOUNDRY_MODEL_NAME"],
        input="What is the size of France in square miles?",
    )
    print(f"Response output: {response.output_text}")

    response = openai_client.responses.create(
        model=os.environ["FOUNDRY_MODEL_NAME"],
        input="And what is the capital city?",
        previous_response_id=response.id,
    )
    print(f"Response output: {response.output_text}")
```

<!-- END SNIPPET -->

See the **responses** folder in the [samples][samples] for additional samples including streaming responses.
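
For orientation, here is a minimal streaming sketch. It relies on the standard OpenAI Python streaming interface (the `stream=True` flag, the `response.output_text.delta` event type, and the `delta` attribute come from the `openai` package); the samples above contain complete, up-to-date code.

```python
with project_client.get_openai_client() as openai_client:
    stream = openai_client.responses.create(
        model=os.environ["FOUNDRY_MODEL_NAME"],
        input="Write a one-sentence summary of the Eiffel Tower.",
        stream=True,
    )
    for event in stream:
        # Print text deltas as they arrive; other event types are ignored in this sketch.
        if event.type == "response.output_text.delta":
            print(event.delta, end="", flush=True)
    print()
```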

### Agents, Tools, Evaluation, Deployments, Connections, Datasets, Indexes, Files, and Fine-Tuning

Full descriptions and working code for all of the above are available in:

| Topic | Learn documentation | Samples folder |
|---|---|---|
| Agents (create, run, stream) | [Agents overview](https://learn.microsoft.com/azure/foundry/agents/overview) | `samples/agents/` |
| Hosted agents (preview) | [Hosted agents concepts](https://learn.microsoft.com/azure/foundry/agents/concepts/hosted-agents), [Deploy your first hosted agent](https://learn.microsoft.com/azure/foundry/agents/quickstarts/quickstart-hosted-agent) | `samples/hosted_agents/` |
| Agents tools (Code Interpreter, File Search, MCP, OpenAPI, Bing, A2A, etc.) | [Tool catalog](https://learn.microsoft.com/azure/foundry/agents/concepts/tool-catalog) | `samples/agents/tools/` |
| Evaluation | [Evaluate agents](https://learn.microsoft.com/azure/foundry/observability/how-to/evaluate-agent) | `samples/evaluations/` |
| Deployments | [Deployment types](https://learn.microsoft.com/azure/foundry/foundry-models/concepts/deployment-types) | `samples/deployments/` |
| Connections | [Connections operations](https://learn.microsoft.com/python/api/overview/azure/ai-projects-readme?view=azure-python#connections-operations) | `samples/connections/` |
| Datasets | [Dataset operations](https://learn.microsoft.com/python/api/overview/azure/ai-projects-readme?view=azure-python#dataset-operations) | `samples/datasets/` |
| Indexes | [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) | `samples/indexes/` |
| Files (upload, retrieve, list, delete) | [OpenAI Files API](https://platform.openai.com/docs/api-reference/files) | `samples/files/` |
| Fine-tuning | [Fine-Tuning in AI Foundry](https://github.com/microsoft-foundry/fine-tuning) | `samples/finetuning/` |

### Hosted agents (preview)

Hosted agents let you run your own containerized agent runtime while using Microsoft Foundry for managed hosting and scaling.

For product guidance, see:

* [Hosted agents concepts](https://learn.microsoft.com/azure/foundry/agents/concepts/hosted-agents)
* [Deploy your first hosted agent](https://learn.microsoft.com/azure/foundry/agents/quickstarts/quickstart-hosted-agent)

For SDK usage examples in this package, see `samples/hosted_agents/`, including CRUD, file upload/download, and skills scenarios.

## Tracing

### Experimental Feature Gate

**Important:** GenAI tracing instrumentation is an experimental preview feature. Spans, attributes, and events may be modified in future versions. To use it, you must explicitly opt in by setting the environment variable:

```bash
AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING=true
```

This environment variable must be set before calling `AIProjectInstrumentor().instrument()`. If the environment variable is not set or is set to any value other than `true` (case-insensitive), tracing instrumentation will not be enabled and a warning will be logged.

Only enable this feature after reviewing your requirements and understanding that the tracing behavior may change in future versions.
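
For example, a minimal sketch that opts in programmatically before instrumenting. The import path of `AIProjectInstrumentor` shown here is an assumption about the package layout; setting the variable in your shell before launching Python works just as well.

```python
import os

# Opt in to the experimental GenAI tracing before instrumenting.
os.environ["AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING"] = "true"

from azure.ai.projects.telemetry import AIProjectInstrumentor  # assumed import path

AIProjectInstrumentor().instrument()
```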

### Getting Started with Tracing

You can add an Application Insights Azure resource to your Microsoft Foundry project. See the Tracing tab in your Microsoft Foundry project. If one was enabled, you can get the Application Insights connection string, configure your AI Projects client, and observe traces in Azure Monitor. Typically, you might want to start tracing before you create a client or Agent.

For tracing concepts in Microsoft Foundry, see [Trace an agent](https://learn.microsoft.com/azure/foundry/observability/concepts/trace-agent-concept).

### Installation

Make sure to install OpenTelemetry and the Azure SDK tracing plugin via

```bash
pip install "azure-ai-projects>=2.0.0b4" opentelemetry-sdk azure-core-tracing-opentelemetry azure-monitor-opentelemetry
```

You will also need an exporter to send telemetry to your observability backend. You can print traces to the console or use a local viewer such as [Aspire Dashboard](https://learn.microsoft.com/dotnet/aspire/fundamentals/dashboard/standalone?tabs=bash).

To connect to Aspire Dashboard or another OpenTelemetry-compatible backend, install the OTLP exporter:

```bash
pip install opentelemetry-exporter-otlp
```
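
For reference, a minimal OpenTelemetry setup that exports spans over OTLP might look like the sketch below. The `http://localhost:4317` endpoint is only an assumption matching Aspire Dashboard's default OTLP gRPC port; point it at your own collector as needed.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Send spans to a local OTLP-compatible collector (for example, Aspire Dashboard).
exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(tracer_provider)
```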

### How to enable tracing

**Remember:** Before enabling tracing, ensure you have set the `AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING=true` environment variable as described in the [Experimental Feature Gate](#experimental-feature-gate) section.

Here is a code sample that shows how to enable Azure Monitor tracing:

<!-- SNIPPET:sample_agent_basic_with_azure_monitor_tracing.setup_azure_monitor_tracing -->

```python
# Enable Azure Monitor tracing
application_insights_connection_string = project_client.telemetry.get_application_insights_connection_string()
configure_azure_monitor(connection_string=application_insights_connection_string)
```

<!-- END SNIPPET -->

You may also want to create a span for your scenario:

<!-- SNIPPET:sample_agent_basic_with_azure_monitor_tracing.create_span_for_scenario -->

```python
tracer = trace.get_tracer(__name__)
scenario = os.path.basename(__file__)

with tracer.start_as_current_span(scenario):
```

<!-- END SNIPPET -->

See the full sample in file `\agents\telemetry\sample_agent_basic_with_azure_monitor_tracing.py` in the [Samples][samples] folder.

**Note:** In order to view the traces in the Microsoft Foundry portal, the agent ID should be passed in as part of the response generation request.

In addition, you might find it helpful to see the tracing logs in the console. Remember to set `AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING=true` before running the following code:

<!-- SNIPPET:sample_agent_basic_with_console_tracing.setup_console_tracing -->

```python
# Setup tracing to console
# Requires opentelemetry-sdk
span_exporter = ConsoleSpanExporter()
tracer_provider = TracerProvider()
tracer_provider.add_span_processor(SimpleSpanProcessor(span_exporter))
trace.set_tracer_provider(tracer_provider)
tracer = trace.get_tracer(__name__)

# Enable instrumentation with content tracing
AIProjectInstrumentor().instrument()
```

<!-- END SNIPPET -->

See the full sample in file `\agents\telemetry\sample_agent_basic_with_console_tracing.py` in the [Samples][samples] folder.

### Enabling trace context propagation

Trace context propagation allows client-side spans generated by the Projects SDK to be correlated with server-side spans from Azure OpenAI and other Azure services. When enabled, the SDK automatically injects W3C Trace Context headers (`traceparent` and `tracestate`) into HTTP requests made by OpenAI clients obtained via `get_openai_client()`.

This feature ensures that all operations within a distributed trace share the same trace ID, providing end-to-end visibility across your application and Azure services in your observability backend (such as Azure Monitor).

Trace context propagation is **enabled by default** when tracing is enabled (for example through `configure_azure_monitor` or the `AIProjectInstrumentor().instrument()` call). To disable it, set the `AZURE_TRACING_GEN_AI_ENABLE_TRACE_CONTEXT_PROPAGATION` environment variable to `false`, or pass `enable_trace_context_propagation=False` to the `AIProjectInstrumentor().instrument()` call.
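
For example, to disable trace context propagation via the environment variable in a bash shell:

```bash
export AZURE_TRACING_GEN_AI_ENABLE_TRACE_CONTEXT_PROPAGATION=false
```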

**When does the change take effect?**
- Changes to `enable_trace_context_propagation` (whether via `instrument()` or the environment variable) only affect OpenAI clients obtained via `get_openai_client()` **after** the change is applied. Previously acquired clients are unaffected.
- To apply the new setting to all clients, call `AIProjectInstrumentor().instrument(enable_trace_context_propagation=<value>)` before acquiring your OpenAI clients, or re-acquire the clients after making the change.

**Security and Privacy Considerations:**
- **Trace IDs are sent to external services**: The `traceparent` and `tracestate` headers from your client-side originating spans are injected into requests sent to the service. This enables end-to-end distributed tracing, but note that the trace identifier may be shared beyond the initial API call.
- **Enabled by Default**: If you have privacy or compliance requirements that prohibit sharing trace identifiers with services, disable trace context propagation by setting `enable_trace_context_propagation=False` or the environment variable to `false`.

#### Controlling baggage propagation

When trace context propagation is enabled, you can separately control whether the baggage header is included. By default, only `traceparent` and `tracestate` headers are propagated. To also include the `baggage` header, set the `AZURE_TRACING_GEN_AI_TRACE_CONTEXT_PROPAGATION_INCLUDE_BAGGAGE` environment variable to `true`:
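
```bash
export AZURE_TRACING_GEN_AI_TRACE_CONTEXT_PROPAGATION_INCLUDE_BAGGAGE=true
```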

If no value is provided for the `enable_baggage_propagation` parameter with the `AIProjectInstrumentor().instrument()` call and the environment variable is not set, the value defaults to `false` and baggage is not included.

**Note:** The `enable_baggage_propagation` flag is evaluated dynamically on each request, so changes take effect **immediately** for all clients that have the trace context propagation hook registered. However, the hook is only registered on clients acquired via `get_openai_client()` **while trace context propagation was enabled**. Clients acquired when trace context propagation was disabled will never propagate baggage, regardless of the `enable_baggage_propagation` value.

**Why is baggage propagation separate?**

The baggage header can contain arbitrary key-value pairs added anywhere in your application's trace context. Unlike trace IDs (which are randomly generated identifiers), baggage may contain:

- User identifiers or session information
- Authentication tokens or credentials
- Business-specific data or metadata
- Personally identifiable information (PII)

Baggage is automatically propagated through your entire application's call chain, meaning data added in one part of your application will be included in requests to Azure OpenAI unless explicitly controlled.

**Important Security Considerations:**

- **Review Baggage Contents**: Before enabling baggage propagation, audit what data your application (and any third-party libraries) adds to OpenTelemetry baggage.
- **Sensitive Data Risk**: Baggage is sent to Azure OpenAI and may be logged or processed by Microsoft services. Never add sensitive information to baggage when baggage propagation is enabled.
- **Opt-in by Design**: Baggage propagation is disabled by default (even when trace context propagation is enabled) to prevent accidental exposure of sensitive data.
- **Minimal Propagation**: `traceparent` and `tracestate` headers are generally sufficient for distributed tracing. Only enable baggage propagation if your specific observability requirements demand it.

### Enabling content recording

Content recording controls whether message contents and tool call related details, such as parameters and return values, are captured with the traces. This data may include sensitive user information.

To enable content recording, set the `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` environment variable to `true`. If the environment variable is not set and no value is provided for the content recording parameter in the `AIProjectInstrumentor().instrument()` call, content recording defaults to `false`.

**Important:** The environment variable only controls content recording for built-in traces. When you use custom tracing decorators on your own functions, all parameters and return values are always traced.

### Disabling automatic instrumentation

The AI Projects client library automatically instruments OpenAI responses and conversations operations through `AIProjectInstrumentor`. You can disable this instrumentation by setting the environment variable `AZURE_TRACING_GEN_AI_INSTRUMENT_RESPONSES_API` to `false`. If the environment variable is not set, the responses and conversations APIs are instrumented by default.
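
For example, to disable the automatic instrumentation in a bash shell:

```bash
export AZURE_TRACING_GEN_AI_INSTRUMENT_RESPONSES_API=false
```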

### Tracing Binary Data

Binary data refers to images and files sent to the service as input messages. When content recording is enabled (`OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` set to `true`), only file IDs and filenames are traced by default. To enable full binary data tracing, set `AZURE_TRACING_GEN_AI_INCLUDE_BINARY_DATA` to `true`. In this case:

* **Images**: Image URLs (including data URIs with base64-encoded content) are included
* **Files**: File data is included if sent via the API
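
For example, to opt in to both content recording and full binary data tracing in a bash shell:

```bash
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
export AZURE_TRACING_GEN_AI_INCLUDE_BINARY_DATA=true
```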

**Important:** Binary data can contain sensitive information and may significantly increase trace size. Some trace backends and tracing implementations may have limitations on the maximum size of trace data that can be sent to and/or supported by the backend. Ensure your observability backend and tracing implementation support the expected trace payload sizes when enabling binary data tracing.

### How to trace your own functions

The decorator `trace_function` is provided for tracing your own function calls using OpenTelemetry. By default, the function name is used as the span name. Alternatively, you can provide the span name as a parameter to the decorator.

**Note:** The `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` environment variable does not affect custom function tracing. When you use the `trace_function` decorator, all parameters and return values are always traced by default.

This decorator handles various data types for function parameters and return values, and records them as attributes in the trace span. The supported data types include:

* Basic data types: str, int, float, bool
* Collections: list, dict, tuple, set
  * Special handling for collections:
    * If a collection (list, dict, tuple, set) contains nested collections, the entire collection is converted to a string before being recorded as an attribute.
    * Sets and dictionaries are always converted to strings to ensure compatibility with span attributes.

Object types are omitted, and the corresponding parameter is not traced.

The parameters are recorded in attributes named `code.function.parameter.<parameter_name>`, and the return value is recorded in the attribute `code.function.return.value`.
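
As an illustration, here is a minimal sketch of decorating your own function. The import path of `trace_function` shown below is an assumption about the package layout; the decorator behavior and attribute names are those described above.

```python
from opentelemetry import trace

from azure.ai.projects.telemetry import trace_function  # assumed import path


@trace_function()  # the span name defaults to the function name; pass a string to override it
def convert_temperature(celsius: float) -> float:
    # Parameters and the return value are recorded as span attributes.
    return celsius * 9 / 5 + 32


tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("temperature-conversion"):
    print(convert_temperature(21.0))
```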

#### Adding custom attributes to spans

You can add custom attributes to spans by creating a custom span processor. Here's how to define one:

<!-- SNIPPET:sample_agent_basic_with_console_tracing_custom_attributes.custom_attribute_span_processor -->

```python
class CustomAttributeSpanProcessor(SpanProcessor):
    def __init__(self) -> None:
        pass

    def on_start(self, span: Span, parent_context=None):
        # Add this attribute to all spans
        span.set_attribute("trace_sample.sessionid", "123")

        # Add another attribute only to create_thread spans
        if span.name == "create_thread":
            span.set_attribute("trace_sample.create_thread.context", "abc")

    def on_end(self, span: ReadableSpan):
        # Clean-up logic can be added here if necessary
        pass
```

<!-- END SNIPPET -->

Then add the custom span processor to the global tracer provider:

<!-- SNIPPET:sample_agent_basic_with_console_tracing_custom_attributes.add_custom_span_processor_to_tracer_provider -->

```python
provider = cast(TracerProvider, trace.get_tracer_provider())
provider.add_span_processor(CustomAttributeSpanProcessor())
```

<!-- END SNIPPET -->

See the full sample in file `\agents\telemetry\sample_agent_basic_with_console_tracing_custom_attributes.py` in the [Samples][samples] folder.

### Additional resources

For more information see [Agent tracing overview (preview)](https://learn.microsoft.com/azure/foundry/observability/concepts/trace-agent-concept).

## Troubleshooting

### Exceptions

Client methods that make service calls raise an [HttpResponseError](https://learn.microsoft.com/python/api/azure-core/azure.core.exceptions.httpresponseerror) exception for a non-success HTTP status code response from the service. The exception's `status_code` will hold the HTTP response status code (with `reason` showing the friendly name). The exception's `error.message` contains a detailed message that may be helpful in diagnosing the issue:

```python
from azure.core.exceptions import HttpResponseError

...

try:
    result = project_client.connections.list()
except HttpResponseError as e:
    print(f"Status code: {e.status_code} ({e.reason})")
    print(e.message)
```

For example, when you provide wrong credentials:

```text
Status code: 401 (Unauthorized)
Operation returned an invalid status 'Unauthorized'
```

### Logging

The client uses the standard [Python logging library](https://docs.python.org/3/library/logging.html). The logs include HTTP request and response headers and body, which are often useful when troubleshooting or reporting an issue to Microsoft.

#### Default console logging

To turn on client console logging, define the environment variable `AZURE_AI_PROJECTS_CONSOLE_LOGGING=true` before running your Python script. Authentication bearer tokens are automatically redacted from the log. Your log may contain other sensitive information, so be sure to remove it before sharing the log with others.
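
For example, to turn on console logging in a bash shell:

```bash
export AZURE_AI_PROJECTS_CONSOLE_LOGGING=true
```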

#### Customizing your log

Instead of using the above-mentioned environment variable, you can configure logging yourself and control the log level, format and destination. To log to `stdout`, add the following at the top of your Python script:

```python
import sys
import logging

# Acquire the logger for this client library. Use 'azure' to affect both
# the 'azure.core' and 'azure.ai.projects' libraries.
logger = logging.getLogger("azure")

# Set the desired logging level. logging.INFO or logging.DEBUG are good options.
logger.setLevel(logging.DEBUG)

# Direct logging output to stdout:
handler = logging.StreamHandler(stream=sys.stdout)
# Or direct logging output to a file:
# handler = logging.FileHandler(filename="sample.log")
logger.addHandler(handler)

# Optional: change the default logging format. Here we add a timestamp.
#formatter = logging.Formatter("%(asctime)s:%(levelname)s:%(name)s:%(message)s")
#handler.setFormatter(formatter)
```

By default logs redact the values of URL query strings, the values of some HTTP request and response headers (including `Authorization` which holds the key or token), and the request and response payloads. To create logs without redaction, add `logging_enable=True` to the client constructor:

```python
project_client = AIProjectClient(
    credential=DefaultAzureCredential(),
    endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"],
    logging_enable=True
)
```

Note that the log level must be set to `logging.DEBUG` (see above code). Logs will be redacted with any other log level.

Be sure to protect non-redacted logs to avoid compromising security.

For more information, see [Configure logging in the Azure libraries for Python](https://aka.ms/azsdk/python/logging).

### Reporting issues

To report an issue with the client library, or request additional features, please open a [GitHub issue here](https://github.com/Azure/azure-sdk-for-python/issues). Mention the package name "azure-ai-projects" in the title or content.

## Next steps

Have a look at the [Samples][samples] folder, containing fully runnable Python code for synchronous and asynchronous clients.

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct][code_of_conduct]. For more information, see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

<!-- LINKS -->
[samples]: https://aka.ms/azsdk/azure-ai-projects-v2/python/samples/
[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
[azure_sub]: https://azure.microsoft.com/free/

# Release History

## 2.1.0 (2026-04-20)

### Features Added

* `get_openai_client()` on `AIProjectClient` now takes an optional input argument `agent_name`. If provided, the returned OpenAI
client will use the Agent endpoint as its base URL instead of the Foundry Project endpoint. As Agent endpoints are a preview feature, you
need to set `allow_preview=True` on the `AIProjectClient` constructor.
* New `.beta.agents` sub-client added, with Session operations (these only work with Hosted Agents):
  * `create_session()`
  * `delete_session()`
  * `delete_session_file()`
  * `download_session_file()`
  * `get_session()`
  * `get_session_files()`
  * `list_sessions()`
  * `upload_session_file()`
* Also on `.beta.agents` sub-client, a new method `patch_agent_details()`.
* New `beta.skills` sub-client added, with Skills operations:
  * `create()`
  * `create_from_package()`
  * `delete()`
  * `download()`
  * `get()`
  * `list()`
  * `update()`
* New `beta.toolboxes` sub-client added, with Toolboxes operations:
  * `create_version()`
  * `delete()`
  * `delete_version()`
  * `get()`
  * `get_version()`
  * `list()`
  * `list_versions()`
  * `update()`
* Type hinting support for the OpenAI client operations `.evals.create()` and `.evals.runs.create()`, when you
get the OpenAI client using the `get_openai_client()` method of `AIProjectClient`. This includes new TypedDict
classes to help you author the input to these methods. See the new TypedDict classes `ModelSamplingConfigParam`,
`ToolDescriptionParam`, `AzureAIAgentTargetParam`, `AzureAIModelTargetParam`,
`ResponseRetrievalItemGenerationParams`, `AzureAIResponsesEvalRunDataSource`, `AzureAIDataSourceConfig`,
`TargetCompletionEvalRunDataSource`, `TestingCriterionAzureAIEvaluator`, `AzureAIBenchmarkPreviewEvalRunDataSource`,
`EvalCsvFileIdSource`, `EvalCsvRunDataSource`, `RedTeamEvalRunDataSource`, `TracesPreviewEvalRunDataSource`.


### Breaking Changes

* Tracing: trace context propagation is enabled by default when tracing is enabled.

### Bugs Fixed

* Fix missing type hinting on the OpenAI client returned from the `get_openai_client()` method.

### Sample updates

* Evaluation samples updated to use TypedDicts to specify inputs to `.evals.create()` and `.evals.runs.create()` methods.
* Renamed environment variable `AZURE_AI_PROJECT_ENDPOINT` to `FOUNDRY_PROJECT_ENDPOINT` in all samples.
* Renamed environment variable `AZURE_AI_MODEL_DEPLOYMENT_NAME` to `FOUNDRY_MODEL_NAME` in all samples.
* Renamed environment variable `AZURE_AI_MODEL_AGENT_NAME` to `FOUNDRY_AGENT_NAME` in all samples.
* Added Hosted Agents related samples: `sample_agent_endpoint.py`, `sample_agent_endpoint_async.py`, `sample_sessions_crud.py`, `sample_sessions_crud_async.py`, `sample_sessions_files_upload_download.py`, `sample_sessions_files_upload_download_async.py`, `sample_skills_crud.py`, `sample_skills_crud_async.py`, `sample_skills_upload_and_download.py`, `sample_skills_upload_and_download_async.py`, `sample_toolboxes_crud.py`, and `sample_toolboxes_crud_async.py`.
* Added structured inputs + file upload sample (`sample_agent_structured_inputs_file_upload.py`) demonstrating passing an uploaded file ID to an agent at runtime.
* Added structured inputs + File Search sample (`sample_agent_file_search_structured_inputs.py`) demonstrating configuring File Search tool resources via structured inputs.
* Added structured inputs + Code Interpreter sample (`sample_agent_code_interpreter_structured_inputs.py`) demonstrating passing an uploaded file ID to Code Interpreter via structured inputs.
* Added CSV evaluation sample (`sample_evaluations_builtin_with_csv.py`) demonstrating evaluation with an uploaded CSV dataset.
* Added synthetic data evaluation samples (`sample_synthetic_data_agent_evaluation.py`) and (`sample_synthetic_data_model_evaluation.py`).
* Added Chat Completions basic samples (`sample_chat_completions_basic.py`, `sample_chat_completions_basic_async.py`) demonstrating chat completions calls using `AIProjectClient` + the OpenAI-compatible client.
* Added Toolboxes CRUD samples (`sample_toolboxes_crud.py`, `sample_toolboxes_crud_async.py`) demonstrating `project_client.beta.toolboxes` create/get/update/list/delete.
* Simplified `sample_memory_basic.py` and `sample_agent_memory_search.py` (and their async equivalents) by removing
`options=MemoryStoreDefaultOptions(user_profile_enabled=True, chat_summary_enabled=True)` when constructing `MemoryStoreDefaultDefinition`,
since this is now redundant (it's the service default).


## 2.0.1 (2026-03-12)

### Bugs Fixed

* Fix custom Memory Stores LRO poller operation to add the missing
  required `"Foundry-Features": "MemoryStores=V1Preview"` HTTP request header.

## 2.0.0 (2026-03-06)

First stable release of the client library that uses the Generally Available (GA) version "v1" of the Foundry REST APIs.

### Features Added

* To enable preview (beta) operations, a new optional boolean input argument named `allow_preview` was added
to the constructor of `AIProjectClient`. The caller must set it to `True` to opt in to preview features.
This includes creating a Hosted Agent or Workflow Agent. Methods on the `.beta` sub-client (for example
`.beta.memory_stores.create()`) do not require setting `allow_preview=True`, since it's implied by the sub-client name.
When preview features are enabled, the client library sends the HTTP request header `Foundry-Features`
with the appropriate value in all relevant calls to the service.

### Breaking Changes

* Input argument `foundry_features` was removed from all methods that supported it. Use the new `allow_preview`
argument on the client constructor instead (see above).
* Class `TextResponseFormatConfiguration` renamed to `TextResponseFormat`.
* Class `TextResponseFormatConfigurationResponseFormatText` renamed to `TextResponseFormatText`.
* Class `TextResponseFormatConfigurationResponseFormatJsonObject` renamed to `TextResponseFormatJsonObject`.
* Class `CodeInterpreterContainerAuto` was renamed to `AutoCodeInterpreterToolParam`,
  and has a new optional property `network_policy` of type `ContainerNetworkPolicyParam`.
* Class `ImageGenActionEnum` was renamed to `ImageGenAction`.
* Rename `ToolChoiceParamType.WEB_SEARCH_PREVIEW2025_03_11` to `ToolChoiceParamType.WEB_SEARCH_PREVIEW_2025_03_11`.
* Rename `RankerVersionType.DEFAULT2024_11_15` to `RankerVersionType.DEFAULT_2024_11_15`.
* Rename method `.beta.evaluators.list_latest_versions()` to `.beta.evaluators.list()`.
* Rename property `id` on class `Insight` to `insight_id`.
* Rename property `id` on class `Schedule` to `schedule_id`.
* Rename input argument `id` to `insight_id` in `.beta.insights.get()` method.
* Rename input argument `id` to `schedule_id` in `.beta.schedules` methods.
* Updated datetime-typed fields (`start_time`, `end_time`, `trigger_at`, `trigger_time`, `created_at`, `modified_at`) 
across `CronTrigger`, `RecurrenceTrigger`, `OneTimeTrigger`, `ScheduleRun`, and `EvaluatorVersion` classes from `str`
to `datetime.datetime` with format="rfc3339".

### Other Changes

* The input `items` argument in the methods `.beta.memory_stores.begin_update_memories()` and `.beta.memory_stores.search_memories()`
was changed from type `Optional[List[dict[str, Any]]]` to `Optional[Union[str, ResponseInputParam]]`, where `ResponseInputParam`
is defined in the openai package. This allows passing in, for example, a list of `EasyInputMessageParam`. Import it using
`from openai.types.responses import EasyInputMessageParam`. This is not a breaking change, since the caller
can still pass in `List[dict[str, Any]]`.

## 2.0.0b4 (2026-02-24)

This is the first release that uses the Generally Available (GA) version "v1" of the Foundry REST APIs.

### Features Added

* Tracing: included agent ID in response generation traces when available.
* Tracing: Added support for opt-in trace context propagation.

### Breaking changes

* A Responses call on the OpenAI client (`openai_client.responses.create()`) that uses an Agent reference now needs to specify
`extra_body={"agent_reference": {"name": agent_name, "type": "agent_reference"}}` instead of `extra_body={"agent": {"name": agent_name, "type": "agent_reference"}}`.
* Agent methods `.agents.create()`, `.agents.create_from_manifest()`, `.agents.update()` and `.agents.update_from_manifest()` were removed. Use
the remaining methods `.agents.create_version()` and `.agents.create_version_from_manifest()` instead.
* To align with OpenAI naming conventions, use "Tool" suffix for class names describing Azure tools that are generally available (stable release):
  * Rename class `AzureAISearchAgentTool` to `AzureAISearchTool`.
  * Rename class `AzureFunctionAgentTool` to `AzureFunctionTool`.
  * Rename class `BingGroundingAgentTool` to `BingGroundingTool`.
  * Rename class `OpenApiAgentTool` to `OpenApiTool`.
* To align with OpenAI naming conventions, use "PreviewTool" suffix for class names describing Azure tools in preview:
  * Rename class `A2ATool` to `A2APreviewTool`.
  * Rename class `BingCustomSearchAgentTool` to `BingCustomSearchPreviewTool`.
  * Rename class `BrowserAutomationAgentTool` to `BrowserAutomationPreviewTool`.
  * Rename class `MemorySearchTool` to `MemorySearchPreviewTool`.
  * Rename class `MicrosoftFabricAgentTool` to `MicrosoftFabricPreviewTool`.
  * Rename class `SharepointAgentTool` to `SharepointPreviewTool`.
* Other class renames:
  * Rename class `PromptAgentDefinitionText` to `PromptAgentDefinitionTextOptions`
  * Rename class `EvaluationComparisonRequest` to `InsightRequest`
* To use Workflow Agents, which are still in preview, you now need to set an additional input
argument `foundry_features=FoundryFeaturesOptInKeys.WORKFLOW_AGENTS_V1_PREVIEW` when calling
`.agents.create_version()`.
* To use Hosted Agents, which are still in preview, you now need to set an additional input
argument `foundry_features=FoundryFeaturesOptInKeys.HOSTED_AGENTS_V1_PREVIEW` when calling
`.agents.create_version()`.
* To use `.evaluation_rules.create_or_update()` with `HumanEvaluationPreviewRuleAction`, you now
need to set an additional input argument `foundry_features=FoundryFeaturesOptInKeys.EVALUATIONS_V1_PREVIEW`.
* Operation sets that are still in preview now have the `.beta` sub-client in their call path. For example,
`project_client.memory_stores.create()` has changed to `project_client.beta.memory_stores.create()`.
Similarly for the operation sets `evaluators`, `insights`, `evaluation_taxonomies`, `schedules` and `red_teams`.
* The method `begin_update_memories()` in Memory Stores operations now accepts an optional `items` argument of type `List[dict[str, Any]]`
instead of `List[ItemParam]`, and similarly for `items` in the `search_memories()` method. As a result, around 100 classes
derived from `ItemParam` were removed, as they are no longer used by the client library.
* Tracing instrumentation is an experimental preview feature and now requires an explicit opt-in by setting the environment variable
`AZURE_EXPERIMENTAL_ENABLE_GENAI_TRACING=true`.
* Tracing: workflow actions in conversation item listings are now emitted as "gen_ai.conversation.item" events
(with role="workflow") instead of "gen_ai.workflow.action" events in the list_conversation_items span.
* Tracing: response generation span names changed from "responses {model_name}" to "chat {model_name}" for model
calls and from "responses {agent_name}" to "invoke_agent {agent_name}" for agent calls.
* Tracing: response generation operation names changed from "responses" to "chat" for model calls and from "responses"
to "invoke_agent" for agent calls.
* Tracing: response generation uses gen_ai.input.messages and gen_ai.output.messages attributes directly under the
span instead of events.
* Tracing: agent creation uses gen_ai.system_instructions attribute directly under the span instead of an event.
Note that the attribute name is gen_ai.system_instructions not gen_ai.system.instructions.
* Tracing: "gen_ai.provider.name" attribute value changed to "microsoft.foundry".
* Tracing: the format of the function tool call related traces in input and output messages changed to
`{"type": "tool_call", "id": "...", "name": "...", "arguments": {...}}` and `{"type": "tool_call_response", "id": "...", "result": "..."}`.

### Sample updates

* Add and update samples for `AzureFunctionTool`, `WebSearchTool`, and `WebSearchPreviewTool`.
* All samples for agent tools call the `responses.create` API with `agent_reference` instead of `agent`.

## 2.0.0b3 (2026-01-06)

### Features Added

* The package now takes a dependency on the `openai` and `azure-identity` packages. There is no need to install them separately.
* Tracing: support for tracing the schema when an Agent is created with structured output definition.

### Breaking changes

* Rename class `AgentObject` to `AgentDetails`
* Rename class `AgentVersionObject` to `AgentVersionDetails`
* Rename class `MemoryStoreObject` to `MemoryStoreDetails`
* Tracing: removed outer "content" from event content format wrapper and unified type-specific keys (e.g., "text", "image_url") to generic "content" key.
* Tracing: replaced "gen_ai.request.assistant_name" attribute with gen_ai.agent.name.
* Tracing: removed "gen_ai.system" - the "gen_ai.provider.name" provides same information.
* Tracing: changed "gen_ai.user.message" and "gen_ai.tool.message" to "gen_ai.input.messages". Changed "gen_ai.assistant.message" to "gen_ai.output.messages".
* Tracing: changed "gen_ai.system.instruction" to "gen_ai.system.instructions".
* Tracing: added the "parts" array to "gen_ai.input.messages" and "gen_ai.output.messages".
* Tracing: removed "role" as a separate attribute and added "role" to "gen_ai.input.messages" and "gen_ai.output.messages" content.
* Tracing: added "finish_reason" as part of "gen_ai.output.messages" content.
* Tracing: changed the tool calls to use the API definitions as the types in traces, for example "function_call" instead of "function" and "function_call_output" instead of "function".

### Bugs Fixed

* Tracing: fixed a bug with computer use tool call output including screenshot binary data even when binary data tracing is off.

### Sample updates

* Added OpenAPI tool sample. See `sample_agent_openapi.py`.
* Added OpenAPI with Project Connection sample. See `sample_agent_openapi_with_project_connection.py`.
* Added SharePoint grounding tool sample. See `sample_agent_sharepoint.py`.
* Improved MCP client sample showing direct MCP tool invocation. See `samples/mcp_client/sample_mcp_tool_async.py`.
* Samples that download generated files (code interpreter and image generation) now save files to the system temp directory instead of the current working directory. See `sample_agent_code_interpreter.py`, `sample_agent_code_interpreter_async.py`, `sample_agent_image_generation.py`, and `sample_agent_image_generation_async.py`.
* The Agent to Agent sample was updated to allow "Custom keys" connection type.
* Update Fine-Tuning supervised job samples to show waiting for the model result instead of polling.
* Add evaluations sample `samples/evaluations/sample_evaluations_score_model_grader_with_image.py`.
* Add basic stream event samples `samples/agents/sample_agent_stream_events.py` and `samples/responses/sample_responses_stream_events.py`.

## 2.0.0b2 (2025-11-14)

### Features Added

* Tracing: support for workflow agent tracing.
* Agent Memory operations, including code for a custom LRO poller. See methods on the `.memory_store`
property of `AIProjectClient`.

### Breaking changes

* `get_openai_client()` method on the asynchronous AIProjectClient is no longer an "async" method.
* Tracing: tool call output event content format updated to be in line with other events.

### Bugs Fixed

* Tracing: operation name attribute added to create agent span, token usage added to streaming response generation span.

### Sample updates

* Added samples to show usage of the Memory Search Tool (see `sample_agent_memory_search.py` and its async equivalent).
* Added samples to show Memory management. See samples in the folder `samples\memories`.
* Added `finetuning` samples for the operations create, retrieve, list, list_events, list_checkpoints, cancel, pause and resume. These samples also include various fine-tuning techniques such as Supervised Fine-Tuning (SFT), Reinforcement Fine-Tuning (RFT) and Direct Preference Optimization (DPO).
* In most samples, the credential, project client, and OpenAI client are combined into one context manager.
* Removed `await` when calling `get_openai_client()` in samples that use asynchronous clients.

## 2.0.0b1 (2025-11-11)

### Features added

* The client library now uses version `2025-11-15-preview` of the Microsoft Foundry [data plane REST APIs](https://aka.ms/azsdk/azure-ai-projects-v2/api-reference-2025-11-15-preview).
* New Agent operations (now built on top of OpenAI's `Responses` protocol) were added to the `AIProjectClient`.
This package no longer depends on the `azure-ai-agents` package. See the `samples\agents` folder.
* New Evaluation operations. See methods on properties `.evaluation_rules`, `.evaluation_taxonomies`, `.evaluators`, `.insights`, and `.schedules`.
* New Memory Store operations. See methods on the property `.memory_store`.

### Breaking changes

* The implementation of the `.get_openai_client()` method was updated to return an authenticated
OpenAI client from the openai package, configured to run Responses operations on your Foundry Project endpoint.

### Sample updates

* Added new Agent samples. See `samples\agents` folder.
* Added new Evaluation samples. See `samples\evaluations` folder.
* Added `files` samples for operations create, delete, list, retrieve and content. See `samples\files` folder.

## 1.1.0b4 (2025-09-12)

### Bugs Fixed

* Fix getting secret keys for connections of type "Custom Keys" ([GitHub issue 52355](https://github.com/Azure/azure-sdk-for-net/issues/52355))

## 1.1.0b3 (2025-08-26)

### Features added

* File `setup.py` was updated to indicate the dependency `azure-ai-agents>=1.2.0b3`
instead of `azure-ai-agents>=1.0.0`. This means that in a clean environment, installing
via `pip install --pre azure-ai-projects` will install the latest beta version of `azure-ai-agents`
(which has features in preview) instead of the latest stable version (which does
not include preview features).

## 1.1.0b2 (2025-08-05)

### Bugs Fixed

* Fix regression in Red-Team operations, in the definition of the class `AzureOpenAIModelConfiguration`.

## 1.1.0b1 (2025-08-01)

First beta version following the 1.0.0 stable release. It brings back the Evaluation and Red-Team operations which are still in preview.

### Features added

* Evaluation and Red-Team operations (in preview) were restored.

## 1.0.0 (2025-07-31)

First stable version of the client library. The client library now uses version `v1` of the
AI Foundry [data plane REST APIs](https://aka.ms/azsdk/azure-ai-projects/ga-rest-api-reference).

### Breaking changes

* Features that are still in preview were removed from this stable release. This includes:
  * Evaluation operations (property `.evaluations`)
  * Red-Team operations (property `.red_teams`)
  * Class `PromptTemplate`.
  * Package function `enable_telemetry()`
* Classes were renamed:
  * Class `Sku` was renamed `ModelDeploymentSku`
  * Class `SasCredential` was renamed `BlobReferenceSasCredential`
  * Class `AssetCredentialResponse` was renamed `DatasetCredential`
* Method `.inference.get_azure_openai_client()` was renamed `.get_openai_client()`. The `.inference` property was removed.
  The method is documented as returning an object of type `OpenAI`, but it still returns an object of the derived type `AzureOpenAI`.
  The function implementation has not changed.
* Method `.telemetry.get_connection_string()` was renamed `.telemetry.get_application_insights_connection_string()`

### Sample updates

* Added a new Dataset sample named `sample_datasets_download.py` to show how you can download all files referenced by a certain Dataset (following a question in [this GitHub issue](https://github.com/Azure/azure-sdk-for-python/issues/41960))
* Two samples added showing how to do a `responses` operation using an authenticated Azure OpenAI client created
using `get_openai_client()`.
* Existing inference samples that used the package function `enable_telemetry()` were updated to remove this call,
and instead add the necessary tracing configuration calls to the sample.

## 1.0.0b12 (2025-06-23)

### Breaking changes

* These 3 methods on `AIProjectClient` were removed: `.inference.get_chat_completions_client()`,
`.inference.get_embeddings_client()` and `.inference.get_image_embeddings_client()`.
For guidance on obtaining an authenticated `azure-ai-inference` client for your AI Foundry Project,
refer to the updated samples in the `samples\inference` directory. For example,
`sample_chat_completions_with_azure_ai_inference_client.py`. Alternatively, use the `.inference.get_azure_openai_client()` method to perform chat completions with an Azure OpenAI client.
* Method argument name changes:
  * In method `.indexes.create_or_update()` argument `body` was renamed `index`.
  * In method `.datasets.create_or_update()` argument `body` was renamed `dataset_version`.
  * In method `.datasets.pending_upload()` argument `body` was renamed `pending_upload_request`.

### Bugs Fixed

* Fix to package function `enable_telemetry()` to correctly instrument `azure-ai-agents`.
* Updated RedTeam target type visibility to allow for type being sent in the JSON for redteam run creation.

### Other

* Set dependency on `azure-ai-agents` version `1.0.0` or above,
now that we have a stable release of the Agents package.

## 1.0.0b11 (2025-05-15)

There have been significant updates with the release of version 1.0.0b11, including breaking changes.
Please see new samples and package README.md file.

### Features added

* `.deployments` methods to enumerate AI models deployed to your AI Foundry Project.
* `.datasets` methods to upload documents and reference them. To be used with Evaluations.
* `.indexes` methods to handle your Search Indexes.

### Breaking changes

* An Azure AI Foundry Project endpoint is now required to construct the `AIProjectClient`. It has the form
`https://<your-ai-services-account-name>.services.ai.azure.com/api/projects/<your-project-name>`. Find it on your AI Foundry Project
Overview page. The factory method `from_connection_string` was removed. Support for project connection strings and hub-based projects has been discontinued. We recommend creating a new Azure AI Foundry resource that uses a project endpoint. If this is not possible, pin the version of `azure-ai-projects` to `1.0.0b10` or earlier.
* Agents are now implemented in a separate package `azure-ai-agents`. Continue using the `.agents` operations on the
`AIProjectClient` to create, run and delete agents, as before. However, there have been some breaking changes in these operations.
See [Agents package document and samples](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-projects_1.0.0b11/sdk/ai/azure-ai-agents) for more details.
* Several changes to the `.connections` methods, including the response object (now simply called `Connection`).
* The method `.inference.get_azure_openai_client()` now supports returning an authenticated `AzureOpenAI` client to be used with
AI models deployed to the Project's AI Services. This is in addition to the existing option to get an `AzureOpenAI` client for one of the connected Azure OpenAI services.
* Import `PromptTemplate` from `azure.ai.projects` instead of `azure.ai.projects.prompts`.
* The class `ConnectionProperties` was renamed to `Connection`, and its properties have changed.
* The method `.to_evaluator_model_config` on `ConnectionProperties` is no longer required and does not have an equivalent method on `Connection`. When constructing the `EvaluatorConfiguration` class, the `init_params` element now requires `deployment_name` instead of `model_config`.
* The method `upload_file` on `AIProjectClient` has been removed; use `datasets.upload_file` instead.
* Evaluator Ids are available using the Enum `EvaluatorIds` and no longer require `azure-ai-evaluation` package to be installed.
* Property `scope` on `AIProjectClient` is removed, use AI Foundry Project endpoint instead.
* Property `id` on Evaluation is replaced with `name`.
* Please see the [agents migration guide](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-projects_1.0.0/sdk/ai/azure-ai-projects/AGENTS_MIGRATION_GUIDE.md) on how to use the new `azure-ai-projects` with `azure-ai-agents` package.

### Sample updates

* All samples have been updated. New ones added for Deployments, Datasets and Indexes.

## 1.0.0b10 (2025-04-23)

### Features added

* Added `ConnectedAgentTool` class for better connected Agent support.
* Added Agent tool call tracing for all tool call types when streaming with `AgentEventHandler` based event handler.
* Added tracing for listing Agent run steps.
* Add a `max_retry` argument to the Agent's `enable_auto_function_calls` function to cancel the run if the maximum number of retries for auto function calls is reached.

### Sample updates

* Added connected Agent tool sample.

### Bugs Fixed

* Fix for filtering of Agent messages by run ID (see [GitHub issue 49513](https://github.com/Azure/azure-sdk-for-net/issues/49513)).

## 1.0.0b9 (2025-04-16)

### Features added

* Utilities to load prompt template strings and Prompty file content
* Added BingCustomSearchTool class with sample
* Added list_threads API to agents namespace
* Added image input support for agents create_message

### Sample updates

* Added `project_client.agents.enable_auto_function_calls(toolset=toolset)` to all samples that have tool calls executed by the `azure-ai-projects` SDK
* New BingCustomSearchTool sample
* New samples added for image input from url, file and base64

### Breaking Changes

Redesigned automatic function calls because agents retrieved by `update_agent` and `get_agent` do not support them. With the new design, the `toolset` parameter in `create_agent` no longer executes tool calls automatically during `create_and_process_run` or `create_stream`. To retain this behavior, call `enable_auto_function_calls` without additional changes.

## 1.0.0b8 (2025-03-28)

### Features added

* New parameters added for Azure AI Search tool, with corresponding sample update.
* Fabric tool REST name updated, along with convenience code.

### Sample updates

* Sample update demonstrating new parameters added for Azure AI Search tool.
* Sample added using OpenAPI tool against authenticated TripAdvisor API spec.

### Bugs Fixed

* Fix for a bug in Agent tracing causing event handler return values to not be returned when tracing is enabled.
* Fix for a bug in Agent tracing causing tool calls not to be recorded in traces.
* Fix for a bug in Agent tracing causing function tool calls to not work properly when tracing is enabled.
* Fix for a bug in Agent streaming, where `agent_id` was not included in the response. This caused the SDK not to make function calls when the thread run status is `requires_action`.

## 1.0.0b7 (2025-03-06)

### Features added

* Add support for parsing URL citations in Agent text messages. See new classes `MessageTextUrlCitationAnnotation` and `MessageDeltaTextUrlCitationAnnotation`.
* Add enum value `ConnectionType.API_KEY` to support enumeration of generic connections that uses API Key authentication.

### Sample updates

* Update sample `sample_agents_bing_grounding.py` with printout of URL citation.
* Add new samples `sample_agents_stream_eventhandler_with_bing_grounding.py` and `sample_agents_stream_iteration_with_bing_grounding.py` with printout of URL citation.

### Bugs Fixed

* Fix a bug in deserialization of `RunStepDeltaFileSearchToolCall` returned during Agent streaming (see [GitHub issue 48333](https://github.com/Azure/azure-sdk-for-net/issues/48333)).
* Fix for Exception raised while parsing Agent streaming response, in some rare cases, for multibyte UTF-8 languages like Chinese.

### Breaking Changes

* Rename input argument `assistant_id` to `agent_id` in all Agent methods to align with the "Agent" terminology. Similarly, rename all `assistant_id` properties on classes.

## 1.0.0b6 (2025-02-14)

### Features added

* Added `trace_function` decorator for conveniently tracing function calls in Agents using OpenTelemetry. Please see the README.md for updated documentation.

### Sample updates

* Added an `AzureLogicAppTool` utility and a Logic App sample under the `samples/agents` folder, to make Azure Logic App integration with Agents easier.
* Added better observability for Azure AI Search sample for Agents via improved run steps information from the service.
* Added sample to demonstrate how to add custom attributes to telemetry span.

### Bugs Fixed

* Lowered the logging level of "Toolset is not available in the client" from `warning` to `debug` to prevent unnecessary log entries in agent application runs.

## 1.0.0b5 (2025-01-17)

### Features added

* Add method `.inference.get_image_embeddings_client` on `AIProjectClient` to get an authenticated
`ImageEmbeddingsClient` (from the `azure-ai-inference` package). You need to have the `azure-ai-inference` package
version 1.0.0b7 or above installed for this method to work.

### Bugs Fixed

* Fix for events dropped in streamed Agent response (see [GitHub issue 39028](https://github.com/Azure/azure-sdk-for-python/issues/39028)).
* In Agents, an incomplete-status thread run event is now deserialized into a `ThreadRun` object during stream iteration, and invokes the correct function `on_thread_run` (instead of the wrong function `on_unhandled_event`).
* Fix an error when calling the `to_evaluator_model_config` method of class `ConnectionProperties`. See new input
argument `include_credentials`.

### Breaking Changes

* `submit_tool_outputs_to_run` returns `None` instead of `ThreadRun` (see [GitHub issue 39028](https://github.com/Azure/azure-sdk-for-python/issues/39028)).

## 1.0.0b4 (2024-12-20)

### Bugs Fixed

* Fix for Agent streaming issue (see [GitHub issue 38918](https://github.com/Azure/azure-sdk-for-python/issues/38918))
* Fix for Agent async function `send_email_async` is not called (see [GitHub issue 38898](https://github.com/Azure/azure-sdk-for-python/issues/38898))
* Fix for Agent streaming with event handler fails with "AttributeError: 'MyEventHandler' object has no attribute 'buffer'" (see [GitHub issue 38897](https://github.com/Azure/azure-sdk-for-python/issues/38897))

### Features Added

* Add optional input argument `connection_name` to methods `.inference.get_chat_completions_client`,
 `.inference.get_embeddings_client` and `.inference.get_azure_openai_client`.

## 1.0.0b3 (2024-12-13)

### Features Added

* Add support for Structured Outputs for Agents.
* Add option to include file contents, when index search is used for Agents.
* Added objects to inform Agents about Azure Functions.
* Redesigned streaming and event handlers for agents.
* Add `parallel_tool_calls` parameter to allow parallel tool execution for Agents.
* Added `BingGroundingTool` for Agents to use against a Bing API Key connection.
* Added `AzureAiSearchTool` for Agents to use against an Azure AI Search resource.
* Added `OpenApiTool` for Agents, which creates and executes a REST function defined by an OpenAPI spec.
* Added new helper properties in `OpenAIPageableListOfThreadMessage`, `MessageDeltaChunk`, and `ThreadMessage`.
* Rename "AI Studio" to "AI Foundry" in package documents and samples, following recent rebranding.

### Breaking Changes

* The method `.agents.get_messages` was removed. Please use `.agents.list_messages` instead.

## 1.0.0b2 (2024-12-03)

### Bugs Fixed

* Fix a bug in the `.inference` operations when Entra ID authentication is used by the default connection.
* Fixed bugs occurring during streaming in function tool calls by asynchronous agents.
* Fixed bugs that were causing issues with tracing agent asynchronous functionality.
* Fix a bug causing warning about unclosed session, shown when using asynchronous credentials to create agent.
* Fix a bug that would cause agent function tool related function names and parameters to be included in traces even when content recording is not enabled.

## 1.0.0b1 (2024-11-15)

### Features Added

First beta version
