The Model Context Protocol (MCP) is an open standard designed to streamline the integration of AI models with various data sources and tools. By providing a standardized interface, MCP enables seamless and secure connections, allowing AI systems to access and utilize contextual information efficiently. It simplifies the development process, making it easier to build robust and interconnected AI applications.

By replacing fragmented integrations with a single protocol, MCP helps AI models produce better, more relevant responses by connecting them to live data and real-world systems.

For more information on configuring and deploying your own MCP Server, refer to the Model Context Protocol documentation.

Our Python SDK enables seamless integration of our agents with MCP Clients.

MCP Client Usage

How to Use a Local MCP Server

Here is how to create an agent that uses a local MCP server to fetch weather information based on a user's location, combining MCP tools with locally registered functions.

Step 1: Initialize the Mistral Client

First, we import everything needed. Most of the required modules ship with our mistralai package, but you will also need the mcp package (available on PyPI). MCP clients run asynchronously, so the main code lives inside an async main function.

#!/usr/bin/env python
import asyncio
import os

from mistralai import Mistral
from mistralai.extra.run.context import RunContext
from mcp import StdioServerParameters
from mistralai.extra.mcp.stdio import MCPClientSTDIO
from pathlib import Path

from mistralai.types import BaseModel

# Set the current working directory and model to use
cwd = Path(__file__).parent
MODEL = "mistral-medium-latest"

async def main() -> None:
    # Initialize the Mistral client with your API key
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key)

Step 2: Define Server Parameters and Create an Agent

We can now define the server parameters, which point to the script that runs the local MCP server (here, mcp_servers/stdio_server.py); for more details, see the Model Context Protocol documentation. Once the server parameters are defined, we can create our agent.

    # Define parameters for the local MCP server
    server_params = StdioServerParameters(
        command="python",
        args=[str((cwd / "mcp_servers/stdio_server.py").resolve())],
        env=None,
    )

    # Create an agent to tell the weather
    weather_agent = client.beta.agents.create(
        model=MODEL,
        name="weather teller",
        instructions="You are able to tell the weather.",
        description="",
    )
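
The server parameters above point at a script, mcp_servers/stdio_server.py, that the walkthrough does not show. Below is a minimal sketch of what such a server could look like, built with the FastMCP helper from the mcp package; the tool name, its signature, and the made-up temperature are illustrative assumptions rather than part of the official example.

# mcp_servers/stdio_server.py -- illustrative sketch only; your real server,
# tool names, and data may differ.
import random

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_weather(location: str) -> str:
    """Return a (fake) current temperature for a location."""
    temperature = random.randint(-5, 35)
    return f"The current temperature in {location} is {temperature}°C."

if __name__ == "__main__":
    # Serve over stdio so MCPClientSTDIO can spawn this script as a subprocess
    mcp.run(transport="stdio")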

Step 3: Define Output Format and Create a Run Context

The next step is to create a run context, which manages the exchange between the MCP client and our agent. You can also leverage structured outputs by passing an output format model to the run context.

    # Define the expected output format for weather results
    class WeatherResult(BaseModel):
        user: str
        location: str
        temperature: float

    # Create a run context for the agent
    async with RunContext(
        agent_id=weather_agent.id,
        output_format=WeatherResult,
        continue_on_fn_error=True,
    ) as run_ctx:

Step 4: Register MCP Client

The next step is to create and register the MCP Client.

        # Create and register an MCP client with the run context
        mcp_client = MCPClientSTDIO(stdio_params=server_params)
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

You can also combine MCP orchestration with local function calling: any Python function registered on the run context becomes a tool the agent can call alongside the tools exposed by the MCP server.

        import random
        # Register a function that returns a random location for a user;
        # it becomes a tool the agent can call alongside the MCP server's tools
        @run_ctx.register_func
        def get_location(name: str) -> str:
            """Function to get location of a user.

            Args:
                name: name of the user.
            """
            return random.choice(["New York", "London", "Paris", "Tokyo", "Sydney"])

        # Create and register an MCP client with the run context
        mcp_client = MCPClientSTDIO(stdio_params=server_params)
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

Step 5: Run the Agent and Print Results

Everything is ready; we can now run the agent and print the results.

        # Run the agent with a query
        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me the weather in John's location currently.",
        )

        # Print the results
        print("All run entries:")
        for entry in run_result.output_entries:
            print(f"{entry}")
            print()
        print(f"Final model: {run_result.output_as_model}")

if __name__ == "__main__":
    asyncio.run(main())

How to Use a Remote MCP Server Without Authentication

Here is how to use a remote MCP server without authentication.

Step 1: Initialize the Mistral Client

First, we import everything needed; all of the required modules ship with our mistralai package. As before, the MCP client runs asynchronously, so the main code lives inside an async main function.

#!/usr/bin/env python
import asyncio
import os

from mistralai import Mistral
from mistralai.extra.run.context import RunContext
from mistralai.extra.mcp.sse import MCPClientSSE, SSEServerParams

# Set the model to use
MODEL = "mistral-medium-latest"

async def main():
    # Initialize the Mistral client with your API key
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key)

Step 2: Define Server URL and Create MCP Client

Next, we define the URL for the remote MCP server and create an MCP client to connect to it.

    # Define the URL for the remote MCP server
    server_url = "https://mcp.semgrep.ai/sse"
    mcp_client = MCPClientSSE(sse_params=SSEServerParams(url=server_url, timeout=100))

Step 3: Create a Run Context and Register MCP Client

We create a Run Context for the agent and register the MCP client with it.

    # Create a run context for the agent
    async with RunContext(
        model=MODEL,
    ) as run_ctx:
        # Register the MCP client with the run context
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

Step 4: Run the Agent and Print Results

Finally, we run the agent with a query and print the results.

        # Run the agent with a query
        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Can you write a hello_world.py and check for security vulnerabilities",
        )

        # Print the results
        print("All run entries:")
        for entry in run_result.output_entries:
            print(f"{entry}")
            print()
        print(f"Final Response: {run_result.output_as_text}")

if __name__ == "__main__":
    asyncio.run(main())
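
Run contexts are not limited to a single tool source. Because registration is just an awaitable call, you can register several MCP clients (local or remote) on the same context and let the agent decide which tools to use. The sketch below is an assumption-based illustration: it reuses server_params and server_url from the examples above and presumes the SDK accepts multiple register_mcp_client calls, which you should verify against your installed version.

    # Sketch: one run context with both a local and a remote MCP client.
    # Assumes this sits inside an async main() with the imports, client,
    # server_params, and server_url defined as in the examples above.
    async with RunContext(model=MODEL) as run_ctx:
        # Local stdio MCP server (the weather script)
        await run_ctx.register_mcp_client(
            mcp_client=MCPClientSTDIO(stdio_params=server_params)
        )
        # Remote SSE MCP server (Semgrep)
        await run_ctx.register_mcp_client(
            mcp_client=MCPClientSSE(sse_params=SSEServerParams(url=server_url))
        )

        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="What is the weather in Paris? Also scan hello_world.py for issues.",
        )
        print(run_result.output_as_text)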

How to Use a Remote MCP Server with Authentication

Here is how to use a remote MCP server with authentication.

Step 1: Initialize the Mistral Client

First, we import everything needed; all of the required modules ship with our mistralai package and the Python standard library. As before, the MCP client runs asynchronously, so the main code lives inside an async main function.

#!/usr/bin/env python
import asyncio
from http.server import BaseHTTPRequestHandler, HTTPServer
import os
import threading
import webbrowser

from mistralai import Mistral
from mistralai.extra.run.context import RunContext
from mistralai.extra.mcp.sse import MCPClientSSE, SSEServerParams
from mistralai.extra.mcp.auth import build_oauth_params

# Set the model to use and callback port for OAuth
MODEL = "mistral-medium-latest"
CALLBACK_PORT = 16010

Step 2: Set Up Callback Server

We set up a callback server to handle OAuth responses.

def run_callback_server(callback_func):
    # Set up a callback server to handle OAuth responses
    auth_response: dict = {"url": ""}

    class OAuthCallbackHandler(BaseHTTPRequestHandler):
        server_version = "HTTP"
        code = None

        def do_GET(self):
            if "/callback" in self.path:
                try:
                    auth_response["url"] = self.path
                    self.send_response(200)
                    self.send_header("Content-type", "text/html")
                    self.end_headers()
                    callback_func()
                    response_html = "<html><body><p>You may now close this window.</p></body></html>"
                    self.wfile.write(response_html.encode())
                    threading.Thread(target=httpd.shutdown).start()
                except Exception:
                    self.send_response(500)
                    self.end_headers()

    server_address = ("localhost", CALLBACK_PORT)
    httpd = HTTPServer(server_address, OAuthCallbackHandler)
    threading.Thread(target=httpd.serve_forever).start()
    redirect_url = f"http://localhost:{CALLBACK_PORT}/oauth/callback"
    return httpd, redirect_url, auth_response

Step 3: Define Server URL and Create MCP Client

We define the URL for the remote MCP server and create an MCP client to connect to it.

async def main():
    # Initialize the Mistral client with your API key
    api_key = os.environ["MISTRAL_API_KEY"]
    client = Mistral(api_key)

    # Define the URL for the remote MCP server
    server_url = "https://mcp.linear.app/sse"
    mcp_client = MCPClientSSE(sse_params=SSEServerParams(url=server_url))

Step 4: Handle Authentication

We handle the authentication process, including setting up a callback event and event loop, checking if authentication is required, and managing the OAuth flow.

    # Set up a callback event and event loop
    callback_event = asyncio.Event()
    event_loop = asyncio.get_event_loop()

    # Check if authentication is required
    if await mcp_client.requires_auth():
        # Set up a callback server and handle OAuth flow
        httpd, redirect_url, auth_response = run_callback_server(
            callback_func=lambda: event_loop.call_soon_threadsafe(callback_event.set)
        )
        try:
            # Build OAuth parameters and get the login URL
            oauth_params = await build_oauth_params(
                mcp_client.base_url, redirect_url=redirect_url
            )
            mcp_client.set_oauth_params(oauth_params=oauth_params)
            login_url, state = await mcp_client.get_auth_url_and_state(redirect_url)

            # Open the login URL in a web browser
            print("Please go to this URL and authorize the application:", login_url)
            webbrowser.open(login_url, new=2)
            await callback_event.wait()

            # Exchange the authorization code for a token
            mcp_client = MCPClientSSE(
                sse_params=SSEServerParams(url=server_url),
                oauth_params=oauth_params,
            )

            token = await mcp_client.get_token_from_auth_response(
                auth_response["url"], redirect_url=redirect_url, state=state
            )
            mcp_client.set_auth_token(token)

        except Exception as e:
            print(f"Error during authentication: {e}")
        finally:
            httpd.shutdown()
            httpd.server_close()

Step 5: Create a Run Context and Register MCP Client

We create a Run Context for the agent and register the MCP client with it.

    # Create a run context for the agent
    async with RunContext(
        model=MODEL,
    ) as run_ctx:
        # Register the MCP client with the run context
        await run_ctx.register_mcp_client(mcp_client=mcp_client)

Step 6: Run the Agent and Print Results

Finally, we run the agent with a query and print the results.

        # Run the agent with a query
        run_result = await client.beta.conversations.run_async(
            run_ctx=run_ctx,
            inputs="Tell me which projects do I have in my workspace?",
        )

        # Print the final response
        print(f"Final Response: {run_result.output_as_text}")

if __name__ == "__main__":
    asyncio.run(main())

Streaming Conversations

Streaming conversations with an agent using a local MCP server is similar to non-streaming, but instead of waiting for the entire response, you process the results as they arrive.

Here is a brief example of how to stream a conversation; it assumes the same run context setup as the local MCP example above:

    # Stream the agent's responses
    events = await client.beta.conversations.run_stream_async(
        run_ctx=run_ctx,
        inputs="Tell me the weather in John's location currently.",
    )

    # Process the streamed events
    run_result = None
    async for event in events:
        if isinstance(event, RunResult):
            run_result = event
        else:
            print(event)

    if not run_result:
        raise RuntimeError("No run result found")

    # Print the results
    print("All run entries:")
    for entry in run_result.output_entries:
        print(f"{entry}")
    print(f"Final model: {run_result.output_as_model}")
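
For context, the snippet above is meant to run inside the same run context as the local MCP example, with the agent, output format, and MCP client already set up. Note that RunResult is not imported in the earlier snippets; the import path in the sketch below reflects recent versions of the Python SDK and is an assumption to verify against your installed mistralai release.

    # Assumed import for RunResult -- check it against your installed SDK version
    from mistralai.extra.run.result import RunResult

    async with RunContext(
        agent_id=weather_agent.id,
        output_format=WeatherResult,
        continue_on_fn_error=True,
    ) as run_ctx:
        await run_ctx.register_mcp_client(
            mcp_client=MCPClientSTDIO(stdio_params=server_params)
        )

        # Stream the agent's responses and keep only the final RunResult
        events = await client.beta.conversations.run_stream_async(
            run_ctx=run_ctx,
            inputs="Tell me the weather in John's location currently.",
        )
        async for event in events:
            if isinstance(event, RunResult):
                print(f"Final model: {event.output_as_model}")
            else:
                print(event)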