
pydantic bank support agent

Details

File: third_party/PydanticAI/pydantic_bank_support_agent.ipynb

Type: Jupyter Notebook

Use Cases: Agents

Integrations: Pydantic AI

Content

Notebook content (JSON format):

{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "b6nD321MT1A0"
   },
   "source": [
    "# Build a bank support agent with Pydantic AI and Mistral AI\n",
    "\n",
    "In this cookbook, we'll demonstrate how to build a bank support agent using Mistral AI and PydanticAI, which offers the following features:\n",
    "- Structured Responses: Pydantic ensures that outputs conform to a predefined schema, providing consistent and validated responses.\n",
    "- External Dependencies: Enhance AI interactions by integrating external dependencies, such as databases, through a type-safe dependency injection system.\n",
    "- Dynamic Context: System prompt functions allow the injection of runtime information, like a customer's name, into the agent's context, enabling personalized interactions.\n",
    "- Tool Integration: The agent can invoke tools for real-time information retrieval, enhancing its capabilities beyond static responses.\n",
    "\n",
    "Example in this cookbook is adapted from https://ai.pydantic.dev/.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "id": "KyTap9cNXeYZ",
    "outputId": "07f80a9a-4f84-4a07-dfff-fd7fb95dbcfd"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Collecting pydantic-ai\n",
      "  Downloading pydantic_ai-0.0.14-py3-none-any.whl.metadata (11 kB)\n",
      "Requirement already satisfied: nest_asyncio in /usr/local/lib/python3.10/dist-packages (1.6.0)\n",
      "Collecting pydantic-ai-slim==0.0.14 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading pydantic_ai_slim-0.0.14-py3-none-any.whl.metadata (2.8 kB)\n",
      "Requirement already satisfied: eval-type-backport>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.2.0)\n",
      "Collecting griffe>=1.3.2 (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading griffe-1.5.1-py3-none-any.whl.metadata (5.1 kB)\n",
      "Requirement already satisfied: httpx>=0.27.2 in /usr/local/lib/python3.10/dist-packages (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.28.1)\n",
      "Collecting logfire-api>=1.2.0 (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading logfire_api-2.9.0-py3-none-any.whl.metadata (947 bytes)\n",
      "Requirement already satisfied: pydantic>=2.10 in /usr/local/lib/python3.10/dist-packages (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2.10.3)\n",
      "Collecting groq>=0.12.0 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading groq-0.13.1-py3-none-any.whl.metadata (14 kB)\n",
      "Collecting json-repair>=0.30.3 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading json_repair-0.32.0-py3-none-any.whl.metadata (11 kB)\n",
      "Collecting mistralai>=1.2.5 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading mistralai-1.2.5-py3-none-any.whl.metadata (27 kB)\n",
      "Requirement already satisfied: openai>=1.54.3 in /usr/local/lib/python3.10/dist-packages (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.57.4)\n",
      "Collecting anthropic>=0.40.0 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading anthropic-0.42.0-py3-none-any.whl.metadata (23 kB)\n",
      "Collecting google-auth>=2.36.0 (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading google_auth-2.37.0-py2.py3-none-any.whl.metadata (4.8 kB)\n",
      "Requirement already satisfied: requests>=2.32.3 in /usr/local/lib/python3.10/dist-packages (from pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2.32.3)\n",
      "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.10/dist-packages (from anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (3.7.1)\n",
      "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.10/dist-packages (from anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.9.0)\n",
      "Requirement already satisfied: jiter<1,>=0.4.0 in /usr/local/lib/python3.10/dist-packages (from anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.8.2)\n",
      "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.3.1)\n",
      "Requirement already satisfied: typing-extensions<5,>=4.10 in /usr/local/lib/python3.10/dist-packages (from anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (4.12.2)\n",
      "Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from google-auth>=2.36.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (5.5.0)\n",
      "Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from google-auth>=2.36.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.4.1)\n",
      "Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.10/dist-packages (from google-auth>=2.36.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (4.9)\n",
      "Collecting colorama>=0.4 (from griffe>=1.3.2->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)\n",
      "Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx>=0.27.2->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2024.12.14)\n",
      "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.10/dist-packages (from httpx>=0.27.2->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.0.7)\n",
      "Requirement already satisfied: idna in /usr/local/lib/python3.10/dist-packages (from httpx>=0.27.2->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (3.10)\n",
      "Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/lib/python3.10/dist-packages (from httpcore==1.*->httpx>=0.27.2->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.14.0)\n",
      "Collecting httpx>=0.27.2 (from pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading httpx-0.27.2-py3-none-any.whl.metadata (7.1 kB)\n",
      "Collecting jsonpath-python<2.0.0,>=1.0.6 (from mistralai>=1.2.5->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading jsonpath_python-1.0.6-py3-none-any.whl.metadata (12 kB)\n",
      "Requirement already satisfied: python-dateutil<3.0.0,>=2.8.2 in /usr/local/lib/python3.10/dist-packages (from mistralai>=1.2.5->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2.8.2)\n",
      "Collecting typing-inspect<0.10.0,>=0.9.0 (from mistralai>=1.2.5->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading typing_inspect-0.9.0-py3-none-any.whl.metadata (1.5 kB)\n",
      "Requirement already satisfied: tqdm>4 in /usr/local/lib/python3.10/dist-packages (from openai>=1.54.3->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (4.67.1)\n",
      "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.10/dist-packages (from pydantic>=2.10->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.7.0)\n",
      "Requirement already satisfied: pydantic-core==2.27.1 in /usr/local/lib/python3.10/dist-packages (from pydantic>=2.10->pydantic-ai-slim==0.0.14->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2.27.1)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (3.4.0)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.32.3->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (2.2.3)\n",
      "Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5,>=3.5.0->anthropic>=0.40.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.2.2)\n",
      "Requirement already satisfied: pyasn1<0.7.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth>=2.36.0->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (0.6.1)\n",
      "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil<3.0.0,>=2.8.2->mistralai>=1.2.5->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai) (1.17.0)\n",
      "Collecting mypy-extensions>=0.3.0 (from typing-inspect<0.10.0,>=0.9.0->mistralai>=1.2.5->pydantic-ai-slim[anthropic,groq,mistral,openai,vertexai]==0.0.14->pydantic-ai)\n",
      "  Downloading mypy_extensions-1.0.0-py3-none-any.whl.metadata (1.1 kB)\n",
      "Downloading pydantic_ai-0.0.14-py3-none-any.whl (9.7 kB)\n",
      "Downloading pydantic_ai_slim-0.0.14-py3-none-any.whl (77 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m77.1/77.1 kB\u001b[0m \u001b[31m3.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading anthropic-0.42.0-py3-none-any.whl (203 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m203.4/203.4 kB\u001b[0m \u001b[31m7.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading google_auth-2.37.0-py2.py3-none-any.whl (209 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m209.8/209.8 kB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading griffe-1.5.1-py3-none-any.whl (127 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m127.1/127.1 kB\u001b[0m \u001b[31m9.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading groq-0.13.1-py3-none-any.whl (109 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m109.1/109.1 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading json_repair-0.32.0-py3-none-any.whl (19 kB)\n",
      "Downloading logfire_api-2.9.0-py3-none-any.whl (71 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.9/71.9 kB\u001b[0m \u001b[31m5.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading mistralai-1.2.5-py3-none-any.whl (260 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m260.0/260.0 kB\u001b[0m \u001b[31m17.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading httpx-0.27.2-py3-none-any.whl (76 kB)\n",
      "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m76.4/76.4 kB\u001b[0m \u001b[31m5.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
      "\u001b[?25hDownloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)\n",
      "Downloading jsonpath_python-1.0.6-py3-none-any.whl (7.6 kB)\n",
      "Downloading typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)\n",
      "Downloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)\n",
      "Installing collected packages: mypy-extensions, logfire-api, jsonpath-python, json-repair, colorama, typing-inspect, httpx, griffe, google-auth, pydantic-ai-slim, mistralai, groq, anthropic, pydantic-ai\n",
      "  Attempting uninstall: httpx\n",
      "    Found existing installation: httpx 0.28.1\n",
      "    Uninstalling httpx-0.28.1:\n",
      "      Successfully uninstalled httpx-0.28.1\n",
      "  Attempting uninstall: google-auth\n",
      "    Found existing installation: google-auth 2.27.0\n",
      "    Uninstalling google-auth-2.27.0:\n",
      "      Successfully uninstalled google-auth-2.27.0\n",
      "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
      "google-colab 1.0.0 requires google-auth==2.27.0, but you have google-auth 2.37.0 which is incompatible.\u001b[0m\u001b[31m\n",
      "\u001b[0mSuccessfully installed anthropic-0.42.0 colorama-0.4.6 google-auth-2.37.0 griffe-1.5.1 groq-0.13.1 httpx-0.27.2 json-repair-0.32.0 jsonpath-python-1.0.6 logfire-api-2.9.0 mistralai-1.2.5 mypy-extensions-1.0.0 pydantic-ai-0.0.14 pydantic-ai-slim-0.0.14 typing-inspect-0.9.0\n"
     ]
    },
    {
     "data": {
      "application/vnd.colab-display-data+json": {
       "id": "21bdc3026ad54c5ca89c684e895bddcf",
       "pip_warning": {
        "packages": [
         "google"
        ]
       }
      }
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "!pip install pydantic-ai==0.0.14 nest_asyncio"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "F7iykFfabeE-"
   },
   "source": [
    "If you're running pydantic-ai in a jupyter notebook or Colab, you will need nest-asyncio to manage conflicts between event loops that occur between jupyter's event loops and pydantic-ai's:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "id": "CZNHh28AZJvR"
   },
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "LesgrD9_YUUB",
    "outputId": "255f055f-d0c0-4b97-e935-6cdcf94cab31"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Type your API Key··········\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "from getpass import getpass\n",
    "\n",
    "os.environ[\"MISTRAL_API_KEY\"] = getpass(\"Type your API Key\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sfICbJ5qUQG2"
   },
   "source": [
    "## Example 1: Basic Q&A with Mistral\n",
    "\n",
    "Let's start with a straightforward example using Pydantic AI for a basic Q&A with Mistral.\n",
    "\n",
    "We'll define an agent powered by Mistral with a system prompt designed to ensure concise responses. When we ask about the origin of “hello world,” the model will provide a brief, one-sentence answer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "BpwrdPz4Xjp6",
    "outputId": "09f41d1d-82c4-4796-ffd1-6c96c8e0ec6c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\"Hello, World!\" originated from a 1974 Bell Labs internal memorandum by Brian Kernighan, demonstrating the C programming language.\n"
     ]
    }
   ],
   "source": [
    "from pydantic_ai import Agent\n",
    "from pydantic_ai.models.mistral import MistralModel\n",
    "\n",
    "model = MistralModel('mistral-small-latest')\n",
    "\n",
    "agent = Agent(\n",
    "    model,\n",
    "    system_prompt='Be concise, reply with one sentence.',\n",
    ")\n",
    "\n",
    "result = agent.run_sync('Where does \"hello world\" come from?')\n",
    "print(result.data)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "GurMrsNfUyUF"
   },
   "source": [
    "# Example 2: Bank support agent\n",
    "\n",
    "In this more complex example, we build a bank support agent.\n",
    "\n",
    "\n",
    "## Step 1: Define a database\n",
    "\n",
    "In more advanced AI workflows, your model may require external data, such as information from a database. Here, we define a fictional database class that retrieves a customer's name and balance. In a real-world scenario, this class could connect to a live database. The agent can use these methods to respond to customer inquiries effectively.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "id": "1bS9OxDTU4Rj"
   },
   "outputs": [],
   "source": [
    "from dataclasses import dataclass\n",
    "from pydantic import BaseModel, Field\n",
    "from pydantic_ai import Agent, RunContext\n",
    "\n",
    "\n",
    "class DatabaseConn:\n",
    "    \"\"\"This is a fake database for example purposes.\n",
    "\n",
    "    In reality, you'd be connecting to an external database\n",
    "    (e.g. PostgreSQL) to get information about customers.\n",
    "    \"\"\"\n",
    "\n",
    "    @classmethod\n",
    "    async def customer_name(cls, *, id: int) -> str | None:\n",
    "        if id == 123:\n",
    "            return 'John'\n",
    "\n",
    "    @classmethod\n",
    "    async def customer_balance(cls, *, id: int, include_pending: bool) -> float:\n",
    "        if id == 123:\n",
    "            return 123.45\n",
    "        else:\n",
    "            raise ValueError('Customer not found')"
   ]
  },
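  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `DatabaseConn` is just a pair of async classmethods over hard-coded data, you can sanity-check it directly before wiring it into the agent. The cell below is an addition to the original example; it assumes `nest_asyncio.apply()` has already run, so `asyncio.run` works inside the notebook.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "\n",
    "# Sanity-check the fake database directly (not part of the original example).\n",
    "# Customer 123 is the only record: customer_name returns 'John' and\n",
    "# customer_balance returns 123.45; any other id raises ValueError.\n",
    "print(asyncio.run(DatabaseConn.customer_name(id=123)))\n",
    "print(asyncio.run(DatabaseConn.customer_balance(id=123, include_pending=True)))"
   ]
  },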
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SeTdUa08Vdh8"
   },
   "source": [
    "## Step 2: Define the bank support agent\n",
    "\n",
    "In this step, we define how the support agent works by setting up its input dependencies and expected output format.\n",
    "\n",
    "Code Breakdown:\n",
    "\n",
    "\t1.\tInput Dependencies:\n",
    "\t•\tSupportDependencies specifies what the agent needs to function:\n",
    "\t•\tcustomer_id (the ID of the customer being helped).\n",
    "\t•\tdb (a connection to the database).\n",
    "\n",
    "\t2.\tExpected Response Format:\n",
    "\t•\tSupportResult defines the response structure, ensuring consistency:\n",
    "\t•\tsupport_advice: A string containing advice given to the customer.\n",
    "\t•\tblock_card: A boolean indicating whether the customer’s card should be blocked.\n",
    "\t•\trisk: An integer from 0 to 10, representing the assessed risk level.\n",
    "\n",
    "\t3.\tAgent Initialization:\n",
    "\t•\tWe create the support_agent using the Agent class:\n",
    "\t•\tmodel: The underlying AI model.\n",
    "\t•\tdeps_type: Specifies the required input dependencies (SupportDependencies).\n",
    "\t•\tresult_type: Defines the expected output structure (SupportResult).\n",
    "\t•\tsystem_prompt: A prompt guiding the agent to act as a bank support representative. The prompt ensures customer-specific responses by including the customer’s name.\n",
    "\n",
    "\n",
    "Why This Design Matters:\n",
    "\n",
    "- By defining input dependencies and output formats, we guarantee that the agent always receives the correct data and produces predictable results. This makes integration into larger systems easier and supports clear, actionable responses."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "id": "AY9RR8RYVc94"
   },
   "outputs": [],
   "source": [
    "@dataclass\n",
    "class SupportDependencies:\n",
    "    customer_id: int\n",
    "    db: DatabaseConn\n",
    "\n",
    "\n",
    "class SupportResult(BaseModel):\n",
    "    support_advice: str = Field(description='Advice returned to the customer')\n",
    "    block_card: bool = Field(description='Whether to block their')\n",
    "    risk: int = Field(description='Risk level of query', ge=0, le=10)\n",
    "\n",
    "\n",
    "support_agent = Agent(\n",
    "    model,\n",
    "    deps_type=SupportDependencies,\n",
    "    result_type=SupportResult,\n",
    "    system_prompt=(\n",
    "        'You are a support agent in our bank, give the '\n",
    "        'customer support and judge the risk level of their query. '\n",
    "        \"Reply using the customer's name.\"\n",
    "    ),\n",
    ")\n"
   ]
  },
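  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `SupportResult` is a Pydantic model, whatever the agent returns is validated against it. As a quick illustration (not part of the original example), building a result by hand with `risk` outside the 0-10 range raises a `ValidationError`:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pydantic import ValidationError\n",
    "\n",
    "# A valid result passes validation and comes back as a typed object.\n",
    "ok = SupportResult(support_advice='No action needed.', block_card=False, risk=1)\n",
    "print(ok)\n",
    "\n",
    "# An out-of-range risk value is rejected by the ge=0 / le=10 constraints.\n",
    "try:\n",
    "    SupportResult(support_advice='Escalate immediately.', block_card=True, risk=42)\n",
    "except ValidationError as exc:\n",
    "    print(exc.errors()[0]['msg'])"
   ]
  },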
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "qP8FlhdPXCmE"
   },
   "source": [
    "## Step 3: Add a dynamic system prompt\n",
    "\n",
    "This code attaches a dynamic system prompt function. Before the model sees the user's query, it gets a special system prompt enriched with the customer's name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "id": "SgIfK_vjXQoU"
   },
   "outputs": [],
   "source": [
    "@support_agent.system_prompt\n",
    "async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:\n",
    "    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)\n",
    "    return f\"The customer's name is {customer_name!r}\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iIAtWm7KXroG"
   },
   "source": [
    "## Step 4: Defining tools that the agent can use\n",
    "\n",
    "By decorating customer_balance with @support_agent.tool, we're telling the model it can call this function to retrieve the customer's balance. This transforms the model from a passive text generator into an active problem solver that can interact with external resources.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "id": "tugJ-nDEXv2x"
   },
   "outputs": [],
   "source": [
    "@support_agent.tool\n",
    "async def customer_balance(\n",
    "    ctx: RunContext[SupportDependencies], include_pending: bool\n",
    ") -> str:\n",
    "    \"\"\"Returns the customer's current account balance.\"\"\"\n",
    "    balance = await ctx.deps.db.customer_balance(\n",
    "        id=ctx.deps.customer_id,\n",
    "        include_pending=include_pending,\n",
    "    )\n",
    "    return f'${balance:.2f}'"
   ]
  },
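  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Additional tools are registered in exactly the same way. The tool below is hypothetical and not part of the original example; it simply exposes the `customer_name` lookup from `DatabaseConn` (the dynamic system prompt above already injects the name), purely to illustrate the pattern:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical extra tool (not in the original example), registered the same way.\n",
    "# The docstring is what describes the tool to the model.\n",
    "@support_agent.tool\n",
    "async def customer_name_on_record(ctx: RunContext[SupportDependencies]) -> str:\n",
    "    \"\"\"Returns the name on record for the current customer.\"\"\"\n",
    "    name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)\n",
    "    return name or 'unknown'"
   ]
  },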
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dj_m-C-8XRub"
   },
   "source": [
    "## Step 5: Run agent\n",
    "\n",
    "When asked about the customer’s balance, the agent uses the injected dependencies and tools to return a structured response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "LFcl0e8qX3qY",
    "outputId": "6cd21281-43a5-457b-9113-48b30a587986"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "support_advice='Hi John, your current balance is $123.45.' block_card=False risk=0\n"
     ]
    }
   ],
   "source": [
    "deps = SupportDependencies(customer_id=123, db=DatabaseConn())\n",
    "result = support_agent.run_sync('What is my balance?', deps=deps)\n",
    "print(result.data)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "EIBk69qnYBV0",
    "outputId": "3cab1e98-495b-4e3c-abdf-fa19859d059a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "support_advice='Hello John, Your account balance is $123.45. We will block your card immediately and send you a new one. You can call us at 1-800-123-4567 if you need further assistance.' block_card=True risk=5\n"
     ]
    }
   ],
   "source": [
    "result = support_agent.run_sync('I just lost my card!', deps=deps)\n",
    "print(result.data)\n"
   ]
  },
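  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because `result.data` is a validated `SupportResult`, downstream code can branch on its fields directly instead of parsing free text. A minimal sketch (not part of the original example):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# result.data is a SupportResult, so its fields are typed and validated.\n",
    "if result.data.block_card:\n",
    "    print(f'Blocking card for customer {deps.customer_id} (risk {result.data.risk}/10)')\n",
    "print(result.data.support_advice)"
   ]
  },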
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "l2NpC9TYYSGe"
   },
   "source": [
    "You can check the results and message history including the system prompt and the tool usage:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "dnK99n4sz9Cd",
    "outputId": "850bbf86-b079-4f09-b44c-1a9c0274566b"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'_all_messages': [ModelRequest(parts=[SystemPromptPart(content=\"You are a support agent in our bank, give the customer support and judge the risk level of their query. Reply using the customer's name.\", part_kind='system-prompt'), SystemPromptPart(content=\"The customer's name is 'John'\", part_kind='system-prompt'), UserPromptPart(content='I just lost my card!', timestamp=datetime.datetime(2024, 12, 20, 23, 21, 25, 269653, tzinfo=datetime.timezone.utc), part_kind='user-prompt')], kind='request'),\n",
       "  ModelResponse(parts=[ToolCallPart(tool_name='customer_balance', args=ArgsJson(args_json='{\"include_pending\": false}'), tool_call_id='nli5z3jgK', part_kind='tool-call')], timestamp=datetime.datetime(2024, 12, 20, 23, 21, 25, tzinfo=datetime.timezone.utc), kind='response'),\n",
       "  ModelRequest(parts=[ToolReturnPart(tool_name='customer_balance', content='$123.45', tool_call_id='nli5z3jgK', timestamp=datetime.datetime(2024, 12, 20, 23, 21, 25, 835008, tzinfo=datetime.timezone.utc), part_kind='tool-return')], kind='request'),\n",
       "  ModelResponse(parts=[ToolCallPart(tool_name='final_result', args=ArgsJson(args_json='{\"risk\": 5, \"support_advice\": \"Hello John, Your account balance is $123.45. We will block your card immediately and send you a new one. You can call us at 1-800-123-4567 if you need further assistance.\", \"block_card\": true}'), tool_call_id='pSBlsz5SY', part_kind='tool-call')], timestamp=datetime.datetime(2024, 12, 20, 23, 21, 26, tzinfo=datetime.timezone.utc), kind='response'),\n",
       "  ModelRequest(parts=[ToolReturnPart(tool_name='final_result', content='Final result processed.', tool_call_id='pSBlsz5SY', timestamp=datetime.datetime(2024, 12, 20, 23, 21, 27, 162246, tzinfo=datetime.timezone.utc), part_kind='tool-return')], kind='request')],\n",
       " '_new_message_index': 0,\n",
       " 'data': SupportResult(support_advice='Hello John, Your account balance is $123.45. We will block your card immediately and send you a new one. You can call us at 1-800-123-4567 if you need further assistance.', block_card=True, risk=5),\n",
       " '_usage': Usage(requests=2, request_tokens=673, response_tokens=110, total_tokens=783, details=None)}"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result.__dict__"
   ]
  },
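  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To walk the history programmatically, you can iterate over the messages and their parts. The sketch below relies on the private `_all_messages` attribute printed above; depending on your pydantic-ai version there may also be a public accessor for the same data.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Iterate over the run's requests/responses and their parts (a sketch based on\n",
    "# the structure printed above; it uses the private _all_messages attribute).\n",
    "for message in result.__dict__['_all_messages']:\n",
    "    for part in message.parts:\n",
    "        print(f'{message.kind:8} -> {part.part_kind}')"
   ]
  },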
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "QnhC_InJZyUY"
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}