Chainlit Mistral reasoning

Details

File: third_party/Chainlit/Chainlit_Mistral_reasoning.ipynb

Type: Jupyter Notebook

Use Cases: Reasoning

Integrations: Chainlit

Content

Notebook content (JSON format):

{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "789577ac-4396-484f-a2ca-776bb1a128a8",
   "metadata": {},
   "source": [
    "<center>\n",
    "    <p style=\"text-align:center\">\n",
    "        <img alt=\"chainlit logo\" src=\"public/logo_light.svg\" width=\"200\"/>\n",
    "        <br>\n",
    "        <a href=\"https://docs.chainlit.io/\">Documentation</a>\n",
    "        |\n",
    "        <a href=\"https://discord.com/invite/k73SQ3FyUh\">Discord</a>\n",
    "    </p>\n",
    "</center>\n",
    "\n",
    "# Build a Chainlit App with Mistral AI\n",
    "The goal of this cookbook is to show how one can build a **Chainlit** application on top of **Mistral AI**'s APIs!\n",
    "\n",
    "We will highlight the reasoning capabilities of Mistral's LLMs by letting a self-reflective agent assess whether it has gathered enough information to answer _nested_ user questions, such as **\"What is the weather in Napoleon's hometown?\"**\n",
    "\n",
    "To answer such questions, our application should go through multiple-step reasoning: first get Napoleon's hometown, then fetch the weather for that location.\n",
    "\n",
    "You can read through this notebook or simply go with `chainlit run app.py` since the whole code is in `app.py`. \n",
    "You will find here a split of the whole application code with explanations:\n",
    "\n",
    "- [Setup](#setup)\n",
    "- [Define available tools](#define-tools)\n",
    "- [Agent logic](#agent-logic)\n",
    "- [On message callback](#on-message)\n",
    "- [Starter questions](#starter-questions)\n",
    "\n",
    "Here's a visual of what we will have built in a few minutes:\n",
    "\n",
    "<center>\n",
    "    <p style=\"text-align:center\">\n",
    "        <img alt=\"chat visual\" src=\"public/chat-visual.jpg\" width=\"600\"/>\n",
    "        <br>\n",
    "    </p>\n",
    "</center>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4e42fd5-ec5d-4038-b2d2-f6cfa1762819",
   "metadata": {},
   "source": [
    "<a id=\"setup\"></a>\n",
    "## Setup\n",
    "\n",
    "### Requirements\n",
    "We will install `mistralai`, `chainlit` and `python-dotenv`. \n",
    "\n",
    "Be sure to create a `.env` file with the line `MISTRAL_API_KEY=` followed by your Mistral AI API key."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6285fa2b-dbb7-4e37-80d0-c1a905b3e1d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install mistralai chainlit python-dotenv"
   ]
  },
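  {
   "cell_type": "markdown",
   "id": "a7c1d2e3-90ab-4cde-8f12-3456789abcde",
   "metadata": {},
   "source": [
    "When you launch the app with `chainlit run app.py`, Chainlit should pick up the `.env` file for you. If you execute the cells of this notebook directly instead, here is a minimal sketch, using the `python-dotenv` package installed above, to load the key yourself:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8d2e3f4-01bc-4def-9a23-456789abcdef",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional when using `chainlit run` (which reads .env itself);\n",
    "# needed if you execute these cells directly in a notebook kernel.\n",
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv()  # exposes MISTRAL_API_KEY via os.environ"
   ]
  },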
  {
   "cell_type": "markdown",
   "id": "baae9a2a-f4ee-4f27-b9da-73c7ac73b3c3",
   "metadata": {},
   "source": [
    "### Optional - Tracing\n",
    "\n",
    "You can get a `LITERAL_API_KEY` from [Literal AI](https://docs.getliteral.ai/get-started/installation#how-to-get-my-api-key) to setup tracing and visualize the flow of your application. \n",
    "\n",
    "Within the code, Chainlit offers the `@chainlit.step` decorators to trace your functions, along with an automatic instrumentation of Mistral's API via `chainlit.instrument_mistralai()`.\n",
    "\n",
    "The trace for this notebook example is: https://cloud.getliteral.ai/thread/ea173d7d-a53f-4eaf-a451-82090b07e6ff."
   ]
  },
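  {
   "cell_type": "markdown",
   "id": "c9e3f4a5-12cd-4ef0-ab34-56789abcdef0",
   "metadata": {},
   "source": [
    "As a minimal sketch, assuming `LITERAL_API_KEY` is set in your `.env`, enabling the instrumentation is a single call at startup:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0f4a5b6-23de-4f01-bc45-6789abcdef01",
   "metadata": {},
   "outputs": [],
   "source": [
    "import chainlit as cl\n",
    "\n",
    "# With LITERAL_API_KEY in the environment, this traces every\n",
    "# Mistral API call so it shows up in Literal AI.\n",
    "cl.instrument_mistralai()"
   ]
  },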
  {
   "cell_type": "markdown",
   "id": "659f4eff-e67a-4241-954b-04c29ba2dc45",
   "metadata": {},
   "source": [
    "<a id=\"define-tools\"></a>\n",
    "## Define available tools\n",
    "\n",
    "In the next cell, we define the tools, and their JSON definitions, which we will provide to the agent. We have two tools:\n",
    "- `get_current_weather` -> takes in a location\n",
    "- `get_home_town` -> takes in a person's name\n",
    "\n",
    "Optionally, you can decorate your tool definitions with `@cl.step()`, specifying a type and name to organize the traces you can visualize from [Literal AI](https://literalai.com).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "2fa631f5-0ef0-4c75-91ff-34e4a8a4204e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import chainlit as cl\n",
    "\n",
    "@cl.step(type=\"tool\", name=\"get_current_weather\")\n",
    "async def get_current_weather(location):\n",
    "    # Make an actual API call! To open-meteo.com for instance.\n",
    "    return json.dumps({\n",
    "        \"location\": location,\n",
    "        \"temperature\": \"29\",\n",
    "        \"unit\": \"celsius\",\n",
    "        \"forecast\": [\"sunny\"],\n",
    "    })\n",
    "\n",
    "@cl.step(type=\"tool\", name=\"get_home_town\")\n",
    "async def get_home_town(person: str) -> str:\n",
    "    \"\"\"Get the hometown of a person\"\"\"\n",
    "    return \"Ajaccio, Corsica\"\n",
    "\n",
    "\"\"\"\n",
    "JSON tool definitions provided to the LLM.\n",
    "\"\"\"\n",
    "tools = [\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_home_town\",\n",
    "            \"description\": \"Get the home town of a specific person\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"person\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The name of a person (first and last names) to identify.\"\n",
    "                    }\n",
    "                },\n",
    "                \"required\": [\"person\"]\n",
    "            },\n",
    "        },\n",
    "    },\n",
    "    {\n",
    "        \"type\": \"function\",\n",
    "        \"function\": {\n",
    "            \"name\": \"get_current_weather\",\n",
    "            \"description\": \"Get the current weather in a given location\",\n",
    "            \"parameters\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"location\": {\n",
    "                        \"type\": \"string\",\n",
    "                        \"description\": \"The city and state, e.g. San Francisco, CA\",\n",
    "                    },\n",
    "                },\n",
    "                \"required\": [\"location\"],\n",
    "            },\n",
    "        },\n",
    "    }\n",
    "]\n",
    "\n",
    "# This helper function runs multiple tool calls in parallel, asynchronously.\n",
    "async def run_multiple(tool_calls):\n",
    "    \"\"\"\n",
    "    Execute multiple tool calls asynchronously.\n",
    "    \"\"\"\n",
    "    available_tools = {\n",
    "        \"get_current_weather\": get_current_weather,\n",
    "        \"get_home_town\": get_home_town\n",
    "    }\n",
    "\n",
    "    async def run_single(tool_call):\n",
    "        function_name = tool_call.function.name\n",
    "        function_to_call = available_tools[function_name]\n",
    "        function_args = json.loads(tool_call.function.arguments)\n",
    "\n",
    "        function_response = await function_to_call(**function_args)\n",
    "        return {\n",
    "            \"tool_call_id\": tool_call.id,\n",
    "            \"role\": \"tool\",\n",
    "            \"name\": function_name,\n",
    "            \"content\": function_response,\n",
    "        }\n",
    "\n",
    "    # Run tool calls in parallel.\n",
    "    tool_results = await asyncio.gather(\n",
    "        *(run_single(tool_call) for tool_call in tool_calls)\n",
    "    )\n",
    "    return tool_results"
   ]
  },
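  {
   "cell_type": "markdown",
   "id": "e5b4a6f8-9d0c-4e1f-8a3b-4c5d6e7f8a9b",
   "metadata": {},
   "source": [
    "`get_current_weather` above returns a canned response to keep the example self-contained. For reference, here is a sketch of what a live implementation could look like against the free open-meteo.com API (the endpoint shapes are assumptions based on their public docs; `httpx` is used for async HTTP and is not installed above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6c5b7a9-0e1d-4f2a-9b4c-5d6e7f8a9b0c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "\n",
    "import httpx\n",
    "\n",
    "async def get_current_weather_live(location: str) -> str:\n",
    "    \"\"\"Hypothetical drop-in replacement for get_current_weather.\"\"\"\n",
    "    async with httpx.AsyncClient() as client:\n",
    "        # Resolve the location name to coordinates.\n",
    "        geo = (await client.get(\n",
    "            \"https://geocoding-api.open-meteo.com/v1/search\",\n",
    "            params={\"name\": location, \"count\": 1},\n",
    "        )).json()\n",
    "        place = geo[\"results\"][0]\n",
    "        # Fetch the current weather at those coordinates.\n",
    "        forecast = (await client.get(\n",
    "            \"https://api.open-meteo.com/v1/forecast\",\n",
    "            params={\n",
    "                \"latitude\": place[\"latitude\"],\n",
    "                \"longitude\": place[\"longitude\"],\n",
    "                \"current_weather\": True,\n",
    "            },\n",
    "        )).json()\n",
    "        return json.dumps({\n",
    "            \"location\": location,\n",
    "            \"temperature\": str(forecast[\"current_weather\"][\"temperature\"]),\n",
    "            \"unit\": \"celsius\",\n",
    "        })"
   ]
  },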
  {
   "cell_type": "markdown",
   "id": "9968ae44-e024-47d2-9b96-ac732921f67e",
   "metadata": {},
   "source": [
    "<a id=\"agent-logic\"></a>\n",
    "## Agent logic\n",
    "\n",
    "For the agent logic, we simply repeat the following pattern (max. 5 times):\n",
    "- ask the user question to Mistral, making both tools available\n",
    "- execute tools if Mistral asks for it, otherwise return message\n",
    "\n",
    "You will notice that we added an optional `@cl.step` of type `run` and with optional tags to trace the call accordingly in [Literal AI](https://literalai.com). \n",
    "\n",
    "Visual trace: https://cloud.getliteral.ai/thread/ea173d7d-a53f-4eaf-a451-82090b07e6ff\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "b34cce66-9c2b-4b98-9981-5207928b1764",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import chainlit as cl\n",
    "\n",
    "from mistralai.client import MistralClient\n",
    "\n",
    "mai_client = MistralClient(api_key=os.environ[\"MISTRAL_API_KEY\"])\n",
    "\n",
    "@cl.step(type=\"run\", tags=[\"to_score\"])\n",
    "async def run_agent(user_query: str):\n",
    "    messages = [\n",
    "        {\n",
    "            \"role\": \"system\",\n",
    "            \"content\": \"If needed, leverage the tools at your disposal to answer the user question, otherwise provide the answer.\"\n",
    "        },\n",
    "        {\n",
    "            \"role\": \"user\", \n",
    "            \"content\": user_query\n",
    "        }\n",
    "    ]\n",
    "\n",
    "    number_iterations = 0\n",
    "    answer_message_content = None\n",
    "\n",
    "    while number_iterations < 5:\n",
    "        completion = mai_client.chat(\n",
    "            model=\"mistral-large-latest\",\n",
    "            messages=messages,\n",
    "            tool_choice=\"auto\", # use `any` to force a tool call\n",
    "            tools=tools,\n",
    "        )\n",
    "        message = completion.choices[0].message\n",
    "        messages.append(message)\n",
    "        answer_message_content = message.content\n",
    "\n",
    "        if not message.tool_calls:\n",
    "            # The LLM deemed no tool calls necessary,\n",
    "            # we break out of the loop and display the returned message\n",
    "            break\n",
    "\n",
    "        tool_results = await run_multiple(message.tool_calls)\n",
    "        messages.extend(tool_results)\n",
    "\n",
    "        number_iterations += 1\n",
    "\n",
    "    return answer_message_content\n"
   ]
  },
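  {
   "cell_type": "markdown",
   "id": "2c1d0e9f-3a4b-4c5d-8e8f-9a0b1c2d3e4f",
   "metadata": {},
   "source": [
    "Under `chainlit run`, `run_agent` is invoked by the message callback defined in the next section. If you want to poke at the loop from this notebook instead, a quick sanity check could look like the cell below (IPython supports top-level `await`). Note that `@cl.step` expects an active Chainlit session, so outside of one you may need to comment the decorator out first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d2e1f0a-4b5c-4d6e-9f9a-0b1c2d3e4f5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check of the two-step reasoning loop: hometown first, then weather.\n",
    "# NOTE: comment out the @cl.step decorator on run_agent if you run this\n",
    "# outside a Chainlit session, since steps expect an active context.\n",
    "answer = await run_agent(\"What is the weather in Napoleon's hometown?\")\n",
    "print(answer)"
   ]
  },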
  {
   "cell_type": "markdown",
   "id": "28daf2a9-22d7-48a0-abf9-f45ba16c3896",
   "metadata": {},
   "source": [
    "<a id=\"on-message\"></a>\n",
    "## On message callback\n",
    "\n",
    "The callback below, properly annotated with `@cl.on_message`, ensures our `run_agent` function is called upon every new user message."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "9c9960b4-d7cd-4d27-85fb-4bac26a21635",
   "metadata": {},
   "outputs": [],
   "source": [
    "import chainlit as cl\n",
    "\n",
    "@cl.on_message\n",
    "async def main(message: cl.Message):\n",
    "    \"\"\"\n",
    "    Main message handler for incoming user messages.\n",
    "    \"\"\"\n",
    "    answer_message = await run_agent(message.content)\n",
    "    await cl.Message(content=answer_message).send()\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "226ad995-3dd8-4205-b118-c44d200d0908",
   "metadata": {},
   "source": [
    "<a id=\"starter-questions\"></a>\n",
    "## Starter questions\n",
    "\n",
    "You can define starter questions for your users to easily try out your application, which will look like this:\n",
    "<center>\n",
    "    <p style=\"text-align:center\">\n",
    "        <img alt=\"starters\" src=\"public/starters.jpg\" width=\"500\"/>\n",
    "        <br>\n",
    "    </p>\n",
    "</center>\n",
    "\n",
    "We have got many more Chainlit features in store (authentication, feedback, Slack/Discord integrations, etc.) to let you build custom LLM applications and really take advantage of Mistral's LLM capabilities.\n",
    "\n",
    "Please visit the <a href=\"https://docs.chainlit.io/\">Chainlit documentation</a> to learn more!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "161fa63d-1465-436f-8076-280b7c70e12e",
   "metadata": {},
   "outputs": [],
   "source": [
    "async def set_starters():\n",
    "    return [\n",
    "        cl.Starter(\n",
    "            label=\"What's the weather in Napoleon's hometown\",\n",
    "            message=\"What's the weather in Napoleon's hometown?\",\n",
    "            icon=\"/images/idea.svg\",\n",
    "        ),\n",
    "        cl.Starter(\n",
    "            label=\"What's the weather in Paris, TX?\",\n",
    "            message=\"What's the weather in Paris, TX?\",\n",
    "            icon=\"/images/learn.svg\",\n",
    "        ),\n",
    "        cl.Starter(\n",
    "            label=\"What's the weather in Michel-Angelo's hometown?\",\n",
    "            message=\"What's the weather in Michel-Angelo's hometown?\",\n",
    "            icon=\"/images/write.svg\",\n",
    "        ),\n",
    "    ]"
   ]
  }
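  ,
  {
   "cell_type": "markdown",
   "id": "4e3f2a1b-5c6d-4e7f-8a9b-1c2d3e4f5a6b",
   "metadata": {},
   "source": [
    "With all of the pieces above assembled in `app.py`, you can launch the application from a terminal (the `-w` flag reloads the app when the file changes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f4a3b2c-6d7e-4f8a-9b0c-2d3e4f5a6b7c",
   "metadata": {},
   "outputs": [],
   "source": [
    "!chainlit run app.py -w"
   ]
  }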
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}