← Back to Cookbook
batch ocr
Details
File: mistral/ocr/batch_ocr.ipynb
Type: Jupyter Notebook
Use Cases: OCR, Batch
Content
Notebook content (JSON format):
{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "ahC1s4OldYyM" }, "source": [ "# OCR at scale via Mistral's Batch API\n", "\n", "---\n", "\n", "## Apply OCR to Convert Images into Text\n", "\n", "Optical Character Recognition (OCR) allows you to retrieve text data from images. With Mistral OCR, you can do this extremely fast and effectively, extracting text from hundreds and thousands of images (or PDFs).\n", "\n", "In this simple cookbook, we will extract text from a set of images using two methods:\n", "- [Without Batch Inference](#scrollTo=qmXyB3rPlXQW): Looping through the dataset, extracting text from each image, and saving the result.\n", "- [With Batch Inference](#scrollTo=jYfWYjzTmixB): Leveraging Batch Inference to extract text with a 50% cost reduction.\n", "\n", "---\n", "\n", "### Used\n", "\n", "- OCR\n", "- Batch Inference\n" ] }, { "cell_type": "markdown", "metadata": { "id": "Sf84okJJmm7M" }, "source": [ "### Setup\n", "First, let's install `mistralai` and `datasets`" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "X1EBW_a6gRUD", "outputId": "fb38d445-466b-447f-e49f-a991526d29fc" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Requirement already satisfied: mistralai in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (1.5.0)\n", "Requirement already satisfied: datasets in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (3.2.0)\n", "Requirement already satisfied: eval-type-backport>=0.2.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (0.2.0)\n", "Requirement already satisfied: httpx>=0.27.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (0.27.0)\n", "Requirement already satisfied: jsonpath-python>=1.0.6 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (1.0.6)\n", "Requirement already satisfied: pydantic>=2.9.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (2.9.2)\n", "Requirement already satisfied: python-dateutil>=2.8.2 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (2.8.2)\n", "Requirement already satisfied: typing-inspect>=0.9.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from mistralai) (0.9.0)\n", "Requirement already satisfied: filelock in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (3.13.1)\n", "Requirement already satisfied: numpy>=1.17 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (1.26.2)\n", "Requirement already satisfied: pyarrow>=15.0.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (15.0.0)\n", "Requirement already satisfied: dill<0.3.9,>=0.3.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (0.3.7)\n", "Requirement already satisfied: pandas in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (2.1.4)\n", "Requirement already satisfied: requests>=2.32.2 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (2.32.3)\n", "Requirement already satisfied: tqdm>=4.66.3 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (4.67.1)\n", "Requirement already satisfied: xxhash in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (3.4.1)\n", "Requirement already satisfied: multiprocess<0.70.17 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (0.70.15)\n", "Requirement already satisfied: fsspec<=2024.9.0,>=2023.1.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from fsspec[http]<=2024.9.0,>=2023.1.0->datasets) (2024.9.0)\n", "Requirement already satisfied: aiohttp in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (3.9.3)\n", "Requirement already satisfied: huggingface-hub>=0.23.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (0.28.1)\n", "Requirement already satisfied: packaging in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (23.2)\n", "Requirement already satisfied: pyyaml>=5.1 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from datasets) (6.0.1)\n", "Requirement already satisfied: aiosignal>=1.1.2 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from aiohttp->datasets) (1.3.1)\n", "Requirement already satisfied: attrs>=17.3.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from aiohttp->datasets) (23.2.0)\n", "Requirement already satisfied: frozenlist>=1.1.1 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from aiohttp->datasets) (1.4.1)\n", "Requirement already satisfied: multidict<7.0,>=4.5 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from aiohttp->datasets) (6.0.5)\n", "Requirement already satisfied: yarl<2.0,>=1.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from aiohttp->datasets) (1.9.4)\n", "Requirement already satisfied: anyio in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27.0->mistralai) (3.7.1)\n", "Requirement already satisfied: certifi in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27.0->mistralai) (2024.2.2)\n", "Requirement already satisfied: httpcore==1.* in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27.0->mistralai) (1.0.4)\n", "Requirement already satisfied: idna in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27.0->mistralai) (2.10)\n", "Requirement already satisfied: sniffio in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpx>=0.27.0->mistralai) (1.3.0)\n", "Requirement already satisfied: h11<0.15,>=0.13 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from httpcore==1.*->httpx>=0.27.0->mistralai) (0.14.0)\n", "Requirement already satisfied: typing-extensions>=3.7.4.3 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from huggingface-hub>=0.23.0->datasets) (4.12.2)\n", "Requirement already satisfied: annotated-types>=0.6.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9.0->mistralai) (0.6.0)\n", "Requirement already satisfied: pydantic-core==2.23.4 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pydantic>=2.9.0->mistralai) (2.23.4)\n", "Requirement already satisfied: six>=1.5 in c:\\users\\di-co\\appdata\\roaming\\python\\python312\\site-packages (from python-dateutil>=2.8.2->mistralai) (1.16.0)\n", "Requirement already satisfied: charset-normalizer<4,>=2 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests>=2.32.2->datasets) (3.3.2)\n", "Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from requests>=2.32.2->datasets) (2.2.1)\n", "Requirement already satisfied: colorama in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from tqdm>=4.66.3->datasets) (0.4.6)\n", "Requirement already satisfied: mypy-extensions>=0.3.0 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from typing-inspect>=0.9.0->mistralai) (1.0.0)\n", "Requirement already satisfied: pytz>=2020.1 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pandas->datasets) (2023.3.post1)\n", "Requirement already satisfied: tzdata>=2022.1 in c:\\users\\di-co\\appdata\\local\\programs\\python\\python312\\lib\\site-packages (from pandas->datasets) (2023.3)\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "\n", "[notice] A new release of pip is available: 24.0 -> 25.0.1\n", "[notice] To update, run: python.exe -m pip install --upgrade pip\n" ] } ], "source": [ "!pip install mistralai datasets" ] }, { "cell_type": "markdown", "metadata": { "id": "nTpiGWkpmvSb" }, "source": [ "We can now set up our client. You can create an API key on our [Plateforme](https://console.mistral.ai/api-keys/)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "AwG2kwfTlbW1" }, "outputs": [], "source": [ "from mistralai import Mistral\n", "\n", "api_key = \"API_KEY\"\n", "client = Mistral(api_key=api_key)\n", "ocr_model = \"mistral-ocr-latest\"" ] }, { "cell_type": "markdown", "metadata": { "id": "qmXyB3rPlXQW" }, "source": [ "## Without Batch\n", "\n", "As an example, let's use Mistral OCR to extract text from multiple images.\n", "\n", "We will use a dataset containing raw image data. To send this data via an image URL, we need to encode it in base64. For more information, please visit our [Vision Documentation](https://docs.mistral.ai/capabilities/vision/#passing-a-base64-encoded-image)." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "sNo8DZ7WbaBq" }, "outputs": [], "source": [ "import base64\n", "from io import BytesIO\n", "from PIL import Image\n", "\n", "def encode_image_data(image_data):\n", " try:\n", " # Ensure image_data is bytes\n", " if isinstance(image_data, bytes):\n", " # Directly encode bytes to base64\n", " return base64.b64encode(image_data).decode('utf-8')\n", " else:\n", " # Convert image data to bytes if it's not already\n", " buffered = BytesIO()\n", " image_data.save(buffered, format=\"JPEG\")\n", " return base64.b64encode(buffered.getvalue()).decode('utf-8')\n", " except Exception as e:\n", " print(f\"Error encoding image: {e}\")\n", " return None" ] }, { "cell_type": "markdown", "metadata": { "id": "3m7d2yBImCfO" }, "source": [ "For this demo, we will use a simple dataset containing numerous documents and scans in image format. Specifically, we will use the `HuggingFaceM4/DocumentVQA` dataset, loaded via the `datasets` library.\n", "\n", "We will download only 100 samples for this demonstration." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZJ55QEifcgUq" }, "outputs": [], "source": [ "from datasets import load_dataset\n", "\n", "n_samples = 100\n", "dataset = load_dataset(\"HuggingFaceM4/DocumentVQA\", split=\"train\", streaming=True)\n", "subset = list(dataset.take(n_samples))" ] }, { "cell_type": "markdown", "metadata": { "id": "8TEAOOVUmVNu" }, "source": [ "With our subset of 100 samples ready, we can loop through each image to extract the text.\n", "\n", "We will save the results in a new dataset and export it as a JSONL file." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "OOUGkkrfce63", "outputId": "58510dc0-8191-4ad6-fea4-b005f5198926" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ " 0%| | 0/100 [00:00<?, ?it/s]" ] }, { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████| 100/100 [02:13<00:00, 1.33s/it]\n" ] } ], "source": [ "from tqdm import tqdm\n", "\n", "ocr_dataset = []\n", "for sample in tqdm(subset):\n", " image_data = sample['image'] # 'image' contains the actual image data\n", "\n", " # Encode the image data to base64\n", " base64_image = encode_image_data(image_data)\n", " image_url = f\"data:image/jpeg;base64,{base64_image}\"\n", "\n", " # Process the image using Mistral OCR\n", " response = client.ocr.process(\n", " model=ocr_model,\n", " document={\n", " \"type\": \"image_url\",\n", " \"image_url\": image_url,\n", " }\n", " )\n", "\n", " # Store the image data and OCR content in the new dataset\n", " ocr_dataset.append({\n", " 'image': base64_image,\n", " 'ocr_content': response.pages[0].markdown # Since we are dealing with single images, there will be only one page\n", " })" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8bcncJL0dAFk" }, "outputs": [], "source": [ "import json\n", "\n", "with open('ocr_dataset.json', 'w') as f:\n", " json.dump(ocr_dataset, f, indent=4)" ] }, { "cell_type": "markdown", "metadata": { "id": "jYfWYjzTmixB" }, "source": [ "Perfect, we have extracted all text from the 100 samples. However, this process can be made more cost-efficient using Batch Inference.\n", "\n", "## With Batch\n", "\n", "To use Batch Inference, we need to create a JSONL file containing all the image data and request information for our batch.\n", "\n", "Let's create a function called `create_batch_file` to handle this task by generating a file in the proper format." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "EHKPWyqVhFn_" }, "outputs": [], "source": [ "def create_batch_file(image_urls, output_file):\n", " with open(output_file, 'w') as file:\n", " for index, url in enumerate(image_urls):\n", " entry = {\n", " \"custom_id\": str(index),\n", " \"body\": {\n", " \"document\": {\n", " \"type\": \"image_url\",\n", " \"image_url\": url\n", " },\n", " \"include_image_base64\": True\n", " }\n", " }\n", " file.write(json.dumps(entry) + '\\n')" ] }, { "cell_type": "markdown", "metadata": { "id": "-gmWu5dGm79P" }, "source": [ "The next step involves encoding the data of each image into base64 and saving the URL of each image that will be used." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "v_Fg_t-shHgj", "outputId": "8fbe8b03-7d8a-4c13-96b0-98d56a4c4c82" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ " 48%|████▊ | 48/100 [00:00<00:01, 41.07it/s]" ] }, { "name": "stderr", "output_type": "stream", "text": [ "100%|██████████| 100/100 [00:04<00:00, 24.48it/s]\n" ] } ], "source": [ "image_urls = []\n", "for sample in tqdm(subset):\n", " image_data = sample['image'] # 'image' contains the actual image data\n", "\n", " # Encode the image data to base64 and add the url to the list\n", " base64_image = encode_image_data(image_data)\n", " image_url = f\"data:image/jpeg;base64,{base64_image}\"\n", " image_urls.append(image_url)" ] }, { "cell_type": "markdown", "metadata": { "id": "h6q1JR73nIhe" }, "source": [ "We can now create our batch file." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "7A3cRUP0gf6W" }, "outputs": [], "source": [ "batch_file = \"batch_file.jsonl\"\n", "create_batch_file(image_urls, batch_file)" ] }, { "cell_type": "markdown", "metadata": { "id": "Z4ME_pJCnM6-" }, "source": [ "With everything ready, we can upload it to the API." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "YGBd3kRfgnyq" }, "outputs": [], "source": [ "batch_data = client.files.upload(\n", " file={\n", " \"file_name\": batch_file,\n", " \"content\": open(batch_file, \"rb\")},\n", " purpose = \"batch\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "9Q7dpJ07nVOO" }, "source": [ "The file is uploaded, but the batch inference has not started yet. To initiate it, we need to create a job." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Y2iBH3Ikgzb7" }, "outputs": [], "source": [ "created_job = client.batch.jobs.create(\n", " input_files=[batch_data.id],\n", " model=ocr_model,\n", " endpoint=\"/v1/ocr\",\n", " metadata={\"job_type\": \"testing\"}\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "JE_DBRu4nbXz" }, "source": [ "Our batch is ready and running!\n", "\n", "We can retrieve information using the following method:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "0dIph-xtg94n", "outputId": "cf8adcc4-310c-435f-eab0-7fd2c706d4b9" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Status: QUEUED\n", "Total requests: 100\n", "Failed requests: 0\n", "Successful requests: 0\n", "Percent done: 0.0%\n" ] } ], "source": [ "retrieved_job = client.batch.jobs.get(job_id=created_job.id)\n", "print(f\"Status: {retrieved_job.status}\")\n", "print(f\"Total requests: {retrieved_job.total_requests}\")\n", "print(f\"Failed requests: {retrieved_job.failed_requests}\")\n", "print(f\"Successful requests: {retrieved_job.succeeded_requests}\")\n", "print(\n", " f\"Percent done: {round((retrieved_job.succeeded_requests + retrieved_job.failed_requests) / retrieved_job.total_requests, 4) * 100}%\"\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "ZWMupK2Sng-5" }, "source": [ "Let's automate this feedback loop and download the results once they are ready!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "oarRyxv4jV6B", "outputId": "1639a6a0-8a3a-450e-e11e-da9974cdded0" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Status: SUCCESS\n", "Total requests: 100\n", "Failed requests: 0\n", "Successful requests: 100\n", "Percent done: 100.0%\n" ] } ], "source": [ "import time\n", "from IPython.display import clear_output\n", "\n", "while retrieved_job.status in [\"QUEUED\", \"RUNNING\"]:\n", " retrieved_job = client.batch.jobs.get(job_id=created_job.id)\n", "\n", " clear_output(wait=True) # Clear the previous output ( User Friendly )\n", " print(f\"Status: {retrieved_job.status}\")\n", " print(f\"Total requests: {retrieved_job.total_requests}\")\n", " print(f\"Failed requests: {retrieved_job.failed_requests}\")\n", " print(f\"Successful requests: {retrieved_job.succeeded_requests}\")\n", " print(\n", " f\"Percent done: {round((retrieved_job.succeeded_requests + retrieved_job.failed_requests) / retrieved_job.total_requests, 4) * 100}%\"\n", " )\n", " time.sleep(2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "UKH-dYmxhBjX", "outputId": "36f505eb-e04e-45fe-cee4-abb9e3e24433" }, "outputs": [ { "data": { "text/plain": [ "<Response [200 OK]>" ] }, "execution_count": 49, "metadata": {}, "output_type": "execute_result" } ], "source": [ "client.files.download(file_id=retrieved_job.output_file)" ] }, { "cell_type": "markdown", "metadata": { "id": "uGyDanavnq0C" }, "source": [ "Done! With this method, you can perform OCR tasks in bulk in a very cost-effective way." ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.1" } }, "nbformat": 4, "nbformat_minor": 0 }