
LlamaIndex MistralAI Finetuning

Details

File: third_party/LlamaIndex/llamaindex_mistralai_finetuning.ipynb

Type: Jupyter Notebook

Use Cases: Finetuning

Integrations: LlamaIndex

Content

Notebook content (JSON format):

{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/mistralai/cookbook/blob/main/llamaindex_mistralai_finetuning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine Tuning MistralAI models using Finetuning API and LlamaIndex\n",
    "\n",
    "In this notebook, we walk through an example of fine-tuning `open-mistral-7b` using MistralAI finetuning API.\n",
    "\n",
    "Specifically, we attempt to distill `mistral-large-latest`'s knowledge, by generating training data with `mistral-large-latest` to then fine-tune `open-mistral-7b`.\n",
    "\n",
    "All training data is generated using two different sections of our index data, creating both a training and evalution set.\n",
    "\n",
    "We will use `mistral-small-largest` to create synthetic training and evaluation questions to avoid any biases towards `open-mistral-7b` and `mistral-large-latest`.\n",
    "\n",
    "We then finetune with our `MistraAIFinetuneEngine` wrapper abstraction.\n",
    "\n",
    "Evaluation is done using the `ragas` library, which we will detail later on.\n",
    "\n",
    "We can monitor the metrics on `Weights & Biases`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install llama-index-finetuning\n",
    "%pip install llama-index-finetuning-callbacks\n",
    "%pip install llama-index-llms-mistralai\n",
    "%pip install llama-index-embeddings-mistralai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# !pip install llama-index pypdf sentence-transformers ragas"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# NESTED ASYNCIO LOOP NEEDED TO RUN ASYNC IN A NOTEBOOK\n",
    "import nest_asyncio\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Set API Key"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"MISTRAL_API_KEY\"] = \"<YOUR MISTRALAI API KEY>\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Download Data\n",
    "\n",
    "Here, we first down load the PDF that we will use to generate training data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n",
      "                                 Dload  Upload   Total   Spent    Left  Speed\n",
      "100 20.7M  100 20.7M    0     0   249k      0  0:01:25  0:01:25 --:--:--  266k0  0:01:31  0:00:25  0:01:06  267k 251k\n"
     ]
    }
   ],
   "source": [
    "!curl https://www.ipcc.ch/report/ar6/wg2/downloads/report/IPCC_AR6_WGII_Chapter03.pdf --output IPCC_AR6_WGII_Chapter03.pdf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next step is generating a training and eval dataset.\n",
    "\n",
    "We will generate 40 training and 40 evaluation questions on different sections of the PDF we downloaded.\n",
    "\n",
    "We can use `open-mistral-7b` on the eval questions to get our baseline performance.\n",
    "\n",
    "Then, we will use `mistral-large-latest` on the train questions to generate our training data. \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import SimpleDirectoryReader\n",
    "from llama_index.core.evaluation import DatasetGenerator\n",
    "\n",
    "documents = SimpleDirectoryReader(\n",
    "    input_files=[\"IPCC_AR6_WGII_Chapter03.pdf\"]\n",
    ").load_data()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Setup LLM and Embedding Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.mistralai import MistralAI\n",
    "from llama_index.embeddings.mistralai import MistralAIEmbedding\n",
    "\n",
    "open_mistral = MistralAI(\n",
    "    model=\"open-mistral-7b\", temperature=0.1\n",
    ")  # model to be finetuning\n",
    "mistral_small = MistralAI(\n",
    "    model=\"mistral-small-latest\", temperature=0.1\n",
    ")  # model for question generation\n",
    "embed_model = MistralAIEmbedding()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training and Evaluation Data Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/ravithejad/Desktop/llamaindex/lib/python3.9/site-packages/llama_index/core/evaluation/dataset_generation.py:215: DeprecationWarning: Call to deprecated class DatasetGenerator. (Deprecated in favor of `RagDatasetGenerator` which should be used instead.)\n",
      "  return cls(\n"
     ]
    }
   ],
   "source": [
    "question_gen_query = (\n",
    "    \"You are a Teacher/ Professor. Your task is to setup \"\n",
    "    \"a quiz/examination. Using the provided context, formulate \"\n",
    "    \"a single question that captures an important fact from the \"\n",
    "    \"context. Restrict the question to the context information provided.\"\n",
    "    \"You should generate only question and nothing else.\"\n",
    ")\n",
    "\n",
    "dataset_generator = DatasetGenerator.from_documents(\n",
    "    documents[:80],\n",
    "    question_gen_query=question_gen_query,\n",
    "    llm=mistral_small,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will generate 40 training and 40 evaluation questions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated  40  questions\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/ravithejad/Desktop/llamaindex/lib/python3.9/site-packages/llama_index/core/evaluation/dataset_generation.py:312: DeprecationWarning: Call to deprecated class QueryResponseDataset. (Deprecated in favor of `LabelledRagDataset` which should be used instead.)\n",
      "  return QueryResponseDataset(queries=queries, responses=responses_dict)\n"
     ]
    }
   ],
   "source": [
    "# Note: This might take sometime.\n",
    "questions = dataset_generator.generate_questions_from_nodes(num=40)\n",
    "print(\"Generated \", len(questions), \" questions\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['What is the estimated relative human dependence on marine ecosystems for coastal protection, nutrition, fisheries economic benefits, and overall, as depicted in Figure 3.1?',\n",
       " 'What are the limitations of the overall index mentioned in the context, and how were values for reference regions computed?',\n",
       " 'What are the primary non-climate drivers that alter marine ecosystems and their services, as mentioned in the context?',\n",
       " 'What are the main challenges in detecting and attributing climate impacts on marine-dependent human systems, according to the provided context?',\n",
       " 'What new insights have been gained from experimental evidence about evolutionary adaptation, particularly in relation to eukaryotic organisms and their limited adaptation options to climate change, as mentioned in Section 3.3.4 of the IPCC AR6 WGII Chapter 03?']"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "questions[10:15]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open(\"train_questions.txt\", \"w\") as f:\n",
    "    for question in questions:\n",
    "        f.write(question + \"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, lets generate questions on a completely different set of documents, in order to create our eval dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/ravithejad/Desktop/llamaindex/lib/python3.9/site-packages/llama_index/core/evaluation/dataset_generation.py:215: DeprecationWarning: Call to deprecated class DatasetGenerator. (Deprecated in favor of `RagDatasetGenerator` which should be used instead.)\n",
      "  return cls(\n"
     ]
    }
   ],
   "source": [
    "dataset_generator = DatasetGenerator.from_documents(\n",
    "    documents[80:],\n",
    "    question_gen_query=question_gen_query,\n",
    "    llm=mistral_small,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Generated  40  questions\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/ravithejad/Desktop/llamaindex/lib/python3.9/site-packages/llama_index/core/evaluation/dataset_generation.py:312: DeprecationWarning: Call to deprecated class QueryResponseDataset. (Deprecated in favor of `LabelledRagDataset` which should be used instead.)\n",
      "  return QueryResponseDataset(queries=queries, responses=responses_dict)\n"
     ]
    }
   ],
   "source": [
    "# Note: This might take sometime.\n",
    "questions = dataset_generator.generate_questions_from_nodes(num=40)\n",
    "print(\"Generated \", len(questions), \" questions\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open(\"eval_questions.txt\", \"w\") as f:\n",
    "    for question in questions:\n",
    "        f.write(question + \"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Initial Eval with `open-mistral-7b` Query Engine\n",
    "\n",
    "For this eval, we will be using the [`ragas` evaluation library](https://github.com/explodinggradients/ragas).\n",
    "\n",
    "Ragas has a ton of evaluation metrics for RAG pipelines, and you can read about them [here](https://github.com/explodinggradients/ragas/blob/main/docs/metrics.md).\n",
    "\n",
    "For this notebook, we will be using the following two metrics\n",
    "\n",
    "- `answer_relevancy` - This measures how relevant is the generated answer to the prompt. If the generated answer is incomplete or contains redundant information the score will be low. This is quantified by working out the chance of an LLM generating the given question using the generated answer. Values range (0,1), higher the better.\n",
    "- `faithfulness` - This measures the factual consistency of the generated answer against the given context. This is done using a multi step paradigm that includes creation of statements from the generated answer followed by verifying each of these statements against the context. The answer is scaled to (0,1) range. Higher the better."
   ]
  },
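  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next cell is a minimal conceptual sketch of the idea behind `answer_relevancy`: generate candidate questions from the answer, embed them, and compare them to the original question with cosine similarity. The `embed` helper and the hard-coded strings are placeholders for illustration, not `ragas` internals."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Conceptual sketch of answer_relevancy (illustrative only).\n",
    "import numpy as np\n",
    "\n",
    "\n",
    "def embed(text: str) -> np.ndarray:\n",
    "    # Placeholder embedding; a real metric uses an embedding model here.\n",
    "    rng = np.random.default_rng(abs(hash(text)) % (2**32))\n",
    "    return rng.normal(size=8)\n",
    "\n",
    "\n",
    "def cosine(a: np.ndarray, b: np.ndarray) -> float:\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "\n",
    "original_question = \"What drives marine heatwaves?\"\n",
    "# In ragas, an LLM generates candidate questions from the answer.\n",
    "generated_questions = [\n",
    "    \"What causes marine heatwaves?\",\n",
    "    \"Which factors drive ocean warming events?\",\n",
    "]\n",
    "\n",
    "q_emb = embed(original_question)\n",
    "score = np.mean([cosine(q_emb, embed(g)) for g in generated_questions])\n",
    "print(f\"answer_relevancy (sketch): {score:.3f}\")"
   ]
  },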
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "questions = []\n",
    "with open(\"eval_questions.txt\", \"r\") as f:\n",
    "    for line in f:\n",
    "        questions.append(line.strip())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "from llama_index.embeddings.mistralai import MistralAIEmbedding\n",
    "\n",
    "# limit the context window to 2048 tokens so that refine is used\n",
    "from llama_index.core import Settings\n",
    "\n",
    "Settings.context_window = 2048\n",
    "Settings.llm = open_mistral\n",
    "Settings.embed_model = MistralAIEmbedding()\n",
    "\n",
    "index = VectorStoreIndex.from_documents(\n",
    "    documents,\n",
    ")\n",
    "\n",
    "query_engine = index.as_query_engine(similarity_top_k=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "contexts = []\n",
    "answers = []\n",
    "\n",
    "for question in questions:\n",
    "    response = query_engine.query(question)\n",
    "    contexts.append([x.node.get_content() for x in response.source_nodes])\n",
    "    answers.append(str(response))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# We will use OpenAI LLM for evaluation using RAGAS\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"<YOUR OPENAI API KEY>\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating: 100%|██████████| 80/80 [01:12<00:00,  1.10it/s]\n"
     ]
    }
   ],
   "source": [
    "from datasets import Dataset\n",
    "from ragas import evaluate\n",
    "from ragas.metrics import answer_relevancy, faithfulness\n",
    "\n",
    "ds = Dataset.from_dict(\n",
    "    {\n",
    "        \"question\": questions,\n",
    "        \"answer\": answers,\n",
    "        \"contexts\": contexts,\n",
    "    }\n",
    ")\n",
    "\n",
    "result = evaluate(ds, [answer_relevancy, faithfulness])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check the results before finetuning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'answer_relevancy': 0.8151, 'faithfulness': 0.8360}\n"
     ]
    }
   ],
   "source": [
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `mistral-large-latest` to Collect Training Data\n",
    "\n",
    "Here, we use `mistral-large-latest` to collect data that we want `open-mistral-7b` to finetune on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.mistralai import MistralAI\n",
    "from llama_index.finetuning.callbacks import MistralAIFineTuningHandler\n",
    "from llama_index.core.callbacks import CallbackManager\n",
    "\n",
    "finetuning_handler = MistralAIFineTuningHandler()\n",
    "callback_manager = CallbackManager([finetuning_handler])\n",
    "\n",
    "llm = MistralAI(model=\"mistral-large-latest\", temperature=0.1)\n",
    "llm.callback_manager = callback_manager"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "questions = []\n",
    "with open(\"train_questions.txt\", \"r\") as f:\n",
    "    for line in f:\n",
    "        questions.append(line.strip())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "Settings.embed_model = MistralAIEmbedding()\n",
    "Settings.llm = llm\n",
    "\n",
    "index = VectorStoreIndex.from_documents(\n",
    "    documents,\n",
    ")\n",
    "\n",
    "query_engine = index.as_query_engine(similarity_top_k=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:   0%|          | 0/40 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:   2%|▎         | 1/40 [00:02<01:43,  2.66s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:   5%|▌         | 2/40 [00:13<04:38,  7.34s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:   8%|▊         | 3/40 [00:19<04:08,  6.71s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  10%|█         | 4/40 [00:24<03:35,  5.98s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  12%|█▎        | 5/40 [00:28<03:04,  5.26s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  15%|█▌        | 6/40 [00:33<02:55,  5.17s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  18%|█▊        | 7/40 [00:42<03:35,  6.52s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  20%|██        | 8/40 [00:45<02:50,  5.32s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  22%|██▎       | 9/40 [00:51<02:58,  5.75s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  25%|██▌       | 10/40 [00:55<02:29,  4.99s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  28%|██▊       | 11/40 [01:00<02:24,  5.00s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  30%|███       | 12/40 [01:03<02:08,  4.59s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  32%|███▎      | 13/40 [01:06<01:45,  3.90s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  35%|███▌      | 14/40 [01:15<02:21,  5.44s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  38%|███▊      | 15/40 [01:18<02:00,  4.82s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  40%|████      | 16/40 [01:29<02:42,  6.76s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  42%|████▎     | 17/40 [01:37<02:45,  7.20s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  45%|████▌     | 18/40 [01:42<02:18,  6.31s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  48%|████▊     | 19/40 [01:47<02:03,  5.89s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  50%|█████     | 20/40 [01:48<01:33,  4.66s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  52%|█████▎    | 21/40 [01:53<01:25,  4.50s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  55%|█████▌    | 22/40 [01:55<01:11,  3.98s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  57%|█████▊    | 23/40 [01:58<00:59,  3.47s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  60%|██████    | 24/40 [02:00<00:48,  3.03s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  62%|██████▎   | 25/40 [02:04<00:50,  3.38s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  65%|██████▌   | 26/40 [02:06<00:42,  3.03s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  68%|██████▊   | 27/40 [02:22<01:29,  6.89s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  70%|███████   | 28/40 [02:25<01:08,  5.73s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  72%|███████▎  | 29/40 [02:32<01:08,  6.24s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  75%|███████▌  | 30/40 [02:34<00:49,  4.99s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  78%|███████▊  | 31/40 [02:37<00:37,  4.19s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  80%|████████  | 32/40 [02:42<00:36,  4.54s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  82%|████████▎ | 33/40 [02:46<00:30,  4.39s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  85%|████████▌ | 34/40 [02:48<00:22,  3.67s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  88%|████████▊ | 35/40 [02:53<00:20,  4.01s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  90%|█████████ | 36/40 [02:58<00:17,  4.28s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  92%|█████████▎| 37/40 [03:13<00:22,  7.45s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  95%|█████████▌| 38/40 [03:16<00:12,  6.20s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions:  98%|█████████▊| 39/40 [03:18<00:05,  5.06s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing questions: 100%|██████████| 40/40 [03:30<00:00,  5.25s/it]\n"
     ]
    }
   ],
   "source": [
    "from tqdm import tqdm\n",
    "\n",
    "contexts = []\n",
    "answers = []\n",
    "\n",
    "for question in tqdm(questions, desc=\"Processing questions\"):\n",
    "    response = query_engine.query(question)\n",
    "    contexts.append(\n",
    "        \"\\n\".join([x.node.get_content() for x in response.source_nodes])\n",
    "    )\n",
    "    answers.append(str(response))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "from typing import List\n",
    "\n",
    "\n",
    "def convert_data_jsonl_format(\n",
    "    questions: List[str],\n",
    "    contexts: List[str],\n",
    "    answers: List[str],\n",
    "    output_file: str,\n",
    ") -> None:\n",
    "    with open(output_file, \"w\") as outfile:\n",
    "        for context, question, answer in zip(contexts, questions, answers):\n",
    "            message_dict = {\n",
    "                \"messages\": [\n",
    "                    {\n",
    "                        \"role\": \"assistant\",\n",
    "                        \"content\": \"You are a helpful assistant to answer user queries based on provided context.\",\n",
    "                    },\n",
    "                    {\n",
    "                        \"role\": \"user\",\n",
    "                        \"content\": f\"context: {context} \\n\\n question: {question}\",\n",
    "                    },\n",
    "                    {\"role\": \"assistant\", \"content\": answer},\n",
    "                ]\n",
    "            }\n",
    "            # Write the JSON object in a single line\n",
    "            json.dump(message_dict, outfile)\n",
    "            # Add a newline character after each JSON object\n",
    "            outfile.write(\"\\n\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "convert_data_jsonl_format(questions, contexts, answers, \"training.jsonl\")"
   ]
  },
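  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each line of `training.jsonl` is a self-contained chat example. An illustrative record (with placeholder context, question, and answer text) looks like:\n",
    "\n",
    "```json\n",
    "{\"messages\": [{\"role\": \"system\", \"content\": \"You are a helpful assistant to answer user queries based on provided context.\"}, {\"role\": \"user\", \"content\": \"context: <retrieved context> \\n\\n question: <question>\"}, {\"role\": \"assistant\", \"content\": \"<answer from mistral-large-latest>\"}]}\n",
    "```"
   ]
  },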
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Create `MistralAIFinetuneEngine`\n",
    "\n",
    "We create an `MistralAIFinetuneEngine`: the finetune engine will take care of launching a finetuning job, and returning an LLM model that you can directly plugin to the rest of LlamaIndex workflows.\n",
    "\n",
    "We use the default constructor, but we can also directly pass in our finetuning_handler into this engine with the `from_finetuning_handler` class method."
   ]
  },
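  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the cell below is a commented-out sketch of the `from_finetuning_handler` alternative. The parameter names are an assumption (modeled on the other finetune engines); check `llama_index.finetuning.mistralai` for the actual signature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch only: build the engine from the captured finetuning_handler.\n",
    "# Parameter names are assumed; see llama_index.finetuning.mistralai.\n",
    "# finetuning_engine = MistralAIFinetuneEngine.from_finetuning_handler(\n",
    "#     finetuning_handler,\n",
    "#     base_model=\"open-mistral-7b\",\n",
    "#     data_path=\"training.jsonl\",\n",
    "# )"
   ]
  },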
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llms.mistralai import MistralAI\n",
    "from llama_index.finetuning.callbacks import MistralAIFineTuningHandler\n",
    "from llama_index.core.callbacks import CallbackManager\n",
    "from llama_index.finetuning.mistralai import MistralAIFinetuneEngine\n",
    "\n",
    "# Wandb for monitorning the training logs\n",
    "wandb_integration_dict = {\n",
    "    \"project\": \"mistralai\",\n",
    "    \"run_name\": \"finetuning\",\n",
    "    \"api_key\": \"3a486c0d4066396b07b0ebf6826026c24e981f37\",\n",
    "}\n",
    "\n",
    "finetuning_engine = MistralAIFinetuneEngine(\n",
    "    base_model=\"open-mistral-7b\",\n",
    "    training_path=\"training.jsonl\",\n",
    "    # validation_path=\"<validation file>\", # validation file is optional\n",
    "    verbose=True,\n",
    "    training_steps=5,\n",
    "    learning_rate=0.0001,\n",
    "    wandb_integration_dict=wandb_integration_dict,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# starts the finetuning of open-mistral-7b\n",
    "\n",
    "finetuning_engine.finetune()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "DetailedJob(id='19f5943a-3000-4568-b227-e45c36ac15f1', hyperparameters=TrainingParameters(training_steps=5, learning_rate=0.0001), fine_tuned_model=None, model='open-mistral-7b', status='RUNNING', job_type='FT', created_at=1718613228, modified_at=1718613229, training_files=['07270085-65e6-441e-b99d-ef6f75dd5a30'], validation_files=[], object='job', integrations=[WandbIntegration(type='wandb', project='mistralai', name=None, run_name='finetuning')], events=[Event(name='status-updated', data={'status': 'RUNNING'}, created_at=1718613229), Event(name='status-updated', data={'status': 'QUEUED'}, created_at=1718613228)], checkpoints=[], estimated_start_time=None)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# This will show the current status of the job - 'RUNNING'\n",
    "finetuning_engine.get_current_job()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "DetailedJob(id='19f5943a-3000-4568-b227-e45c36ac15f1', hyperparameters=TrainingParameters(training_steps=5, learning_rate=0.0001), fine_tuned_model='ft:open-mistral-7b:35c6fa92:20240617:19f5943a', model='open-mistral-7b', status='SUCCESS', job_type='FT', created_at=1718613228, modified_at=1718613306, training_files=['07270085-65e6-441e-b99d-ef6f75dd5a30'], validation_files=[], object='job', integrations=[WandbIntegration(type='wandb', project='mistralai', name=None, run_name='finetuning')], events=[Event(name='status-updated', data={'status': 'SUCCESS'}, created_at=1718613306), Event(name='status-updated', data={'status': 'RUNNING'}, created_at=1718613229), Event(name='status-updated', data={'status': 'QUEUED'}, created_at=1718613228)], checkpoints=[], estimated_start_time=None)"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# This will show the current status of the job - 'SUCCESS'\n",
    "finetuning_engine.get_current_job()"
   ]
  },
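  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of re-running the status cell by hand, a small polling loop works too (a sketch; it assumes `get_current_job()` keeps returning the job object with a `status` field, as shown in the outputs above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "# Poll until the job leaves the QUEUED/RUNNING states (sketch only)\n",
    "while True:\n",
    "    job = finetuning_engine.get_current_job()\n",
    "    print(job.status)\n",
    "    if job.status not in (\"QUEUED\", \"RUNNING\"):\n",
    "        break\n",
    "    time.sleep(10)"
   ]
  },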
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: GET https://api.mistral.ai/v1/fine_tuning/jobs/19f5943a-3000-4568-b227-e45c36ac15f1 \"HTTP/1.1 200 OK\"\n",
      "INFO:llama_index.finetuning.mistralai.base:status of the job_id: 19f5943a-3000-4568-b227-e45c36ac15f1 is SUCCESS\n",
      "status of the job_id: 19f5943a-3000-4568-b227-e45c36ac15f1 is SUCCESS\n"
     ]
    }
   ],
   "source": [
    "ft_llm = finetuning_engine.get_finetuned_model(temperature=0.1)"
   ]
  },
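  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick smoke test (separate from the evaluation below), we can send a single prompt to the fine-tuned model. The question here is just an illustrative example about the IPCC chapter we trained on."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One-off completion against the fine-tuned model (sanity check only)\n",
    "response = ft_llm.complete(\n",
    "    \"What are the main climate-driven risks to ocean ecosystems?\"\n",
    ")\n",
    "print(response)"
   ]
  },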
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluation\n",
    "\n",
    "Once the finetuned model is created, the next step is running our fine-tuned model on our eval dataset again to measure any performance increase."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import Settings\n",
    "from llama_index.llms.mistralai import MistralAI\n",
    "from llama_index.embeddings.mistralai import MistralAIEmbedding\n",
    "\n",
    "# Setting up finetuned llm\n",
    "Settings.llm = ft_llm\n",
    "Settings.context_window = (\n",
    "    2048  # limit the context window artifically to test refine process\n",
    ")\n",
    "Settings.embed_model = MistralAIEmbedding()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "questions = []\n",
    "with open(\"eval_questions.txt\", \"r\") as f:\n",
    "    for line in f:\n",
    "        questions.append(line.strip())"
   ]
  },
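  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check that the evaluation questions loaded as expected (40 in this run, matching the progress bars below):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Confirm the evaluation set loaded correctly\n",
    "print(f\"Loaded {len(questions)} evaluation questions\")"
   ]
  },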
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "index = VectorStoreIndex.from_documents(documents)\n",
    "\n",
    "query_engine = index.as_query_engine(similarity_top_k=2, llm=ft_llm)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing Questions:   0%|          | 0/40 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Processing Questions: 100%|██████████| 40/40 [02:44<00:00,  4.11s/it]\n"
     ]
    }
   ],
   "source": [
    "contexts = []\n",
    "answers = []\n",
    "\n",
    "for question in tqdm(questions, desc=\"Processing Questions\"):\n",
    "    response = query_engine.query(question)\n",
    "    contexts.append([x.node.get_content() for x in response.source_nodes])\n",
    "    answers.append(str(response))"
   ]
  },
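  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before scoring, it can help to spot-check one question/answer pair by eye (a minimal sanity check, not part of the metric computation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the first question, its generated answer, and the retrieved context\n",
    "print(\"Q:\", questions[0])\n",
    "print(\"A:\", answers[0])\n",
    "print(\"Retrieved chunks:\", len(contexts[0]))"
   ]
  },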
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:   0%|          | 0/80 [00:00<?, ?it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:   1%|▏         | 1/80 [00:05<06:55,  5.26s/it]"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  41%|████▏     | 33/80 [00:27<00:19,  2.43it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  42%|████▎     | 34/80 [00:32<00:36,  1.25it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  49%|████▉     | 39/80 [00:33<00:23,  1.73it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  55%|█████▌    | 44/80 [00:34<00:15,  2.31it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  56%|█████▋    | 45/80 [00:35<00:17,  1.98it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  57%|█████▊    | 46/80 [00:39<00:30,  1.12it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  62%|██████▎   | 50/80 [00:40<00:19,  1.54it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  68%|██████▊   | 54/80 [00:44<00:20,  1.26it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  71%|███████▏  | 57/80 [00:45<00:15,  1.51it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  76%|███████▋  | 61/80 [00:47<00:11,  1.66it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  78%|███████▊  | 62/80 [00:49<00:13,  1.31it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  82%|████████▎ | 66/80 [00:50<00:07,  1.76it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  88%|████████▊ | 70/80 [00:52<00:05,  1.86it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  89%|████████▉ | 71/80 [00:52<00:04,  1.87it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  90%|█████████ | 72/80 [00:55<00:06,  1.32it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  91%|█████████▏| 73/80 [00:57<00:07,  1.05s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  92%|█████████▎| 74/80 [00:58<00:05,  1.06it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  94%|█████████▍| 75/80 [01:01<00:07,  1.41s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  95%|█████████▌| 76/80 [01:03<00:06,  1.61s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  96%|█████████▋| 77/80 [01:07<00:06,  2.27s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  98%|█████████▊| 78/80 [01:16<00:07,  3.94s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating:  99%|█████████▉| 79/80 [01:17<00:03,  3.07s/it]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Evaluating: 100%|██████████| 80/80 [01:19<00:00,  1.01it/s]\n"
     ]
    }
   ],
   "source": [
    "ds = Dataset.from_dict(\n",
    "    {\n",
    "        \"question\": questions,\n",
    "        \"answer\": answers,\n",
    "        \"contexts\": contexts,\n",
    "    }\n",
    ")\n",
    "\n",
    "result = evaluate(ds, [answer_relevancy, faithfulness])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check the results with finetuned model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'answer_relevancy': 0.8016, 'faithfulness': 0.8924}\n"
     ]
    }
   ],
   "source": [
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Observation:\n",
    "\n",
    "`open-mistral-7b` : 'answer_relevancy': **0.8151**, 'faithfulness': **0.8360**\n",
    "\n",
    "`open-mistral-7b-finetuned` : 'answer_relevancy': **0.8016**, 'faithfulness': **0.8924**\n",
    "\n",
    "As you can see there is an increase in faithfulness score and small drop in answer relevancy."
   ]
  },
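  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For easier side-by-side reading, here is a minimal sketch that tabulates the two runs; the numbers are copied from the evaluations above, and `pandas` is assumed to be available in the environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Scores copied from the two ragas runs above.\n",
    "scores = pd.DataFrame(\n",
    "    {\n",
    "        \"answer_relevancy\": [0.8151, 0.8016],\n",
    "        \"faithfulness\": [0.8360, 0.8924],\n",
    "    },\n",
    "    index=[\"open-mistral-7b\", \"open-mistral-7b-finetuned\"],\n",
    ")\n",
    "\n",
    "# A delta row makes the direction of change explicit.\n",
    "scores.loc[\"delta\"] = scores.iloc[1] - scores.iloc[0]\n",
    "scores"
   ]
  },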
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exploring Differences\n",
    "\n",
    "Let's quickly compare the differences in responses, to demonstrate that fine tuning did indeed change something."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n"
     ]
    }
   ],
   "source": [
    "from llama_index.core import VectorStoreIndex\n",
    "\n",
    "index = VectorStoreIndex.from_documents(documents)"
   ]
  },
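  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rebuilding the index re-embeds every document chunk, which is what produced the burst of embedding calls above. As an optional aside, here is a minimal sketch of persisting the index to disk and reloading it on later runs; the `./storage_mistral` directory name is just an illustrative choice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core import StorageContext, load_index_from_storage\n",
    "\n",
    "# Persist the vector index to disk (the directory name is illustrative).\n",
    "index.storage_context.persist(persist_dir=\"./storage_mistral\")\n",
    "\n",
    "# On later runs, reload instead of re-embedding every document.\n",
    "storage_context = StorageContext.from_defaults(persist_dir=\"./storage_mistral\")\n",
    "index = load_index_from_storage(storage_context)"
   ]
  },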
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "questions = []\n",
    "with open(\"eval_questions.txt\", \"r\") as f:\n",
    "    for line in f:\n",
    "        questions.append(line.strip())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "What are some examples of economic losses due to disruptions in ocean and coastal ecosystem services partly attributable to climate change, as mentioned in the context?\n"
     ]
    }
   ],
   "source": [
    "print(questions[20])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Original `open-mistral-7b  `"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.core.response.notebook_utils import display_response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "data": {
      "text/markdown": [
       "**`Final Response:`** Economic losses due to disruptions in ocean and coastal ecosystem services partly attributable to climate change, as mentioned in the context, include:\n",
       "\n",
       "1. The estimated loss of 800 million USD in the farmed-salmon industry in Chile due to a climate-driven harmful algal bloom.\n",
       "2. The closure of the Dungeness crab and razor clam fishery in the USA due to a climate-driven algal bloom, which harmed 84% of surveyed residents from 16 California coastal communities.\n",
       "3. Substantial economic losses for fisheries resulting from recent climate-driven harmful algal blooms and marine pathogen outbreaks in Asia, North America, and South America.\n",
       "4. Declines in tourism and real estate values, associated with climate-driven harmful algal blooms, in the USA, France, and England."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "query_engine = index.as_query_engine(llm=open_mistral)\n",
    "\n",
    "response = query_engine.query(questions[20])\n",
    "\n",
    "display_response(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fine-Tuned `open-mistral-7b`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/embeddings \"HTTP/1.1 200 OK\"\n",
      "INFO:httpx:HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
      "HTTP Request: POST https://api.mistral.ai/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
     ]
    },
    {
     "data": {
      "text/markdown": [
       "**`Final Response:`** Economic losses due to disruptions in ocean and coastal ecosystem services partly attributable to climate change have been recorded in various industries and regions. For instance, substantial economic losses for fisheries resulting from climate-driven harmful algal blooms and marine pathogen outbreaks have been reported in Asia, North America, and South America. A notable example is the 2016 event in Chile, which caused an estimated loss of 800 million USD in the farmed-salmon industry and led to regional government protests. Similarly, the recent closure of the Dungeness crab and razor clam fishery in the USA due to a climate-driven algal bloom harmed 84% of surveyed residents from 16 California coastal communities.\n",
       "\n",
       "In addition, declines in tourism and real estate values, associated with climate-driven harmful algal blooms, have been recorded in the USA, France, and England. These disruptions can have significant economic impacts on coastal communities, particularly those that heavily rely on tourism for income or have a strong cultural identity linked to the ocean.\n",
       "\n",
       "Moreover, changes in the abundance or quality of goods from the ocean, such as non-food products like dietary supplements, food preservatives, pharmaceuticals, biofuels, sponges, cosmetic products, jewellery coral, cultured pearls, and aquarium species, will also be affected by climate change. These changes can lead to economic losses for industries that rely on these resources.\n",
       "\n",
       "Finally, the vulnerability of communities to losses in marine ecosystem services varies within and among communities. However, climate-change impacts exacerbate existing inequalities already experienced by some communities, including Indigenous Peoples, Pacific Island countries and territories, and marginalized peoples, such as migrants and women in fisheries and mariculture. These inequities increase the risk to their fundamental human rights by disrupting livelihoods and food security, while leading to loss of social, economic, and cultural rights."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "query_engine = index.as_query_engine(llm=ft_llm)\n",
    "\n",
    "response = query_engine.query(questions[20])\n",
    "\n",
    "display_response(response)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Observation:\n",
    "\n",
    "As we can see, the fine-tuned model provides a more thorough response! This lines up with the increased faithfullness score from ragas, since the answer is more representative of the retrieved context."
   ]
  },
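  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To sanity-check that this holds beyond a single question, here is a small sketch that prints a few answers from each model side by side, reusing the `questions`, `open_mistral` and `ft_llm` objects defined earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Spot-check a few more evaluation questions with both models.\n",
    "base_engine = index.as_query_engine(llm=open_mistral)\n",
    "ft_engine = index.as_query_engine(llm=ft_llm)\n",
    "\n",
    "for q in questions[:3]:\n",
    "    print(\"Q:\", q)\n",
    "    print(\"base:\", base_engine.query(q))\n",
    "    print(\"ft:  \", ft_engine.query(q))\n",
    "    print(\"-\" * 80)"
   ]
  },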
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusion\n",
    "\n",
    "So, in conclusion, finetuning with only ~40 questions actually helped improve our eval scores!\n",
    "\n",
    "**answer_relevancy: 0.0.8151 -> 0.8016**\n",
    "\n",
    "The answer relevancy dips slightly but it's very small.\n",
    "\n",
    "**faithfulness: 0.8360 -> 0.8924**\n",
    "\n",
    "The faithfulness appears to have been improved! This mains the anwers given better fuffil the original question that was asked."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llamaindex",
   "language": "python",
   "name": "llamaindex"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}