
Custom Structured Outputs

Custom Structured Outputs allow you to ensure the model provides an answer in a very specific JSON format by supplying a clear JSON schema. This approach allows the model to consistently deliver responses with the correct typing and keywords.

Here is an example of how to achieve this using the Mistral AI client and Pydantic:

Define the Data Model

First, define the structure of the output using a Pydantic model:

from pydantic import BaseModel

class Book(BaseModel):
    name: str
    authors: list[str]
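
As a quick local check, independent of any API call, Pydantic (v2) enforces the declared types whenever the model is instantiated; a minimal sketch:

```python
from pydantic import BaseModel, ValidationError

class Book(BaseModel):
    name: str
    authors: list[str]

# A well-formed payload validates into a typed object.
book = Book(name="To Kill a Mockingbird", authors=["Harper Lee"])

# Mis-typed fields are rejected: in Pydantic v2, an int is not
# coerced to str, and a bare str is not coerced to list[str].
try:
    Book(name=42, authors="Harper Lee")
except ValidationError:
    pass  # both fields fail validation
```

This same validation is what guarantees that the parsed response you receive from the SDK has the correct typing.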

Start the completion

Next, use the Mistral AI Python client to make a request, setting response_format to the corresponding Pydantic model to ensure the response adheres to the defined structure:

import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "ministral-8b-latest"

client = Mistral(api_key=api_key)

chat_response = client.chat.parse(
    model=model,
    messages=[
        {
            "role": "system",
            "content": "Extract the books information."
        },
        {
            "role": "user",
            "content": "I recently read 'To Kill a Mockingbird' by Harper Lee."
        },
    ],
    response_format=Book,
    max_tokens=256,
    temperature=0
)

In this example, the Book class defines the structure of the output, ensuring that the model's response adheres to the specified format.

There are two types of possible outputs that are easily accessible via our SDK:

  1. The raw JSON output, accessed with chat_response.choices[0].message.content:

{
  "authors": ["Harper Lee"],
  "name": "To Kill a Mockingbird"
}

  2. The parsed output, converted into a Pydantic object with chat_response.choices[0].message.parsed. In this case, it is a Book instance:

name='To Kill a Mockingbird' authors=['Harper Lee']
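
The two forms are directly related: the parsed object is what you get by validating the raw JSON content against the Book model. A standalone Pydantic sketch, assuming the raw content shown above:

```python
from pydantic import BaseModel

class Book(BaseModel):
    name: str
    authors: list[str]

# The raw JSON string, as returned in message.content.
raw = '{"authors": ["Harper Lee"], "name": "To Kill a Mockingbird"}'

# Validating it against the model yields the same object as message.parsed.
book = Book.model_validate_json(raw)
print(book)  # name='To Kill a Mockingbird' authors=['Harper Lee']
```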

Here is an example of how to achieve this using the Mistral AI client and Zod:

Define the Data Model

First, define the structure of the output using Zod:

import { z } from 'zod';

const Book = z.object({
  name: z.string(),
  authors: z.array(z.string()),
});

Start the completion

Next, use the Mistral AI TypeScript client to make a request and ensure the response adheres to the defined structure using responseFormat set to the corresponding Zod schema:

import { Mistral } from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({ apiKey: apiKey });

const chatResponse = await client.chat.parse({
  model: 'ministral-8b-latest',
  messages: [
    {
      role: 'system',
      content: 'Extract the books information.',
    },
    {
      role: 'user',
      content: "I recently read 'To Kill a Mockingbird' by Harper Lee.",
    },
  ],
  responseFormat: Book,
  maxTokens: 256,
  temperature: 0,
});

In this example, the Book schema defines the structure of the output, ensuring that the model's response adheres to the specified format.

There are two types of possible outputs that are easily accessible via our SDK:

  1. The raw JSON output, accessed with chatResponse.choices[0].message.content:

{
  "authors": ["Harper Lee"],
  "name": "To Kill a Mockingbird"
}

  2. The parsed output, converted into a TypeScript object with chatResponse.choices[0].message.parsed. In this case, it is a Book object:

{ name: 'To Kill a Mockingbird', authors: [ 'Harper Lee' ] }

The request is structured to ensure that the response adheres to the specified custom JSON schema. The schema defines the structure of a Book object with name and authors properties.

curl --location "https://api.mistral.ai/v1/chat/completions" \
     --header 'Content-Type: application/json' \
     --header 'Accept: application/json' \
     --header "Authorization: Bearer $MISTRAL_API_KEY" \
     --data '{
    "model": "ministral-8b-latest",
    "messages": [
     {
        "role": "system",
        "content": "Extract the books information."
      },
     {
        "role": "user",
        "content": "I recently read To Kill a Mockingbird by Harper Lee."
      }
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "schema": {
          "properties": {
            "name": {
              "title": "Name",
              "type": "string"
            },
            "authors": {
              "items": {
                "type": "string"
              },
              "title": "Authors",
              "type": "array"
            }
          },
          "required": ["name", "authors"],
          "title": "Book",
          "type": "object",
          "additionalProperties": false
        },
        "name": "book",
        "strict": true
      }
    },
    "max_tokens": 256,
    "temperature": 0
  }'
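
Rather than writing the schema by hand, it can be generated from the Pydantic model defined earlier; a sketch using Pydantic v2's model_json_schema() (note that "additionalProperties": false must be added manually, since Pydantic omits it by default):

```python
from pydantic import BaseModel

class Book(BaseModel):
    name: str
    authors: list[str]

# Generate the JSON schema from the model.
schema = Book.model_json_schema()
schema["additionalProperties"] = False  # required for strict mode; Pydantic omits it

# Assemble the response_format payload used in the request body above.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "schema": schema,
        "name": "book",
        "strict": True,
    },
}
```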

:::note To better guide the model, the following is always prepended to the system prompt by default when using this method:

Your output should be an instance of a JSON object following this schema: {{ json_schema }}

However, it is recommended to add more explanations and to iterate on your system prompt to better clarify the expected schema and behavior. :::

FAQ

Q: Which models support custom Structured Outputs?
A: All currently available models except for codestral-mamba are supported.