import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
:::tip[ ]
Looking for La Plateforme? Head to [console.mistral.ai](https://console.mistral.ai/)
:::
## Account setup
- To get started, create a Mistral account or sign in at https://console.mistral.ai.
- Then, navigate to your "Organization" settings at https://admin.mistral.ai.
- To add your payment information and activate payments on your account, find the billing section under Administration.
- You can now manage all your Workspaces and Organization via this page.
- Return to https://console.mistral.ai once everything is settled.
- After that, go to the API keys page under your Workspace and create a new API key by clicking "Create new key". Make sure to copy the API key, save it securely, and do not share it with anyone.
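Once you have a key, a common pattern is to export it as an environment variable (e.g. `export MISTRAL_API_KEY="your-key-here"`) rather than hard-coding it. The helper below is a minimal sketch of reading the key from the environment and failing fast when it is missing; the function name is illustrative, not part of the SDK.

```python
import os

def load_api_key(env_var: str = "MISTRAL_API_KEY") -> str:
    """Read the API key from the environment; fail fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before calling the API")
    return key
```

Failing at startup with a clear message is easier to debug than an authentication error deep inside a request.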
## Getting started with Mistral AI API
Mistral AI API provides a seamless way for developers to integrate Mistral's state-of-the-art
models into their applications and production workflows with just a few lines of code.
Our API is currently available through La Plateforme.
You need to activate payments on your account to enable your API keys.
After a few moments, you will be able to use our chat
endpoint:
<Tabs>
  <TabItem value="python" label="python">

```python
import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-large-latest"

client = Mistral(api_key=api_key)

chat_response = client.chat.complete(
    model=model,
    messages=[
        {
            "role": "user",
            "content": "What is the best French cheese?",
        },
    ],
)

print(chat_response.choices[0].message.content)
```

  </TabItem>
  <TabItem value="typescript" label="typescript">

```typescript
import { Mistral } from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({apiKey: apiKey});

const chatResponse = await client.chat.complete({
  model: 'mistral-large-latest',
  messages: [{role: 'user', content: 'What is the best French cheese?'}],
});

console.log('Chat:', chatResponse.choices[0].message.content);
```

  </TabItem>
</Tabs>
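The SDKs above wrap a plain REST API. If you prefer not to use an SDK, you can call the endpoint over HTTP yourself; the sketch below assumes the `https://api.mistral.ai/v1/chat/completions` endpoint and only builds the request object, without sending it.

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, content: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request with a bearer-token header."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        "https://api.mistral.ai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request with `urllib.request.urlopen` (or any HTTP client) returns a JSON body whose shape mirrors the SDK response.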
To generate text embeddings using Mistral AI's embeddings API, make a request to the API
endpoint specifying the embedding model `mistral-embed` and providing a list of input texts.
The API will then return the corresponding embeddings as numerical vectors, which can be used for
further analysis or processing in NLP applications.
<Tabs>
  <TabItem value="python" label="python">

```python
import os
from mistralai import Mistral

api_key = os.environ["MISTRAL_API_KEY"]
model = "mistral-embed"

client = Mistral(api_key=api_key)

embeddings_response = client.embeddings.create(
    model=model,
    inputs=["Embed this sentence.", "As well as this one."],
)

print(embeddings_response)
```

  </TabItem>
  <TabItem value="typescript" label="typescript">

```typescript
import { Mistral } from '@mistralai/mistralai';

const apiKey = process.env.MISTRAL_API_KEY;

const client = new Mistral({apiKey: apiKey});

const embeddingsResponse = await client.embeddings.create({
  model: 'mistral-embed',
  inputs: ["Embed this sentence.", "As well as this one."],
});

console.log(embeddingsResponse);
```

  </TabItem>
</Tabs>
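A typical downstream use of the returned vectors is measuring semantic similarity between texts. The sketch below is a self-contained cosine-similarity helper you could apply to any two embedding vectors from the response; it does not depend on the SDK.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Values close to 1 indicate semantically similar texts; values near 0 indicate unrelated ones.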
For a full description of the models offered through the API, see the model documentation.