
Configure MiniMax

MiniMax is a leading general artificial intelligence company in China and the provider of the abab series of ultra-long-text large language models. The models support an ultra-long context of up to 245K tokens and excel at long-text understanding, multi-turn conversation, and complex reasoning tasks. In 2025, MiniMax launched new-user packages that include free credits.

1. Obtain MiniMax API Key

1.1 Visit MiniMax Open Platform

Visit the MiniMax open platform and log in: https://platform.minimaxi.com

Visit MiniMax Platform

1.2 Enter API Keys Page

After logging in, click API Keys in the left menu to enter the key management page.

Enter API Keys Page

1.3 Create New API Key

Click the Create New Key button in the upper left corner.

Note: A newly registered account already has a default key named "Experience Center".

Click Create Button

1.4 Set API Key Information

In the pop-up dialog:

  1. Enter a name for the API Key (e.g., CueMate)
  2. Click the Create Key button

Set API Key Information

1.5 Copy API Key

After successful creation, the system will display the API Key.

Important: This is the only time you can see the complete API Key. Please copy and save it immediately.

Copy API Key

Click the copy button to copy the API Key to the clipboard.

2. Configure MiniMax Model in CueMate

2.1 Enter Model Settings Page

After logging into CueMate, click Model Settings in the dropdown menu in the upper right corner.

Enter Model Settings

2.2 Add New Model

Click the Add Model button in the upper right corner.

Click Add Model

2.3 Select MiniMax Provider

In the pop-up dialog:

  1. Provider Type: Select MiniMax
  2. After clicking, you will automatically proceed to the next step

Select MiniMax

2.4 Fill in Configuration Information

Fill in the following information on the configuration page:

Basic Configuration

  1. Model Name: Give this model configuration a name (e.g., MiniMax-abab6.5)
  2. API URL: Keep the default https://api.minimaxi.com/v1 (OpenAI-compatible format)
  3. API Key: Paste the MiniMax API Key you just copied
  4. Model Version: Select the model ID you want to use. Common models include:
    • MiniMax-M2: Latest coding and Agent model, 16K max output
    • abab6.5s-chat: Ultra-long context model, 245K context, 16K max output
    • abab6.5t-chat: Standard fast model, 8K context, cost-effective
    • abab6.5g-chat: General conversation model, 8K context

Fill in Basic Configuration
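
If you want to double-check these values outside CueMate, the OpenAI-compatible endpoint can be called directly. The following is a minimal sketch assuming the official openai Python SDK (1.x) is installed; the placeholder key and the MiniMax-M2 model ID come from this guide and should be replaced with your own values.

```python
# Minimal sanity check of the Basic Configuration values, assuming the
# `openai` Python SDK (>= 1.0) is installed: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="https://api.minimaxi.com/v1",  # API URL from step 2.4
    api_key="YOUR_MINIMAX_API_KEY",          # key copied in step 1.5
)

response = client.chat.completions.create(
    model="MiniMax-M2",  # any model ID from the list above also works
    messages=[{"role": "user", "content": "Reply with a one-sentence greeting."}],
)
print(response.choices[0].message.content)
```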

Advanced Configuration (Optional)

Expand the Advanced Configuration panel to adjust the following parameters:

Parameters Adjustable in CueMate Interface:

  1. Temperature: Controls output randomness

    • Range: 0-1
    • Recommended Value: 0.7
    • Function: Higher values produce more random and creative output; lower values produce more stable and conservative output
    • Usage Suggestions:
      • Creative writing/brainstorming: 0.8-1.0
      • Regular conversation/Q&A: 0.6-0.8
      • Code generation/precise tasks: 0.3-0.5
  2. Max Tokens: Limits single output length

    • Range: 256 - 16384 (depending on model)
    • Recommended Value: 4096
    • Function: Controls the maximum number of tokens in a single model response
    • Model Limits:
      • MiniMax-M2: Maximum 16K tokens
      • abab6.5s-chat: Maximum 16K tokens
      • abab6.5t-chat: Maximum 8K tokens
      • abab6.5g-chat: Maximum 8K tokens
    • Usage Suggestions:
      • Short Q&A: 1024-2048
      • Regular conversation: 4096-8192
      • Long text generation: 8192-16384

Advanced Configuration
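
The two parameters above map directly onto fields of the OpenAI-compatible request body. A short sketch, assuming the same openai SDK setup as in the basic-configuration example and the recommended default values:

```python
# Sketch of how the CueMate-adjustable parameters appear in a direct API call.
# Assumes the `openai` SDK; values use the recommended defaults from above.
from openai import OpenAI

client = OpenAI(base_url="https://api.minimaxi.com/v1", api_key="YOUR_MINIMAX_API_KEY")

response = client.chat.completions.create(
    model="abab6.5s-chat",
    messages=[{"role": "user", "content": "Summarize this document in three sentences."}],
    temperature=0.7,   # 0-1: higher = more creative, lower = more deterministic
    max_tokens=4096,   # must stay within the selected model's output limit (8K or 16K)
)
print(response.choices[0].message.content)
```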

Other Advanced Parameters Supported by MiniMax API:

Although the CueMate interface only exposes temperature and max_tokens, if you call MiniMax directly through the API you can also use the following advanced parameters (MiniMax uses an OpenAI-compatible API format); a usage sketch follows the list:

  1. top_p (nucleus sampling)

    • Range: 0-1
    • Default Value: 1
    • Function: Samples from the minimum candidate set where cumulative probability reaches p
    • Relationship with temperature: Usually only adjust one of them
    • Usage Suggestions:
      • Maintain diversity but avoid unreasonable output: 0.9-0.95
      • More conservative output: 0.7-0.8
  2. stream (streaming output)

    • Type: Boolean
    • Default Value: false
    • Function: Enable SSE streaming return, generate and return simultaneously
    • In CueMate: Automatically handled, no manual setting required
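
A sketch of a direct call using these two parameters, again assuming the openai SDK and the endpoint configured earlier:

```python
# top_p and stream are not exposed in the CueMate UI but can be passed when
# calling the OpenAI-compatible endpoint directly (assumes the `openai` SDK).
from openai import OpenAI

client = OpenAI(base_url="https://api.minimaxi.com/v1", api_key="YOUR_MINIMAX_API_KEY")

stream = client.chat.completions.create(
    model="abab6.5t-chat",
    messages=[{"role": "user", "content": "Explain nucleus sampling in two sentences."}],
    top_p=0.9,    # sample from the smallest token set whose cumulative probability reaches 0.9
    stream=True,  # SSE streaming: chunks arrive as they are generated
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```
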
Recommended parameter combinations by scenario:

| No. | Scenario | temperature | max_tokens | top_p | Suitable Model |
|-----|----------|-------------|------------|-------|----------------|
| 1 | Creative Writing | 0.8-1.0 | 4096-8192 | 0.95 | MiniMax-M2, abab6.5s |
| 2 | Code Generation | 0.2-0.5 | 2048-4096 | 0.9 | MiniMax-M2 |
| 3 | Q&A System | 0.7 | 1024-2048 | 0.9 | abab6.5t, abab6.5g |
| 4 | Long Text Generation | 0.6-0.8 | 8192-16384 | 0.9 | MiniMax-M2, abab6.5s |

2.5 Test Connection

After filling in the configuration, click the Test Connection button to verify that the configuration is correct.

Test Connection

If the configuration is correct, a successful test message will be displayed, along with a sample response from the model.

Test Success

2.6 Save Configuration

After a successful test, click the Save button to complete the model configuration.

Save Configuration

3. Use Model

Open the dropdown menu in the upper right corner, enter the system settings page, and select the model configuration you want to use in the LLM Provider section.

After configuration, you can use this model in interview training, question generation, and other features. You can also select a model configuration for a specific interview in the interview options.

Select Model

4. Supported Model List

4.1 MiniMax Series

| No. | Model Name | Model ID | Max Output | Use Case |
|-----|------------|----------|------------|----------|
| 1 | MiniMax-M2 | MiniMax-M2 | 16K tokens | Coding tasks, Agent workflows, complex reasoning |
| 2 | abab6.5s-chat | abab6.5s-chat | 16K tokens | Long text processing, complex multi-turn conversations, deep analysis |
| 3 | abab6.5t-chat | abab6.5t-chat | 8K tokens | Regular conversation, fast response, cost-effective |
| 4 | abab6.5g-chat | abab6.5g-chat | 8K tokens | General conversation, daily use |

5. Common Issues

5.1 Invalid API Key

Symptom: API Key error when testing connection

Solution:

  1. Check if the API Key format is correct
  2. Confirm the API Key has not expired or been disabled
  3. Check if the account has available quota
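
If the key keeps failing, a direct call can confirm whether the problem is the key itself. A hedged sketch using the openai SDK's error classes (these classes belong to the SDK, not to CueMate or MiniMax):

```python
# Distinguish an invalid/disabled key from other failures (assumes the
# `openai` SDK; replace the placeholder with the key under test).
from openai import OpenAI, AuthenticationError

client = OpenAI(base_url="https://api.minimaxi.com/v1", api_key="KEY_UNDER_TEST")
try:
    client.chat.completions.create(
        model="abab6.5t-chat",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=8,
    )
    print("API Key works")
except AuthenticationError:
    print("Invalid or disabled API Key: create a new key on the MiniMax platform")
```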

5.2 Request Timeout

Symptom: No response for a long time during testing or use

Solution:

  1. Check if the network connection is normal
  2. Confirm the API URL address is correct: https://api.minimaxi.com/v1
  3. Check firewall settings
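
To separate a network problem from a service problem, a simple reachability probe against the configured URL can help. A sketch assuming the requests package; any HTTP status code (even an error code) still proves the host is reachable, while a timeout points at the network or firewall:

```python
# Rough reachability probe for the MiniMax endpoint (assumes `requests`).
import requests

try:
    r = requests.get("https://api.minimaxi.com/v1", timeout=10)
    print("Host reachable, HTTP status:", r.status_code)  # any status = reachable
except requests.exceptions.Timeout:
    print("Timed out: check the network connection and firewall settings")
except requests.exceptions.ConnectionError as exc:
    print("Connection failed:", exc)
```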

5.3 Insufficient Quota

Symptom: Quota exhausted or insufficient balance message

Solution:

  1. Log in to MiniMax platform to check account balance
  2. Recharge or claim new user package
  3. Optimize usage frequency
