Configure Baichuan AI
Baichuan AI is a leading large language model company in China that provides the Baichuan series of models. Baichuan4 is its latest flagship model and supports 128K ultra-long context. The API is fully compatible with the OpenAI format, and individual users can register and use it directly.
1. Obtain Baichuan AI API Key
1.1 Visit Baichuan AI Open Platform
Visit the Baichuan AI open platform and log in: https://platform.baichuan-ai.com/

1.2 Enter API Keys Page
After logging in, click API Key Management in the left menu to enter the key management page.

1.3 Create New API Key
Click the Create API Key button in the upper right corner.
Note: When you first register an account, there will be a default key named "Experience Zone".

1.4 Set API Key Information
In the pop-up dialog:
- Enter a name for the API Key (e.g., CueMate)
- Click the Create button

1.5 Copy API Key
After successful creation, the system will display the API Key.
Important: This is the only time you can see the complete API Key. Please copy and save it immediately.

Click the copy button to copy the API Key to your clipboard.
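Optional: before configuring CueMate, you can confirm the key is accepted by sending a minimal request to the OpenAI-compatible endpoint. The sketch below is only an illustration; YOUR_API_KEY is a placeholder, and the endpoint path and response follow the standard OpenAI chat completions format.

```python
# Minimal sketch to verify a newly created Baichuan AI API key.
# Replace YOUR_API_KEY with the key copied in step 1.5.
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://api.baichuan-ai.com/v1/chat/completions"

resp = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Baichuan4",  # any model ID listed in section 4
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 32,
    },
    timeout=30,
)
print(resp.status_code)  # 200 means the key is accepted
print(resp.json())       # the reply in OpenAI response format
```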
2. Configure Baichuan AI Model in CueMate
2.1 Enter Model Settings Page
After logging into CueMate, click Model Settings in the dropdown menu in the upper right corner.

2.2 Add New Model
Click the Add Model button in the upper right corner.

2.3 Select Baichuan AI Provider
In the pop-up dialog:
- Provider Type: Select Baichuan AI
- After clicking, you will automatically proceed to the next step

2.4 Fill in Configuration Information
Fill in the following information on the configuration page:
Basic Configuration
- Model Name: Give this model configuration a name (e.g., Baichuan4)
- API URL: Keep the default https://api.baichuan-ai.com/v1 (OpenAI-compatible format)
- API Key: Paste the Baichuan AI API Key you just copied
- Model Version: Select the model ID you want to use. Common models include:
  - Baichuan4: Latest flagship model, most capable, suitable for complex tasks
  - Baichuan3-Turbo: Fast response model, cost-effective
  - Baichuan3-Turbo-128k: Ultra-long context model, 128K tokens
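For reference, the sketch below shows how these configuration fields map onto a standard OpenAI-compatible client call. This mirrors the API format, not CueMate's internal implementation; YOUR_API_KEY is a placeholder.

```python
# Sketch of how the Basic Configuration fields map onto an API call.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",                     # the API Key field
    base_url="https://api.baichuan-ai.com/v1",  # the API URL field (default)
)

# The Model Version field corresponds to the model ID passed per request:
# "Baichuan4", "Baichuan3-Turbo", or "Baichuan3-Turbo-128k".
completion = client.chat.completions.create(
    model="Baichuan4",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(completion.choices[0].message.content)
```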

Advanced Configuration (Optional)
Expand the Advanced Configuration panel to adjust the following parameters:
Parameters Adjustable in CueMate Interface:
Temperature: Controls output randomness
- Range: 0-1
- Recommended Value: 0.7
- Function: Higher values produce more random and creative output, lower values produce more stable and conservative output
- Usage Suggestions:
- Creative writing/brainstorming: 0.8-1.0
- Regular conversation/Q&A: 0.6-0.8
- Code generation/precise tasks: 0.3-0.5
Max Tokens: Limits single output length
- Range: 256 - 8192 (depending on model)
- Recommended Value: 4096
- Function: Controls the maximum number of tokens in a single response from the model
- Model Limits:
- Baichuan4: Maximum 8K tokens
- Baichuan3-Turbo: Maximum 8K tokens
- Baichuan3-Turbo-128k: Maximum 4K tokens (output limit)
- Usage Suggestions:
- Short Q&A: 1024-2048
- Regular conversation: 4096-8192
- Long text processing: the long context applies to input only; output is still capped at 4K-8K tokens
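To get a feel for the effect of temperature, you can send the same prompt at a low and a high setting and compare the replies. A small sketch using the OpenAI-compatible client (placeholder key and illustrative prompt):

```python
# Illustration of the temperature / max_tokens trade-off.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.baichuan-ai.com/v1")
prompt = "Suggest a title for an article about interview preparation."

for temperature in (0.3, 0.9):  # low = stable/conservative, high = more creative
    reply = client.chat.completions.create(
        model="Baichuan3-Turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=1024,  # cap on the length of this single response
    )
    print(f"temperature={temperature}: {reply.choices[0].message.content}")
```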

Other Advanced Parameters Supported by Baichuan AI API:
The CueMate interface only exposes temperature and max_tokens. If you call Baichuan AI directly through its API (which uses the OpenAI-compatible format), the following advanced parameters are also available; a call sketch follows the parameter descriptions below:
top_p (nucleus sampling)
- Range: 0-1
- Default Value: 0.8
- Function: Samples from the smallest candidate set whose cumulative probability reaches p
- Relationship with temperature: Usually only adjust one of them
- Usage Suggestions:
- Maintain diversity but avoid unreasonable output: 0.9-0.95
- More conservative output: 0.7-0.8
top_k (top-k sampling)
- Range: 0-100
- Default Value: 50
- Function: Samples from the k candidate words with the highest probability
- Usage Suggestions:
- More diverse: 80-100
- More conservative: 20-40
stream (streaming output)
- Type: Boolean
- Default Value: false
- Function: Enables SSE streaming so that content is returned incrementally as it is generated
- In CueMate: Automatically handled, no manual setting required
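As a reference, here is a minimal sketch of a direct API call that uses these parameters, including streaming. YOUR_API_KEY is a placeholder, and the SSE parsing assumes Baichuan AI follows the standard OpenAI streaming chunk format ("data: {...}" lines terminated by "data: [DONE]"), as its OpenAI compatibility suggests.

```python
# Sketch of a direct API call using the advanced parameters above.
import json
import requests

URL = "https://api.baichuan-ai.com/v1/chat/completions"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

payload = {
    "model": "Baichuan4",
    "messages": [{"role": "user", "content": "Give three interview tips."}],
    "temperature": 0.7,
    "top_p": 0.9,    # nucleus sampling
    "top_k": 50,     # top-k sampling (Baichuan-specific extension)
    "stream": True,  # enable SSE streaming
}

with requests.post(URL, headers=HEADERS, json=payload, stream=True, timeout=60) as resp:
    for raw in resp.iter_lines():
        if not raw:
            continue
        line = raw.decode("utf-8")
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        # each chunk carries an incremental piece of the reply
        print(chunk["choices"][0]["delta"].get("content", ""), end="", flush=True)
```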
Recommended parameter combinations by scenario:
| No. | Scenario | temperature | max_tokens | top_p | Suitable Model |
|---|---|---|---|---|---|
| 1 | Creative Writing | 0.8-1.0 | 4096-8192 | 0.9 | Baichuan4 |
| 2 | Code Generation | 0.2-0.5 | 2048-4096 | 0.8 | Baichuan4 |
| 3 | Q&A System | 0.7 | 1024-2048 | 0.8 | Baichuan3-Turbo |
| 4 | Long Text Understanding | 0.6-0.8 | 4096 | 0.8 | Baichuan3-Turbo-128k |
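If you call the API directly, the table above can also be expressed as a small set of presets. The ask() helper below is hypothetical and not part of CueMate; concrete values are picked from within the table's ranges, and YOUR_API_KEY is a placeholder.

```python
# The scenario table expressed as reusable presets (values from the table's ranges).
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY", base_url="https://api.baichuan-ai.com/v1")

PRESETS = {
    "creative_writing": {"model": "Baichuan4",            "temperature": 0.9, "max_tokens": 8192, "top_p": 0.9},
    "code_generation":  {"model": "Baichuan4",            "temperature": 0.3, "max_tokens": 4096, "top_p": 0.8},
    "qa_system":        {"model": "Baichuan3-Turbo",      "temperature": 0.7, "max_tokens": 2048, "top_p": 0.8},
    "long_text":        {"model": "Baichuan3-Turbo-128k", "temperature": 0.7, "max_tokens": 4096, "top_p": 0.8},
}

def ask(scenario: str, prompt: str) -> str:
    """Send a prompt using the parameter preset for the given scenario."""
    p = PRESETS[scenario]
    reply = client.chat.completions.create(
        model=p["model"],
        messages=[{"role": "user", "content": prompt}],
        temperature=p["temperature"],
        max_tokens=p["max_tokens"],
        top_p=p["top_p"],
    )
    return reply.choices[0].message.content

print(ask("qa_system", "What is the STAR method in interviews?"))
```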
2.5 Test Connection
After filling in the configuration, click the Test Connection button to verify that the configuration is correct.

If the configuration is correct, a successful test message will be displayed, along with a sample response from the model.

2.6 Save Configuration
After a successful test, click the Save button to complete the model configuration.

3. Use Model
Open the system settings page from the dropdown menu in the upper right corner, then select the model configuration you want to use in the LLM Provider section.
Once configured, you can select this model in interview training, question generation, and other features. You can also choose a model configuration for a specific interview in the interview options.

4. Supported Model List
4.1 Baichuan Series
| No. | Model Name | Model ID | Context / Max Output | Use Case |
|---|---|---|---|---|
| 1 | Baichuan4 | Baichuan4 | 8K max output | Flagship model, complex tasks, deep understanding |
| 2 | Baichuan3-Turbo | Baichuan3-Turbo | 8K max output | Fast response, regular conversation, cost-effective |
| 3 | Baichuan3-Turbo-128k | Baichuan3-Turbo-128k | 128K context, 4K max output | Ultra-long context, long text processing, deep analysis |
5. Common Issues
5.1 Invalid API Key
Symptom: API Key error when testing connection
Solution:
- Check if the API Key format is correct
- Confirm the API Key has not expired or been disabled
- Check if the account has available quota
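To tell an invalid key apart from a network or quota problem, you can check the HTTP status of a minimal direct request. This is a rough diagnostic sketch, assuming the endpoint returns conventional HTTP status codes; YOUR_API_KEY is a placeholder.

```python
# Rough diagnostic: distinguish a rejected key (401/403) from other failures.
import requests

resp = requests.post(
    "https://api.baichuan-ai.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "Baichuan4",
          "messages": [{"role": "user", "content": "ping"}],
          "max_tokens": 8},
    timeout=30,
)

if resp.status_code in (401, 403):
    print("The API Key is rejected - check that it is copied correctly and not disabled.")
elif resp.ok:
    print("The API Key works; the problem is likely elsewhere in the configuration.")
else:
    print(f"Request failed with HTTP {resp.status_code}: {resp.text}")
```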
5.2 Request Timeout
Symptom: No response for a long time during testing or use
Solution:
- Check if the network connection is normal
- Confirm the API URL address is correct: https://api.baichuan-ai.com/v1
- Check firewall settings
5.3 Insufficient Quota
Symptom: Quota exhausted or insufficient balance message
Solution:
- Log in to Baichuan AI platform to check account balance
- Recharge or apply for more quota
- Optimize usage frequency
