
Configure OpenAI

OpenAI is a leading artificial intelligence research company that provides advanced large language model services, including GPT-5, GPT-4.1, GPT-4o, and more. Through the OpenAI API, developers can access cutting-edge AI capabilities for complex reasoning, multimodal understanding, code generation, and many other application scenarios.

1. Get OpenAI API Key

1.1 Access the AI Platform

Visit the OpenAI platform and log in: https://platform.openai.com/

Access AI Platform

1.2 Go to API Keys Page

Click API keys in the left menu to enter the key management page.

Go to API Keys Page

1.3 Create a New API Key

Click the Create new secret key button in the upper right corner.

Click Create Button

1.4 Set API Key Name

In the popup dialog:

  1. Enter a name for the API Key (e.g., CueMate)
  2. Click the Create secret key button

Set API Key Name

1.5 Copy API Key

After successful creation, the system will display the API Key.

WARNING

Important Notice:

  • This is the only time you can see the complete API Key
  • Please copy and save it securely immediately
  • Do not share the API Key with others or commit it to code repositories
  • If leaked, immediately revoke the Key on the OpenAI platform

Copy API Key

Click the copy button, and the API Key will be copied to your clipboard.
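
If you want to confirm the key works before configuring CueMate, a quick check with the official openai Python SDK is enough. This is a minimal sketch, assuming openai>=1.0 is installed and the key is exported as the OPENAI_API_KEY environment variable rather than hardcoded:

```python
# pip install openai   (official OpenAI Python SDK, v1.x)
import os

from openai import OpenAI

# Read the key from an environment variable so it never ends up in a code repository.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Listing available models is a cheap way to confirm the key is valid and active.
models = client.models.list()
print([m.id for m in models.data][:5])
```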

2. Configure OpenAI Model in CueMate

2.1 Go to Model Settings Page

After logging into the CueMate system, click Model Settings in the dropdown menu in the upper right corner.

Go to Model Settings

2.2 Add New Model

Click the Add Model button in the upper right corner.

Click Add Model

2.3 Select OpenAI Provider

In the popup dialog:

  1. Provider Type: Select OpenAI
  2. After selecting the provider, you will automatically proceed to the next step

Select OpenAI

2.4 Fill in Configuration Information

Fill in the following information on the configuration page:

Basic Configuration

  1. Model Name: Give this model configuration a name (e.g., GPT-4 Production)
  2. API Key: Paste the OpenAI API Key you just copied
  3. Base URL: Keep the default https://api.openai.com/v1 (if using a proxy, enter the proxy address)
  4. Model Version: Select the model to use
    • gpt-5: Latest flagship model, strongest reasoning capability
    • gpt-5-mini: Lightweight GPT-5, fast
    • gpt-4.1: GPT-4 upgrade, improved performance
    • gpt-4o: Multimodal model, supports image understanding
    • gpt-4o-mini: Lightweight multimodal, cost-effective
    • gpt-4-turbo: High-performance version
    • gpt-3.5-turbo: Affordable, fast response

Fill in Basic Configuration
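
For reference, the basic fields above map directly onto the arguments of a chat completion request if you call OpenAI yourself. A minimal sketch with the official Python SDK (the environment variable and prompt are placeholders; the model should match the Model Version you selected):

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],   # the API Key copied in step 1.5
    base_url="https://api.openai.com/v1",   # replace with your proxy address if you use one
)

response = client.chat.completions.create(
    model="gpt-4o",                          # the Model Version selected above
    messages=[{"role": "user", "content": "Give me one mock interview question."}],
)
print(response.choices[0].message.content)
```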

Advanced Configuration (Optional)

Expand the Advanced Configuration panel to adjust the following parameters:

Parameters adjustable in CueMate interface:

  1. Temperature: Controls output randomness

    • Range: 0-2
    • Recommended Value: 0.7
    • Function: Higher values produce more random and creative output; lower values produce more stable and conservative output
    • Usage Suggestions:
      • Creative writing/brainstorming: 1.0-1.5
      • Regular conversation/Q&A: 0.7-0.9
      • Code generation/precise tasks: 0.3-0.5
  2. Max Tokens: Limits single output length

    • Range: 256 - 16384 (depending on model)
    • Recommended Value: 8192
    • Function: Controls the maximum number of tokens the model can generate in a single response
    • Model Limits:
      • GPT-5 series, GPT-4.1, GPT-4o series: Max 16K tokens
      • GPT-4: Max 8K tokens
      • GPT-4 Turbo, GPT-3.5 Turbo: Max 4K tokens
    • Usage Suggestions:
      • Brief Q&A: 1024-2048
      • Regular conversation: 4096-8192
      • Long text generation: 16384 (supported models only)

Advanced Configuration
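
The per-model output limits listed above matter if you reuse one configuration across models: a max_tokens value above a model's ceiling will be rejected. The helper below is an illustrative sketch only; the limit table simply mirrors the values in this section and is not an official API lookup:

```python
# Illustrative ceilings, mirroring the "Model Limits" list above (not an official API lookup).
MAX_OUTPUT_TOKENS = {
    "gpt-5": 16384, "gpt-5-mini": 16384, "gpt-5-nano": 16384,
    "gpt-4.1": 16384, "gpt-4o": 16384, "gpt-4o-mini": 16384,
    "gpt-4": 8192,
    "gpt-4-turbo": 4096, "gpt-3.5-turbo": 4096,
}

def clamp_max_tokens(model: str, requested: int, fallback_limit: int = 4096) -> int:
    """Cap a requested max_tokens value at the model's output ceiling."""
    return min(requested, MAX_OUTPUT_TOKENS.get(model, fallback_limit))

print(clamp_max_tokens("gpt-4-turbo", 8192))  # -> 4096
print(clamp_max_tokens("gpt-4o", 8192))       # -> 8192
```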

Other advanced parameters supported by OpenAI API:

While the CueMate interface only provides temperature and max_tokens adjustments, if you call OpenAI directly via API, you can also use the following advanced parameters:

  1. top_p (nucleus sampling)

    • Range: 0-1
    • Default: 1
    • Function: Samples from the smallest candidate set whose cumulative probability reaches p
    • Relationship with temperature: Usually adjust only one of them
    • Usage Suggestions:
      • Maintain diversity but avoid extremes: 0.9-0.95
      • More conservative output: 0.7-0.8
  2. frequency_penalty

    • Range: -2.0 to 2.0
    • Default: 0
    • Function: Reduces probability of repeating the same words (based on word frequency)
    • Usage Suggestions:
      • Reduce repetition: 0.3-0.8
      • Allow repetition: 0 (default)
      • Force diversity: 1.0-2.0
  3. presence_penalty

    • Range: -2.0 to 2.0
    • Default: 0
    • Function: Reduces probability of words that have already appeared (based on whether they appeared)
    • Usage Suggestions:
      • Encourage new topics: 0.3-0.8
      • Allow repeating topics: 0 (default)
  4. stop (stop sequences)

    • Type: String or array (max 4 strings)
    • Default: null
    • Function: Stops when generated content contains specified strings
    • Example: ["###", "User:", "\n\n"]
    • Use Cases:
      • Structured output: Use delimiters to control format
      • Dialogue systems: Prevent model from speaking for user
  5. logit_bias (token bias)

    • Type: Dictionary mapping token ID to bias value
    • Range: -100 to 100
    • Function: Adjusts probability of specific tokens appearing
    • Use Cases:
      • Prohibit specific words: Set to -100
      • Encourage specific words: Set to positive value
  6. stream (streaming output)

    • Type: Boolean
    • Default: false
    • Function: Enables SSE streaming, so output is returned as it is generated
    • In CueMate: Automatically handled, no manual setting needed
  7. seed (random seed)

    • Type: Integer
    • Default: null
    • Function: Fixes random seed, same input produces same output
    • Use Cases:
      • Reproducible testing
      • Comparative experiments
    • Note: Best effort, not guaranteed to be completely consistent
  8. n (generation count)

    • Type: Integer
    • Default: 1
    • Range: 1-10
    • Function: Generates multiple candidate responses in one request
    • Note: Output tokens are billed for every generated candidate
  9. user (user identifier)

    • Type: String
    • Function: Identifies end user, helps OpenAI monitor abuse
    • Suggestion: Set this in production environments
Recommended parameter combinations by scenario:

| Scenario | temperature | max_tokens | top_p | frequency_penalty | presence_penalty |
| --- | --- | --- | --- | --- | --- |
| Creative Writing | 1.0-1.2 | 4096-8192 | 0.95 | 0.5 | 0.5 |
| Code Generation | 0.2-0.5 | 2048-4096 | 0.9 | 0.0 | 0.0 |
| Q&A System | 0.7 | 1024-2048 | 0.9 | 0.0 | 0.0 |
| Summarization | 0.3-0.5 | 512-1024 | 0.9 | 0.0 | 0.0 |
| Brainstorming | 1.2-1.5 | 2048-4096 | 0.95 | 0.8 | 0.8 |
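
As a concrete illustration, the sketch below combines the "Code Generation" row from the table with several of the advanced parameters described above, in a direct call with the official Python SDK. The stop sequence, seed, user identifier, and prompt are arbitrary examples, not required values:

```python
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# "Code Generation" preset from the table, plus a few other advanced parameters.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=0.3,
    max_tokens=2048,
    top_p=0.9,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["###"],         # stop generating once this delimiter appears
    seed=42,              # best-effort reproducibility
    user="cuemate-demo",  # arbitrary end-user identifier for abuse monitoring
    stream=True,          # SSE streaming: tokens are printed as they arrive
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```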

2.5 Test Connection

After filling in the configuration, click the Test Connection button to verify if the configuration is correct.

Test Connection

If the configuration is correct, a success message will be displayed along with a sample response from the model.

Test Success

If the configuration is incorrect, an error log will be displayed, and you can view specific error information through log management.

2.6 Save Configuration

After successful testing, click the Save button to complete the model configuration.

Save Configuration

3. Use the Model

Go to the system settings page through the dropdown menu in the upper right corner, and select the model configuration you want to use in the LLM provider section.

After configuration, you can select this model in features like interview training and question generation. You can also select this model configuration for a specific interview in the interview options.

Select Model

4. Supported Model List

| No. | Model Name | Model ID | Max Output | Use Case |
| --- | --- | --- | --- | --- |
| 1 | GPT-5 | gpt-5 | 16K tokens | Latest flagship model, strongest reasoning |
| 2 | GPT-5 Mini | gpt-5-mini | 16K tokens | Lightweight GPT-5, fast |
| 3 | GPT-5 Nano | gpt-5-nano | 16K tokens | Ultra-lightweight, low cost |
| 4 | GPT-4.1 | gpt-4.1 | 16K tokens | GPT-4 upgrade, improved performance |
| 5 | GPT-4o | gpt-4o | 16K tokens | Multimodal, supports image understanding |
| 6 | GPT-4o Mini | gpt-4o-mini | 16K tokens | Lightweight multimodal |
| 7 | GPT-4 Turbo | gpt-4-turbo | 4K tokens | High-performance version |
| 8 | GPT-4 | gpt-4 | 8K tokens | Classic flagship model |
| 9 | GPT-3.5 Turbo | gpt-3.5-turbo | 4K tokens | Affordable, fast response |

5. FAQ

5.1 Invalid API Key

Symptom: "Invalid API Key" or "Incorrect API key provided" error when testing connection

Solutions:

  1. Check if the API Key was copied completely (no extra spaces)
  2. Confirm the API Key has not been revoked
  3. Check if the OpenAI account has available credits
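
When debugging this outside CueMate, the Python SDK raises a dedicated exception for bad keys, which makes the cause easy to confirm. A minimal sketch, assuming openai>=1.0:

```python
import os

import openai
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

try:
    client.models.list()
    print("API Key is valid.")
except openai.AuthenticationError as error:
    # Raised for missing, malformed, or revoked keys (HTTP 401).
    print("Invalid API Key:", error)
```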

5.2 Request Timeout

Symptom: Long wait time with no response when testing connection or using the model

Solutions:

  1. Check if network connection is normal
  2. If in China, try using a proxy or third-party API forwarding service
  3. Check firewall settings
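
When calling the API directly, both the request timeout and the forwarding endpoint can be set on the client. A sketch, where the proxy URL is a placeholder for whatever forwarding service you use:

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://your-proxy.example.com/v1",  # placeholder: your proxy or forwarding endpoint
    timeout=30.0,                                  # fail fast instead of waiting indefinitely
    max_retries=2,                                 # retry transient network errors
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```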

5.3 Insufficient Quota

Symptom: "You exceeded your current quota" or "Rate limit exceeded" error

Solutions:

  1. Log in to OpenAI platform to check account balance
  2. Top up or upgrade account plan
  3. Check API Key usage limits
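
If the error is intermittent rate limiting rather than an exhausted balance, retrying with exponential backoff usually resolves it. A sketch (model and prompt are placeholders):

```python
import os
import time

import openai
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def ask_with_retry(prompt: str, attempts: int = 3) -> str:
    """Retry on rate limiting with exponential backoff (1s, 2s, 4s, ...)."""
    for i in range(attempts):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except openai.RateLimitError:
            # Also raised for "insufficient_quota" when the balance is exhausted;
            # in that case retrying will not help and you need to top up instead.
            if i == attempts - 1:
                raise
            time.sleep(2 ** i)

print(ask_with_retry("Hello"))
```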
