
Core Features

CueMate provides a comprehensive set of intelligent interview training features to help job seekers practice through mock interviews and prepare for real interviews.

1. Feature List

CueMate includes the following features (see Section 4 for open source vs. subscription availability):

  1. Home - Quick start for interview training; the entry point for selecting positions and training modes
  2. Mock Interview - The AI plays the interviewer, automatically asking questions and evaluating answers for a complete interview experience
  3. Interview Training - Assistance for real interview scenarios, with real-time recognition of interviewer questions and answer suggestions
  4. Voice Question - Ask the AI questions via text or voice and quickly get professional answer suggestions
  5. Voice Test - Test microphone and speaker devices to ensure audio equipment works properly
  6. Tray Menu - Quick access from the system tray to view data statistics, switch app state, and modify common settings
  7. New Job - Create interview positions, filling in the JD description and resume information as the training foundation
  8. Job List - View and manage all interview positions, with support for edit, delete, search, and filter
  9. Interview Questions - Create and manage personalized interview question banks to build an interview knowledge system
  10. Interview Review - View historical interview records, analyze answer quality, and summarize lessons learned
  11. Settings - System parameter configuration, including language preference, default model, interface theme, etc.
  12. Model Settings - Configure and manage AI LLM services, with support for multiple domestic and international models
  13. ASR Settings - Configure the speech recognition service and select microphone and speaker devices
  14. Logs - View system logs by time and level to quickly locate issues
  15. Docker Monitor - Real-time monitoring of Docker service status, including CPU and memory usage
  16. Operation Logs - Record user operation history to support audit and traceability
  17. Notifications - In-app notification management covering position, question bank, interview report, knowledge base, and license notifications
  18. Vector Knowledge - Vector database management with semantic search and question bank sync
  19. Prompt Management - Customize AI system prompts to optimize AI answer quality and style
  20. AI Records - View all AI conversation history, with search, filter, and export
  21. Pixel Ads - Advertisement display system based on a 32×20 pixel grid, supporting ad viewing and simulation
  22. Preset Questions - System-preset bank of common interview questions, continuously updated by the CueMate team, auto-imported on version updates, with sync to position questions (subscription version only)
  23. License Management - Manage the subscription license, view authorization status and validity, and batch-import the built-in question banks

2. Detailed Feature Description

2.1 Core Interview Features

2.1.1 Real-time Speech Recognition

Dual-channel Audio Capture:

  • Microphone Capture: Real-time recognition of your voice input
  • System Audio Capture: Recognize speaker output in real time to capture the interviewer's questions (supports Tencent Meeting, Zoom, DingTalk, etc.)
  • Multi-language Support: Automatic Chinese and English recognition
  • Low Latency: Speech-to-text delay under 200ms
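
The capture pipeline itself is not documented here, but a minimal sketch of the microphone channel, assuming the Python sounddevice library, looks like the following; the system-audio channel is opened the same way against a loopback device, whose selection is platform-specific (e.g. WASAPI loopback on Windows).

    import numpy as np
    import sounddevice as sd

    # List the available audio devices so the microphone and the
    # system-audio (loopback) device can be picked by index.
    print(sd.query_devices())

    SAMPLE_RATE = 16000   # a typical ASR input rate
    BLOCK_SIZE = 3200     # 200 ms of audio per block at 16 kHz

    def on_mic_audio(indata, frames, time_info, status):
        # indata is a float32 NumPy array; in a real pipeline it would be
        # forwarded to the ASR engine instead of just measured here.
        print(f"mic block: {frames} frames, mean level {float(np.abs(indata).mean()):.4f}")

    # Open the microphone channel; the system-audio channel would use the
    # loopback device index in the same way.
    with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, dtype="float32",
                        blocksize=BLOCK_SIZE, callback=on_mic_audio):
        sd.sleep(5000)  # capture for five seconds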

Speech Recognition Engine:

  • Local ASR Service: High-precision speech recognition based on CueMate-ASR
  • Real-time Transcription: Transcribe as you speak, no waiting
  • Auto Punctuation: Smart sentence segmentation for better readability
  • Noise Suppression: Filter ambient noise for improved accuracy
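
How audio reaches CueMate-ASR is not specified on this page; as a rough illustration only, a client could hand short PCM chunks to a local HTTP endpoint such as the hypothetical one below (URL, parameters, and response shape are all assumptions).

    import requests

    # Hypothetical local ASR endpoint; the real CueMate-ASR interface may differ.
    ASR_URL = "http://127.0.0.1:8001/asr/recognize"

    def transcribe_chunk(pcm_bytes: bytes, sample_rate: int = 16000) -> str:
        """Send one short PCM chunk to the local ASR service and return the text."""
        resp = requests.post(
            ASR_URL,
            data=pcm_bytes,
            headers={"Content-Type": "application/octet-stream"},
            params={"sample_rate": sample_rate, "language": "auto"},
            timeout=2,  # keep the round trip short; the target latency is low
        )
        resp.raise_for_status()
        return resp.json().get("text", "")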

2.1.2 AI LLM Support

NOTE

CueMate supports multiple mainstream large language models, providing flexible AI service options including international, domestic, and local models.

International Models (5 providers):

  1. OpenAI - Provider of GPT series models
  2. Anthropic - Provider of Claude series models
  3. Google Gemini - Google's multimodal large model
  4. Azure OpenAI - OpenAI service on Microsoft Azure platform
  5. Amazon Bedrock - AWS large model service platform

Domestic Models (14 providers):

  1. Alibaba Cloud Bailian - Alibaba Cloud's enterprise-grade LLM service platform
  2. Tencent Hunyuan - Tencent's self-developed large language model
  3. Tencent Cloud - Tencent Cloud's LLM service
  4. Zhipu AI - GLM series models from Zhipu AI
  5. DeepSeek - High-performance large model from DeepSeek
  6. Kimi - Kimi intelligent assistant from Moonshot AI
  7. iFlytek Spark - Spark cognitive large model from iFlytek
  8. Volcengine - Doubao large model service from ByteDance
  9. SiliconFlow - Service platform focused on AI inference acceleration
  10. Baidu Qianfan - Baidu's large language model platform
  11. MiniMax - Ultra-long text large model from MiniMax
  12. StepFun - Step series models focusing on long context
  13. SenseNova - SenseNova series models from SenseTime
  14. Baichuan - Baichuan series models from Baichuan Intelligence

Local/Private Deployment (5):

  1. Local Models - Large model services supporting local deployment
  2. Ollama - Open-source tool for running large models locally
  3. vLLM - High-performance large model inference engine
  4. Xorbits Inference - Inference framework supporting multiple models
  5. Regolo - Enterprise private deployment solution
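
Many of the providers above, including the local options, expose OpenAI-compatible endpoints, so a single client interface can cover cloud and local models alike; the base URLs, keys, and model names below are examples, not CueMate's configuration.

    from openai import OpenAI

    # Example clients for OpenAI-compatible endpoints; keys, URLs, and model
    # names are placeholders taken from each provider's public documentation.
    providers = {
        "openai":   OpenAI(api_key="sk-..."),
        "deepseek": OpenAI(api_key="sk-...", base_url="https://api.deepseek.com"),
        "ollama":   OpenAI(api_key="ollama", base_url="http://localhost:11434/v1"),
    }

    reply = providers["ollama"].chat.completions.create(
        model="qwen2.5:7b",  # any model already pulled with `ollama pull`
        messages=[{"role": "user", "content": "Introduce yourself for a backend engineer interview."}],
    )
    print(reply.choices[0].message.content)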

Smart Answer Strategy:

  • Context Understanding: Generate coherent answers by drawing on the conversation history
  • Role Playing: Simulate a professional job seeker's answering style
  • Length Control: Adjust answer length based on question type
  • Avoid Clichés: Generate natural and authentic answer content
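
A sketch of how these strategies could translate into an LLM chat request, with context carried as message history and length control expressed in the system prompt; the wording is illustrative, not CueMate's actual prompt.

    def build_messages(question: str, history: list[dict], position: str) -> list[dict]:
        """Assemble a chat request that keeps conversational context and
        controls answer length; the prompt wording is illustrative."""
        system = (
            f"You are a candidate interviewing for a {position} role. "
            "Answer naturally and concretely. Keep behavioural answers under "
            "150 words and technical answers under 250 words."
        )
        # history holds earlier turns as {"role": "user"/"assistant", "content": ...}
        return [{"role": "system", "content": system}, *history,
                {"role": "user", "content": question}]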

2.1.3 Knowledge Base Enhancement (RAG)

Vector Database:

  • ChromaDB Integration: High-performance vector retrieval engine
  • Semantic Search: Understand question intent, retrieve relevant knowledge
  • Multi-document Support: PDF, Word, Markdown, plain text
  • Auto Chunking: Smart document splitting, maintaining semantic integrity
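
A minimal example of the vector-store flow described above, using the chromadb Python client; the path, collection name, and sample documents are placeholders.

    import chromadb

    # Persistent local vector store; the path and collection name are placeholders.
    client = chromadb.PersistentClient(path="./cuemate_vectors")
    collection = client.get_or_create_collection("interview_knowledge")

    # Index a few knowledge chunks (chunking and embedding settings omitted).
    collection.add(
        ids=["faq-001", "faq-002"],
        documents=[
            "A Python list is a dynamic array, while a tuple is immutable.",
            "TCP provides ordered, reliable delivery; UDP does not.",
        ],
        metadatas=[{"topic": "python"}, {"topic": "network"}],
    )

    # Semantic search: retrieve the chunks most relevant to a question.
    hits = collection.query(query_texts=["difference between list and tuple"], n_results=2)
    print(hits["documents"][0])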

Knowledge Base Management:

  • Batch Import: Support drag-and-drop upload of multiple documents
  • Preset Question Bank: System-preset common interview Q&A (requires a license)
  • Custom Classification: Organize knowledge by tech stack, position type, etc.
  • Version Management: Track knowledge base update history

Retrieval Augmented Generation:

  1. Question Understanding: Analyze the core intent of interviewer's question
  2. Knowledge Retrieval: Find relevant content from vector database
  3. Answer Synthesis: Combine retrieval results with AI generation capability
  4. Source Attribution: Display the knowledge sources cited in answers
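
Putting the four steps together, a bare-bones RAG loop might look like the sketch below, which reuses a ChromaDB collection and an OpenAI-compatible client; the prompt wording and citation format are assumptions.

    def answer_with_rag(question: str, collection, llm_client, model: str) -> str:
        """Minimal RAG loop: retrieve, ground the prompt, generate, cite sources."""
        hits = collection.query(query_texts=[question], n_results=3)
        chunk_ids, chunks = hits["ids"][0], hits["documents"][0]

        context = "\n".join(f"[{cid}] {text}" for cid, text in zip(chunk_ids, chunks))
        prompt = (
            "Answer the interview question using the knowledge below and state "
            f"which entries you relied on.\n\nKnowledge:\n{context}\n\n"
            f"Question: {question}"
        )
        reply = llm_client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return reply.choices[0].message.content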

2.1.4 Interview Training Modes

Mock Interview:

  • Position Customization: Generate targeted questions based on target position
  • Difficulty Levels: Beginner, Intermediate, Advanced, Expert
  • Random Questions: Intelligently select questions from question bank
  • Timed Training: Simulate actual interview time pressure
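
Question selection can be as simple as sampling from the question bank by difficulty, as in this sketch; the record shape is an assumption.

    import random

    def pick_questions(bank: list[dict], difficulty: str, count: int = 5) -> list[dict]:
        """Randomly sample questions of one difficulty from the question bank.
        The record shape ({"text": ..., "difficulty": ...}) is an assumption."""
        pool = [q for q in bank if q["difficulty"] == difficulty]
        return random.sample(pool, min(count, len(pool)))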

Answer Evaluation:

  • Answer Quality Scoring: AI evaluates completeness and accuracy
  • Improvement Suggestions: Provide specific optimization directions
  • Reference Answers: Display excellent answer examples
  • Weakness Analysis: Identify knowledge points needing improvement
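
One way to implement such scoring is to ask the LLM for a structured verdict, as sketched below; the rubric wording and JSON schema are illustrative and assume the model returns valid JSON.

    import json

    RUBRIC = (
        "Score the candidate's answer from 1 to 10 for completeness and accuracy, "
        "then give concrete improvement suggestions. Reply with JSON containing "
        '"score" and "suggestions" only.'
    )

    def evaluate_answer(llm_client, model: str, question: str, answer: str) -> dict:
        reply = llm_client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
            ],
        )
        # Assumes the model honours the JSON-only instruction.
        return json.loads(reply.choices[0].message.content)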

History Review:

  • Interview Recording: Save complete training process
  • Text Records: Text version of questions and answers
  • Statistical Analysis: Training count, average score, progress curve
  • Wrong Answer Collection: Collect incorrectly answered questions

2.1.5 Prompt Engineering

System Prompts:

  • Role Definition: Set AI's identity and behavior norms
  • Answer Style: Control output tone and format
  • Constraints: Prevent AI from generating inappropriate content
  • Template Variables: Dynamically insert position, skills, and other info
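
Template variables can be handled with ordinary string templating; the variable names and prompt text below are illustrative only.

    from string import Template

    # Illustrative system-prompt template; the variable names are assumptions.
    SYSTEM_TEMPLATE = Template(
        "You are interviewing for the $position position. Key skills: $skills. "
        "Answer in a $tone tone and keep responses focused on the job requirements."
    )

    prompt = SYSTEM_TEMPLATE.substitute(
        position="Backend Engineer",
        skills="Java, Spring Boot, MySQL",
        tone="confident but concise",
    )
    print(prompt)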

User Prompts:

  • Quick Templates: Preset commonly used prompt templates
  • Custom Editing: Adjust prompts based on personal needs
  • A/B Testing: Compare effects of different prompts
  • Version History: Manage prompt iteration records

Prompt Library:

  • Industry Classification: Internet, Finance, Manufacturing, etc.
  • Position Classification: Backend, Frontend, Algorithm, Product, etc.
  • Skill Tags: Java, Python, React, Machine Learning, etc.
  • Community Sharing: Download and share excellent prompts

2.2 Desktop Client Features

System Integration:

  • Global Shortcuts: Quick launch and hide window
  • Floating Window Mode: Always on top, doesn't block interview software
  • System Tray: Minimize to tray, run in background
  • Auto Start: Auto launch on startup, always ready
  • Multi-language Interface: Supports Simplified Chinese, Traditional Chinese, and English
  • Theme Switching: Supports light/dark theme switching

Audio Management:

  • Microphone Test: Detect if microphone works properly
  • System Audio Test: Verify interview software audio capture
  • Volume Control: Independently adjust input and output volume
  • Device Switching: Support multiple audio devices
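
A microphone test can be as simple as recording a short clip and checking its signal level, as in this sketch using the sounddevice library; the silence threshold is an assumption that may need tuning per device.

    import numpy as np
    import sounddevice as sd

    def microphone_test(seconds: float = 1.0, samplerate: int = 16000) -> bool:
        """Record a short clip and check that the microphone is not silent."""
        clip = sd.rec(int(seconds * samplerate), samplerate=samplerate,
                      channels=1, dtype="float32")
        sd.wait()
        rms = float(np.sqrt(np.mean(np.square(clip))))
        print(f"RMS level: {rms:.4f}")
        return rms > 0.001  # silence threshold; tune per device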

Privacy Protection:

  • Local Processing: Speech recognition runs locally
  • Data Encryption: Sensitive information stored encrypted
  • Auto Cleanup: Periodically clear temporary files
  • Permission Management: Fine-grained feature permission control
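
CueMate's storage format is not described here, but encrypting locally stored secrets such as API keys is commonly done with symmetric encryption; the sketch below uses the cryptography library's Fernet recipe purely as an illustration.

    from cryptography.fernet import Fernet

    # Symmetric encryption of a locally stored secret (e.g. an API key).
    key = Fernet.generate_key()          # in practice, persist the key securely
    cipher = Fernet(key)

    token = cipher.encrypt(b"sk-example-api-key")
    print(cipher.decrypt(token))         # b'sk-example-api-key'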

2.3 Web Admin Features

Account Management:

  • User Registration/Login: Email verification, secure login
  • Personal Information: Avatar, nickname, bio
  • Account Security: Change password, two-factor authentication
  • Login History: View login records and devices

System Configuration:

  • Model Settings: Configure and test LLM services
  • Voice Settings: Select ASR engine and language
  • Interface Settings: Theme, language, display options
  • Notifications: Position, questions, interview reports, knowledge base, license system notifications
  • Deployment Configuration (Distributed Mode): SSH connection settings, remote server address, authentication method

Data Statistics:

  • Usage Stats: API call count, token consumption
  • Cost Analysis: Fee statistics for each LLM service
  • Performance Monitoring: Response time, success rate
  • Error Logs: System exceptions and error records
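
Cost analysis boils down to multiplying token counts by per-token prices; the prices in this sketch are placeholder values, since actual rates vary by provider and model.

    # Placeholder per-1K-token prices in USD; actual rates vary by provider/model.
    PRICE_PER_1K = {"prompt": 0.002, "completion": 0.006}

    def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
        return (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] \
             + (completion_tokens / 1000) * PRICE_PER_1K["completion"]

    print(f"${estimate_cost(1200, 450):.4f}")  # -> $0.0051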

3. Technical Features

3.1 High Performance

  • Real-time Response: Both speech recognition and AI generation complete in under 1 second
  • Concurrent Processing: Support multi-channel audio simultaneous processing
  • Resource Optimization: Low CPU and memory usage
  • Edge Computing: Local models reduce cloud dependency

3.2 High Availability

  • Service Monitoring: Real-time monitoring of the status of all services
  • Auto Restart: Services automatically recover from exceptions
  • Fallback Strategy: Automatically switch to a backup model when the primary LLM is unavailable
  • Resume on Disconnect: Auto reconnect after network interruption
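
The exact failover logic is not documented on this page; a simple version of the fallback strategy is to walk an ordered list of configured LLM clients until one succeeds, as sketched below.

    def chat_with_fallback(clients: list, models: list[str], messages: list[dict]):
        """Try each configured LLM in order and fall back to the next on failure.
        The retry policy is an assumption, not CueMate's exact logic."""
        last_error = None
        for client, model in zip(clients, models):
            try:
                return client.chat.completions.create(model=model, messages=messages)
            except Exception as exc:      # network errors, rate limits, timeouts...
                last_error = exc
        raise RuntimeError("All configured LLM services failed") from last_error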

3.3 Extensibility

  • Plugin System: Support third-party plugin extensions
  • API Interface: Open REST API for external calls
  • Webhook: Event notifications and integrations
  • Custom Models: Connect privately deployed LLM models
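
The REST API and webhook payloads are not described on this page; as a generic illustration only, a webhook consumer could look like the following Flask sketch, where the route and event fields are hypothetical.

    from flask import Flask, request

    app = Flask(__name__)

    # Hypothetical webhook consumer; CueMate's actual routes, event names,
    # and payload fields are not documented on this page.
    @app.post("/cuemate/webhook")
    def on_event():
        event = request.get_json(force=True)
        print("received event:", event.get("type"), event.get("data"))
        return {"ok": True}

    if __name__ == "__main__":
        app.run(port=9000)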

4. Version Information

NOTE

Open Source vs Subscription Version

  • Open Source Version: Includes all core interview training features; suitable for individual users and small teams
  • Subscription Version: Builds on the open source version and adds the preset question bank feature

For detailed subscription information, please see the License Management page.

For more feature details, please see the Features Guide section.

Released under the GPL-3.0 License.