Public Speaking VR Simulation

Project Overview
The Public Speaking VR Simulation is a completed, research-grade project for studying attention, memory, and public speaking performance in virtual reality environments. Built entirely in Unity, the simulation offers realistic speaking scenarios and advanced data collection capabilities for academic research.
Key Features
- Immersive VR Public Speaking Environments
- VR Movement and Teleportation
- Three Different VR Teleprompter UIs
- Voice Recording and Transcription
- Research-Grade Data Collection
Research Purpose & Academic Impact
This project was designed as a comprehensive research tool for studying public speaking performance in VR environments. The system was built for research-grade reliability rather than as a mere VR prototype, with reproducible transcription and comparison logic so that the collected data would be usable in academic studies.
AHFE 2025 Conference
Preliminary results were presented at the AHFE 2025 (Applied Human Factors and Ergonomics) conference in Orlando, Florida, demonstrating the project's research validity and its contribution to the field of human factors in VR environments.
Pilot Study
A pilot study was conducted with 10 participants to test the effectiveness of the different teleprompter UIs and scene variations, providing valuable insights for VR interface design.
Research Methodology & Evaluation
Evaluation Metrics
- Speech accuracy via transcription comparison to prepared text
- Timing and duration analysis of speeches
- Filler word detection (manually analyzed during early trials)
- Scene navigation efficiency measurement
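The speech-accuracy metric above can be sketched as a word-level edit-distance comparison between the prepared text and the transcript. This is a minimal illustration of the general technique (accuracy = 1 − word error rate); the function name and sample strings are hypothetical, not taken from the project:

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word-level accuracy = 1 - WER, computed via Levenshtein distance over tokens."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    wer = d[len(ref)][len(hyp)] / max(len(ref), 1)
    return max(0.0, 1.0 - wer)

print(word_accuracy("good morning everyone", "good morning everyone"))  # 1.0
```

A score of 1.0 means a perfect match; each inserted, deleted, or substituted word lowers the score proportionally to the length of the prepared text.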
Data Pipeline
- Results exported to Excel for statistical analysis
- Structured to support scaling for larger participant groups
- Reproducible transcription and comparison logic
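The export step can be sketched with the standard library's CSV writer (Excel opens CSV files directly). The column names and sample rows below are hypothetical, chosen only to illustrate a structure that scales to larger participant groups:

```python
import csv

# Hypothetical per-trial results; field names are illustrative only.
results = [
    {"participant": "P01", "teleprompter_ui": "A", "scene": "classroom",
     "duration_s": 182.4, "accuracy": 0.91, "filler_words": 7},
    {"participant": "P02", "teleprompter_ui": "B", "scene": "auditorium",
     "duration_s": 201.0, "accuracy": 0.87, "filler_words": 12},
]

with open("speech_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(results[0].keys()))
    writer.writeheader()        # one header row
    writer.writerows(results)   # one row per trial
```

One row per trial keeps the file in a tidy, analysis-ready shape, so adding participants only appends rows without changing the schema.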
Technical Implementation
The Public Speaking VR Simulation was fully developed in Unity, leveraging advanced VR interaction techniques and custom data collection systems:
- VR Movement: Realistic navigation within virtual environments
- Scene Transition: Seamless movement between different public speaking scenarios
- Three VR Teleprompter UIs: Various designs tested for user preference and effectiveness
- Three Public Speaking Scenes: Diverse settings for comprehensive testing
- Voice Recording: High-quality audio capture with real-time processing
- Speech-to-Text API Integration: Automated transcription for analysis
- Text Comparison Script: Automated evaluation of speech accuracy
- Data Export System: Structured data collection for research analysis
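Filler word detection was manual in the early trials; an automated pass over a transcript could look like the sketch below. The filler list and function name are assumptions for illustration, not the project's actual implementation:

```python
import re
from collections import Counter

# Assumed filler vocabulary; a real study would tune this list.
FILLERS = {"um", "uh", "like", "so"}

def count_fillers(transcript: str) -> Counter:
    """Count single-word fillers plus the two-word filler 'you know'."""
    text = transcript.lower()
    counts = Counter()
    counts["you know"] = len(re.findall(r"\byou know\b", text))
    for tok in re.findall(r"[a-z']+", text):
        if tok in FILLERS:
            counts[tok] += 1
    return counts

print(count_fillers("Um, so I think, um, you know, it's like done"))
```

Counts like these can be joined to the per-trial export alongside duration and accuracy, replacing the manual annotation used in early trials.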
Future Research Directions
The completed project has established a foundation for extended research capabilities:
- Eye-tracking integration for measuring audience engagement patterns
- Stress and anxiety indicators through speech pace and pause analysis
- AI-powered feedback systems for tone, pacing, and clarity scoring
- Expanded participant studies with larger sample sizes
Project Impact
The Public Speaking VR Simulation demonstrated the viability of VR as a research platform for studying human communication and performance. Its research-grade reliability and comprehensive data collection contributed valuable insights to the field of VR-based training and assessment, and the presentation of preliminary results at AHFE 2025 confirms the project's scientific rigor and its potential for broader impact in educational technology and human factors research.