COLLECTION – Faculty Publications 2025-2026

Title: OPTIMIZING PERFORMANCE EVALUATION OF BADMINTON PLAYERS USING PROXIMAL POLICY OPTIMIZATION (PPO) ALGORITHM
Author(s): Mrs. M. Dhavapriya
File: Paper5.pdf
Abstract

Badminton performance evaluation has traditionally relied on subjective coaching
assessments and basic statistical metrics, limiting scalability and real-time feedback. This study
proposes a novel framework leveraging Proximal Policy Optimization (PPO)—a reinforcement
learning (RL) algorithm—to automate and enhance player performance analysis through data-driven
decision-making. By integrating multi-modal inputs (computer vision for shuttle tracking, player
feedback, and match statistics), the system trains PPO agents to evaluate tactical choices (e.g., shot selection, footwork efficiency) and strategic adaptability during rallies [1]. The PPO-based model dynamically optimizes a reward function that quantifies player strengths and weaknesses, balancing short-term actions (e.g., smash effectiveness) with long-term game outcomes (e.g., rally win probability).
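The blend of short-term action quality and long-term rally outcome described above can be sketched as a shaped reward. This is a minimal illustrative sketch, not the paper's actual formulation: the function name, inputs, and weights are all assumptions.

```python
# Hypothetical sketch of a shaped reward that balances a per-shot quality
# signal (e.g., smash effectiveness) with the eventual rally outcome.
# The weights w_short/w_long are illustrative assumptions.

def rally_reward(shot_quality: float, rally_won: bool,
                 w_short: float = 0.3, w_long: float = 0.7) -> float:
    """Blend a per-shot quality score in [0, 1] with the rally result (+1/-1)."""
    long_term = 1.0 if rally_won else -1.0
    return w_short * shot_quality + w_long * long_term
```

A PPO agent trained against such a reward is pushed toward shots that are both locally effective and conducive to winning the rally; the weight split controls that trade-off.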
Key performance metrics include a Player Skill Score (PSS)—a composite AI-generated rating—and
policy convergence speed, on which PPO demonstrates superior stability and adaptability. Additionally,
the system enables opponent modeling and personalized training recommendations by simulating
adversarial strategies. Results show that the PPO-driven system outperforms traditional evaluation
methods in accuracy and granularity, as verified by coach assessments. This work bridges sports
science and AI, offering a scalable, objective tool for badminton performance optimization, with
potential extensions to other racket sports.
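A composite rating like the Player Skill Score could be realized as a weighted average of normalized per-skill metrics. The sketch below is an assumption for illustration only; the abstract does not specify the actual PSS formula, metric names, or weights.

```python
# Hypothetical sketch of a composite Player Skill Score (PSS):
# a weighted average of per-metric scores in [0, 1], scaled to 0-100.
# Metric names and weights are illustrative assumptions.

def player_skill_score(metrics: dict[str, float],
                       weights: dict[str, float]) -> float:
    """Return a 0-100 composite score from weighted, normalized metrics."""
    total_w = sum(weights.values())
    score = sum(weights[k] * metrics[k] for k in weights) / total_w
    return round(100 * score, 1)

pss = player_skill_score(
    {"shot_selection": 0.8, "footwork": 0.6, "smash_effectiveness": 0.7},
    {"shot_selection": 0.4, "footwork": 0.3, "smash_effectiveness": 0.3},
)
```

In practice the per-metric scores would come from the learned PPO value estimates and tracked match statistics rather than hand-entered values.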