PoseAdapt: Sustainable Human Pose Estimation via Continual Learning Benchmarks and Toolkit

Muhammad Saif Ullah Khan, Didier Stricker
In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV 2026), March 6-10, 2026, Tucson, AZ, USA. IEEE, 2/2026.

Abstract:
Human pose estimators are typically retrained from scratch or naively fine-tuned whenever keypoint sets, sensing modalities, or deployment domains change, an inefficient, compute-intensive practice that rarely matches field constraints. We present PoseAdapt, an open-source framework and benchmark suite for continual pose model adaptation. PoseAdapt defines domain-incremental and class-incremental tracks that simulate realistic changes in density, lighting, and sensing modality, as well as skeleton growth. The toolkit supports two workflows: (i) Strategy Benchmarking, which lets researchers implement continual learning (CL) methods as plugins and evaluate them under standardized protocols; and (ii) Model Adaptation, which allows practitioners to adapt strong pretrained models to new tasks with minimal supervision. We evaluate representative regularization-based methods in single-step and sequential settings. The benchmarks enforce a fixed lightweight backbone, no access to past data, and tight per-step budgets, isolating the effect of the adaptation strategy and highlighting the difficulty of maintaining accuracy under strict resource limits. PoseAdapt connects modern CL techniques with practical pose estimation needs, enabling adaptable models that improve over time without repeated full retraining.
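
The Strategy Benchmarking workflow describes CL methods implemented as plugins. The minimal Python sketch below illustrates how a regularization-based strategy could slot into a per-step adaptation loop; the class and method names (EWCStrategy, before_task, penalty) and the simplified L2-anchor penalty are illustrative assumptions, not PoseAdapt's actual API.

    # Hypothetical sketch only: PoseAdapt's real plugin interface is not shown in
    # the abstract, so all names here are illustrative assumptions.
    import torch
    import torch.nn as nn


    class EWCStrategy:
        """Toy L2-to-previous-weights regularizer (a simplified, EWC-style penalty)
        showing how a regularization-based CL method could plug into a
        strategy-benchmarking loop without access to past data."""

        def __init__(self, model: nn.Module, strength: float = 100.0):
            self.model = model
            self.strength = strength
            self.anchor = None  # frozen parameter copy from the previous step

        def before_task(self):
            # Snapshot current parameters before adapting to the new domain/skeleton.
            self.anchor = {n: p.detach().clone() for n, p in self.model.named_parameters()}

        def penalty(self) -> torch.Tensor:
            # Penalize drift from the anchored parameters to preserve old-task accuracy.
            if self.anchor is None:
                return torch.tensor(0.0)
            return sum(((p - self.anchor[n]) ** 2).sum()
                       for n, p in self.model.named_parameters())


    # Usage in a single adaptation step (the model and data loader are placeholders):
    model = nn.Sequential(nn.Conv2d(3, 17, 1))   # stand-in for a lightweight pose head
    strategy = EWCStrategy(model, strength=50.0)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    strategy.before_task()
    for images, heatmaps in []:                  # replace [] with the new-domain loader
        loss = nn.functional.mse_loss(model(images), heatmaps) \
               + strategy.strength * strategy.penalty()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Anchoring to the previous step's weights requires storing no past samples, which is consistent with the benchmark's no-access-to-past-data constraint described above.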