Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark
for Lifelong Crowd Counting

Jiaqi Gao (1), Jingqi Li (1), Hongming Shan (1,2), Yanyun Qu (3), James Z. Wang (4), Fei-Yue Wang (5), Junping Zhang (1)
(1) Fudan University, China
(2) Shanghai Center for Brain Science and Brain-inspired Technology, China
(3) Xiamen University, China
(4) The Pennsylvania State University, USA
(5) Chinese Academy of Sciences, China
Abstract:

Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system must be able to learn continuously from incoming data in new domains encountered in real-world scenarios, rather than fitting a single domain only. Off-the-shelf methods have several drawbacks when handling multiple domains: (1) performance on old domains degrades, sometimes dramatically, after training on images from new domains, because the intrinsic data distributions of the domains differ, a phenomenon known as catastrophic forgetting; (2) a model well trained on one domain performs poorly on other, unseen domains because of domain shift; and (3) storage overhead grows linearly, whether all data are mixed for training or a separate model is trained for each new domain as it becomes available. To overcome these issues, we investigate a new crowd counting task under an incremental-domain training setting, called lifelong crowd counting. Its goal is to alleviate catastrophic forgetting and improve generalization ability using a single model updated over incrementally arriving domains. Specifically, we propose a self-distillation learning framework as a benchmark, Forget Less, Count Better (FLCB), for lifelong crowd counting, which helps the model sustainably leverage previously learned knowledge for better counting and mitigates forgetting when new data arrive. In addition, a new quantitative metric, normalized backward transfer (nBwT), is developed to evaluate the degree of forgetting during the lifelong learning process. Extensive experimental results demonstrate the superiority of our proposed benchmark in terms of both a low degree of catastrophic forgetting and strong generalization ability.
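The sketch below illustrates the two ingredients named in the abstract: a self-distillation term that regularizes the current model toward a frozen copy trained on earlier domains, and a normalized backward-transfer score computed from per-domain errors. The loss weighting (alpha), the choice of distillation target, and the exact nBwT formula are illustrative assumptions, not the paper's definitions.

```python
# Minimal PyTorch-style sketch; the distillation form and nBwT formula
# below are assumptions for illustration, not the paper's exact method.
import torch
import torch.nn.functional as F


def self_distillation_loss(model, old_model, images, gt_density, alpha=0.5):
    """Counting loss on the new domain plus a distillation term that keeps
    the current model close to a frozen copy trained on earlier domains."""
    pred = model(images)                       # current predicted density map
    with torch.no_grad():
        old_pred = old_model(images)           # frozen "teacher" prediction
    count_loss = F.mse_loss(pred, gt_density)  # fit new-domain labels
    distill_loss = F.mse_loss(pred, old_pred)  # retain old-domain knowledge
    return count_loss + alpha * distill_loss   # alpha is a hypothetical weight


def normalized_backward_transfer(mae):
    """mae[t][i]: MAE on domain i after training through domain t (i <= t).
    Here nBwT is taken as the MAE increase on each old domain after the final
    stage, normalized by that domain's original MAE (an assumed definition)."""
    T = len(mae)
    terms = [(mae[T - 1][i] - mae[i][i]) / mae[i][i] for i in range(T - 1)]
    return sum(terms) / len(terms)
```

In practice, `old_model` would be a frozen copy of the counting network taken just before training on the newly arrived domain (e.g., `copy.deepcopy(model).eval()`), and `mae` would be filled in by evaluating the model on every seen domain after each training stage; a smaller nBwT then indicates less forgetting.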


Full Paper
(PDF, 0.4MB)


Citation: Jiaqi Gao, Jingqi Li, Hongming Shan, Yanyun Qu, James Z. Wang, Fei-Yue Wang and Junping Zhang, "Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting," Frontiers of Information Technology & Electronic Engineering, vol. 24, no. 2, pp. 187-202, 2023.

© 2023 Springer and Zhejiang University Press. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from Zhejiang University Press.

Last Modified: March 21, 2023