Multiple Teacher Model for Continual Test-Time Domain Adaptation

Publisher:
SPRINGER-VERLAG SINGAPORE PTE LTD
Publication Type:
Chapter
Citation:
AI 2023: Advances in Artificial Intelligence, 2024, 14471 LNAI, pp. 304-314
Issue Date:
2024-01-01
Filename:
Multiple Teacher Model for Continual Test-time Domain Adaptation.pdf (Published version, Adobe PDF, 847.12 kB)
Abstract:
Test-time adaptation (TTA) without access to the source data provides a practical means of addressing distribution shifts in test data by adjusting pre-trained models during the testing phase. However, previous TTA methods typically assume a static, independent target domain, whereas in practice the target domain changes over time. Applying previous TTA methods to long-term adaptation often causes error accumulation or catastrophic forgetting, because they rely on the capability of a single model, which leads to performance degradation. To address these challenges, we propose a multiple teacher model approach (MTA) for continual test-time domain adaptation. First, we reduce error accumulation and leverage the robustness of multiple models by using a weighted and averaged multiple-teacher model that provides pseudo-labels for more accurate predictions. Then, we mitigate catastrophic forgetting by logging mutation gradients and randomly restoring some parameters to the weights of the pre-trained model. Our comprehensive experiments demonstrate that MTA outperforms other state-of-the-art methods in continual test-time adaptation.
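The abstract describes two components: weighted-and-averaged multiple-teacher pseudo-labeling, and random restoration of some parameters to the pre-trained weights. The sketch below is a minimal illustration of those two ideas only, under assumptions not stated in the record: teachers are copies of the pre-trained model updated by exponential moving average, pseudo-labels are a weighted average of teacher softmax outputs, and restoration resets a small random fraction of weights each step. The gradient-logging detail is not modeled, and all function names (e.g. `weighted_pseudo_labels`, `stochastic_restore`) are hypothetical, not the authors' code.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_teachers(student: nn.Module, num_teachers: int = 3):
    """Create frozen copies of the pre-trained model to act as teachers (assumed setup)."""
    teachers = [copy.deepcopy(student) for _ in range(num_teachers)]
    for t in teachers:
        for p in t.parameters():
            p.requires_grad_(False)
    return teachers


@torch.no_grad()
def weighted_pseudo_labels(teachers, x, weights):
    """Weighted average of teacher softmax outputs used as pseudo-labels."""
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    probs = torch.stack([F.softmax(t(x), dim=1) for t in teachers])  # (T, B, C)
    return (w.view(-1, 1, 1) * probs).sum(dim=0)                     # (B, C)


@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Exponential moving average update of one teacher from the student (assumption)."""
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp, alpha=1.0 - momentum)


@torch.no_grad()
def stochastic_restore(student, source_state, restore_prob=0.01):
    """Randomly reset a small fraction of weights to the pre-trained (source) values."""
    for name, p in student.named_parameters():
        mask = (torch.rand_like(p) < restore_prob).float()
        p.copy_(mask * source_state[name].to(p.device) + (1.0 - mask) * p)


def adapt_step(student, teachers, teacher_weights, source_state, optimizer, x):
    """One continual test-time adaptation step on an unlabeled test batch x."""
    pseudo = weighted_pseudo_labels(teachers, x, teacher_weights)
    log_probs = F.log_softmax(student(x), dim=1)
    loss = F.kl_div(log_probs, pseudo, reduction="batchmean")  # consistency to pseudo-labels

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    for t in teachers:
        ema_update(t, student)
    stochastic_restore(student, source_state)
    return loss.item()
```

A typical usage, still under the same assumptions, would copy the pre-trained `state_dict` as `source_state` before adaptation and then call `adapt_step` once per incoming test batch as the target domain drifts.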