TrojanTime: Backdoor Attacks on Time Series Classification

Date

2025

Authors

Dong, C.
Sun, Z.
Bai, G.
Piao, S.
Chen, W.
Zhang, W.E.

Type

Conference paper

Citation

Proceedings, Part IV of the 29th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD 2025), as published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence), 2025, vol.15873, pp.154-166

Statement of Responsibility

Chang Dong, Zechao Sun, Guangdong Bai, Shuying Piao, Weitong Chen, and Wei Emma Zhang

Conference Name

29th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) (10 Jun 2025 - 13 Jun 2025 : Sydney, NSW, Australia)

Abstract

Time Series Classification (TSC) is highly vulnerable to backdoor attacks, posing significant security threats. Existing methods primarily focus on data poisoning during the training phase, designing sophisticated triggers to improve stealthiness and attack success rate (ASR). However, in practical scenarios, attackers often face restrictions in accessing training data. Moreover, when data is inaccessible, it is challenging for the model to maintain generalization ability on clean test data while remaining vulnerable to poisoned inputs. To address these challenges, we propose TrojanTime, a novel two-step training algorithm. In the first stage, we generate a pseudo-dataset from an external arbitrary dataset through targeted adversarial attacks. The clean model is then continually trained on this pseudo-dataset and its poisoned version. To preserve generalization ability, the second stage employs a carefully designed training strategy that combines logits alignment and batch norm freezing. We evaluate TrojanTime using five types of triggers across four TSC architectures on UCR benchmark datasets from diverse domains. The results demonstrate the effectiveness of TrojanTime in executing backdoor attacks while maintaining clean accuracy. Finally, to mitigate this threat, we propose a defensive unlearning strategy that effectively reduces the ASR while preserving clean accuracy.
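The second-stage objective described above (keep clean-input predictions aligned with the original model while forcing triggered inputs to the attacker's target class) can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the KL-based alignment term, the cross-entropy attack term, and the weighting `lam` are assumptions for exposition, and batch-norm freezing would be handled separately in the training framework (e.g. by keeping normalization layers in inference mode).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def logits_alignment_loss(student_logits, teacher_logits):
    # KL(teacher || student): keeps the backdoored model's clean-input
    # predictions close to those of the frozen clean model.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)))

def cross_entropy(logits, labels):
    # Standard cross-entropy against integer class labels.
    q = softmax(logits)
    return float(np.mean(-np.log(q[np.arange(len(labels)), labels] + 1e-12)))

def trojan_objective(student_clean, teacher_clean, student_poison, target_label, lam=1.0):
    # Combined second-stage objective: alignment on clean pseudo-data plus
    # an attack term pushing triggered inputs to the target class.
    n = len(student_poison)
    align = logits_alignment_loss(student_clean, teacher_clean)
    attack = cross_entropy(student_poison, np.full(n, target_label))
    return align + lam * attack
```

With identical clean logits the alignment term vanishes, so the objective reduces to how well triggered inputs hit the target class; minimizing both terms jointly is what lets the attack succeed without degrading clean accuracy.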

Rights

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025
