Understanding Why Large Language Models Can Be Ineffective in Time Series Analysis: The Impact of Modality Alignment
Date
2025
Authors
Zheng, L.N.
Dong, C.
Zhang, W.E.
Yue, L.
Xu, M.
Maennel, O.
Chen, W.
Type
Conference paper
Citation
Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining, V.2 (KDD 2025), 2025, vol.2, pp.4026-4037
Statement of Responsibility
Liangwei Nathan Zheng, Chang Dong, Wei Emma Zhang, Lin Yue, Miao Xu, Olaf Maennel, Weitong Chen
Conference Name
31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) (3 Aug 2025 - 7 Aug 2025 : Toronto, ON, Canada)
Abstract
Large Language Models (LLMs) have demonstrated impressive performance in time series analysis and appear to capture temporal relationships better than traditional transformer-based approaches. However, since LLMs are not designed for time series tasks, simpler models, such as linear regression, can often achieve comparable performance with far less complexity. In this study, we perform extensive experiments to assess the effectiveness of applying LLMs to key time series tasks, including forecasting, classification, imputation, and anomaly detection. We compare the performance of LLMs against simpler baseline models, such as single-layer linear models and randomly initialized LLMs. Our results reveal that LLMs offer minimal advantages for these core time series tasks and may even distort the temporal structure of the data. In contrast, simpler models consistently outperform LLMs while requiring far fewer parameters. Furthermore, we analyze existing reprogramming techniques and show, through data manifold analysis, that these methods fail to effectively align time series data with language and exhibit "pseudo-alignment" behavior in embedding space. Our findings suggest that the performance of LLM-based methods in time series tasks arises from the intrinsic characteristics and structure of time series data, rather than from any meaningful alignment with the language model architecture. We release the code for our experiments here: https://github.com/IcurasLW/Official-Repository_Understanding_LLM_for_Time_Series_Analysis.git
Rights
© 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.