Trainable Hard Negative Examples in Contrastive Learning for Unsupervised Abstractive Summarization
dc.contributor.author | Zhuang, H. | |
dc.contributor.author | Zhang, W.E. | |
dc.contributor.author | Dong, C.G. | |
dc.contributor.author | Yang, J. | |
dc.contributor.author | Sheng, Q.Z. | |
dc.contributor.conference | Conference of the European Chapter of the Association for Computational Linguistics (EACL) (17 Mar 2024 - 22 Mar 2024 : St. Julian’s, Malta) | |
dc.contributor.editor | Purver, M. | |
dc.contributor.editor | Graham, Y. | |
dc.date.issued | 2024 | |
dc.description.abstract | Contrastive learning has demonstrated promising results in unsupervised abstractive summarization. However, existing methods rely on manually crafted negative examples, demanding substantial human effort and domain knowledge. Moreover, these human-generated negative examples may be of poor quality and lack adaptability during model training. To address these issues, we propose a novel approach that learns trainable negative examples for contrastive learning in unsupervised abstractive summarization, eliminating the need for manual negative example design. Our framework introduces an adversarial optimization process between a negative example network and a representation network (comprising the summarizer and encoders). The negative example network is trained to synthesize hard negative examples that are close to the positive examples, driving the representation network to improve the quality of the generated summaries. We evaluate our method on two benchmark datasets for unsupervised abstractive summarization and observe significant performance improvements over strong baseline models. | |
dc.description.statementofresponsibility | Haojie Zhuang, Wei Emma Zhang, Chang George Dong, Jian Yang, Quan Z. Sheng | |
dc.identifier.citation | Findings of the Association for Computational Linguistics: EACL 2024, 2024 / Purver, M., Graham, Y. (ed./s), pp.1589-1600 | |
dc.identifier.isbn | 9798891760936 | |
dc.identifier.orcid | Zhuang, H. [0000-0003-4387-6347] | |
dc.identifier.orcid | Zhang, W.E. [0000-0002-0406-5974] | |
dc.identifier.orcid | Dong, C.G. [0009-0005-1495-6534] | |
dc.identifier.uri | https://hdl.handle.net/2440/145993 | |
dc.language.iso | en | |
dc.publisher | Association for Computational Linguistics | |
dc.publisher.place | Online | |
dc.relation.grant | http://purl.org/au-research/grants/arc/IE230100119 | |
dc.rights | © 2024 Association for Computational Linguistics | |
dc.source.uri | https://aclanthology.org/2024.findings-eacl.110/ | |
dc.title | Trainable Hard Negative Examples in Contrastive Learning for Unsupervised Abstractive Summarization | |
dc.type | Conference paper | |
pubs.publication-status | Published |