Abstract
Language evolves over time in many ways relevant to natural language processing tasks. For example, recent occurrences of the tokens 'BERT' and 'ELMO' in publications refer to neural network architectures rather than persons. This type of temporal signal is typically overlooked, but is important if one aims to deploy a machine learning model over an extended period of time. In particular, language evolution causes data drift between time-steps in sequential decision-making tasks. Examples of such tasks include prediction of paper acceptance for yearly conferences (regular intervals) or author stance prediction for rumours on Twitter (irregular intervals). Inspired by successes in computer vision, we tackle data drift by sequentially aligning learned representations. We evaluate on three challenging tasks varying in terms of time-scales, linguistic units, and domains. On these tasks, our method outperforms several strong baselines, including one trained on all available data. We argue that, due to its low computational expense, sequential alignment is a practical solution to dealing with language evolution.
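The alignment idea the abstract refers to can be illustrated with classical subspace alignment from domain adaptation: project source and target data onto their principal subspaces, then learn a linear map between the two bases. The sketch below is a minimal, illustrative NumPy version of that generic technique, not the authors' implementation; the function names, toy data, and subspace dimension `d` are assumptions for illustration. In the sequential setting of the paper, one would apply such an alignment between consecutive time-steps.

```python
import numpy as np

def pca_basis(X, d):
    # Top-d principal directions of the centered data (columns of the basis).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:d].T  # shape: (n_features, d)

def subspace_align(Xs, Xt, d=2):
    # Illustrative subspace alignment: map the source basis onto the
    # target basis via M = S^T T, then project both datasets.
    S = pca_basis(Xs, d)      # source subspace basis
    T = pca_basis(Xt, d)      # target subspace basis
    M = S.T @ T               # linear alignment matrix
    Xs_aligned = Xs @ S @ M   # source features in the target-aligned subspace
    Xt_proj = Xt @ T          # target features in their own subspace
    return Xs_aligned, Xt_proj

# Toy drift: the "later" data is a rotated copy of the "earlier" data.
rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 5))
theta = 0.5
R = np.eye(5)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
Xt = Xs @ R.T
Xs_a, Xt_p = subspace_align(Xs, Xt, d=2)
```

A downstream classifier would then be trained on `Xs_a` and evaluated on `Xt_p`. The appeal noted in the abstract — low computational expense — comes from the fact that alignment here is just two truncated SVDs and a small matrix product.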
Original language | English |
---|---|
Title of host publication | AAAI Conference on Artificial Intelligence |
Publisher | AAAI Press |
Pages | 7440-7447 |
Number of pages | 8 |
ISBN (Print) | 978-1-57735-835-0 |
DOIs | |
Publication status | Published - 11 Nov 2019 |
Event | 34th AAAI Conference on Artificial Intelligence, AAAI 2020 - Hilton New York Midtown, New York, United States. Duration: 7 Feb 2020 → 12 Feb 2020. Conference number: 34. https://aaai.org/Conferences/AAAI-20/ , https://aaai.org/Conferences/AAAI-20/aaai20call/ |
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
Number | 5 |
Volume | 34 |
Conference
Conference | 34th AAAI conference on Artificial Intelligence, AAAI 2020 |
---|---|
Abbreviated title | AAAI 2020 |
Country/Territory | United States |
City | New York |
Period | 7/02/20 → 12/02/20 |
Internet address | |
Keywords
- Natural language processing (NLP)
- Deep learning
- Domain adaptation
- Subspace alignment