Abstract
It is becoming increasingly important to handle large amounts of data
efficiently, as anyone may need or generate large volumes of information
at any given time. However, quickly distinguishing relevant from
non-relevant information, and responding adequately to newly obtained
data of interest, remain cumbersome tasks. A great deal of research
aimed at supporting this growing need for information by means of
Natural Language Processing (NLP) has therefore been conducted over the
last decades. This paper reviews the state of the art in information
extraction from text. A distinction is made between statistics-based,
pattern-based, and hybrid approaches to NLP. It is concluded that which
method suits best depends on the user's needs, as each approach to
natural language processing has its own advantages and disadvantages.
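The abstract gives no implementation details; purely as an illustrative sketch (not taken from the paper), the toy example below contrasts a pattern-based extraction rule with a simple frequency-based, statistics-oriented ranking. The example text, the regex, and the stopword list are all hypothetical.

```python
import re
from collections import Counter

TEXT = (
    "The 10th Dutch-Belgian Information Retrieval Workshop was held on "
    "25 January 2010 in Nijmegen. The workshop covered information "
    "extraction from text, among other information retrieval topics."
)

# Pattern-based: a hand-written rule (a regex for dates such as "25 January 2010")
# extracts exactly the structures it was designed for and nothing else.
DATE_PATTERN = re.compile(
    r"\b\d{1,2} (?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December) \d{4}\b"
)
print("pattern-based matches:", DATE_PATTERN.findall(TEXT))

# Statistics-based (toy version): rank terms by frequency instead of relying
# on hand-written rules; frequent content words surface as candidate terms.
tokens = re.findall(r"[a-z]+", TEXT.lower())
stopwords = {"the", "was", "on", "in", "from", "other", "among", "of"}
counts = Counter(t for t in tokens if t not in stopwords)
print("statistics-based candidates:", counts.most_common(3))
```

The sketch only illustrates the trade-off noted in the abstract: hand-crafted patterns are precise but narrow, while statistical ranking is broader but noisier; hybrid approaches combine the two.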
Original language | English
---|---
Title of host publication | Proceedings of the 10th Dutch-Belgian Information Retrieval Workshop (DIR 2010), January 25, 2010, Nijmegen, the Netherlands
Place of Publication | Nijmegen
Publisher | Radboud Universiteit Nijmegen
Pages | 69-70
Publication status | Published - 2010
Event | The 10th Dutch-Belgian Information Retrieval Workshop (DIR 2010), 25 Jan 2010 → 25 Jan 2010
Conference
Conference | The 10th Dutch-Belgian Information Retrieval Workshop (DIR 2010)
---|---
Period | 25/01/10 → 25/01/10