Text summarization using a trainable summarizer and latent semantic analysis
Authors: Jen-Yuan Yeh; Hao-Ren Ke; Wei-Pang Yang; I-Heng Meng
- Book ID
- 113663432
- Publisher
- Elsevier Science
- Year
- 2005
- Language
- English
- File size
- 529 KB
- Volume
- 41
- Category
- Article
- ISSN
- 0306-4573
Synopsis
This paper proposes two approaches to text summarization: a modified corpus-based approach (MCBA) and an LSA-based T.R.M. approach (LSA+T.R.M.). The first is a trainable summarizer that scores sentences on several features, including position, positive keyword, negative keyword, centrality, and resemblance to the title, to generate summaries. Two new ideas are exploited: (1) sentence positions are ranked to emphasize the significance of different sentence positions, and (2) the score function is trained by a genetic algorithm (GA) to obtain a suitable combination of feature weights. The second approach uses latent semantic analysis (LSA) to derive the semantic matrix of a document or a corpus, and uses the resulting semantic sentence representations to construct a semantic text relationship map. We evaluate LSA+T.R.M. both on single documents and at the corpus level to investigate the capability of LSA for text summarization. The two approaches were measured at several compression rates on a corpus of 100 political articles. At a compression rate of 30%, the average f-measure was 49% for MCBA, 52% for MCBA+GA, and 44% and 40% for LSA+T.R.M. at the single-document and corpus levels, respectively.
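The LSA step described above can be sketched in a few lines: build a term-by-sentence matrix, apply a truncated SVD to obtain reduced "semantic" sentence vectors, and compute pairwise cosine similarities as a simple text relationship map. This is a minimal illustration with NumPy, not the paper's implementation; the toy sentences, the dimension k, and the raw count weighting are all assumptions made for the example.

```python
import numpy as np

# Toy corpus (illustrative only, not from the paper).
sentences = [
    "the government passed the new budget",
    "the budget vote divided the parliament",
    "rain is expected over the weekend",
]

# Bag-of-words term-by-sentence matrix (rows: terms, columns: sentences).
vocab = sorted({w for s in sentences for w in s.split()})
A = np.zeros((len(vocab), len(sentences)))
for j, s in enumerate(sentences):
    for w in s.split():
        A[vocab.index(w), j] += 1

# Truncated SVD: keep k latent dimensions as the semantic space.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
sent_vecs = (np.diag(S[:k]) @ Vt[:k]).T  # one k-dim vector per sentence

# Pairwise cosine similarities form a simple text relationship map;
# a summarizer can then pick the most "central" sentences from it.
unit = sent_vecs / np.linalg.norm(sent_vecs, axis=1, keepdims=True)
sim = unit @ unit.T
print(np.round(sim, 2))
```

The two budget-related sentences end up closer to each other in the latent space than either is to the weather sentence, which is the property the text relationship map exploits.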
SIMILAR VOLUMES
Abstract: Latent semantic analysis has been used for several years to improve the performance of document library searches. We show that latent semantic analysis, augmented with a Part-of-Speech Tagger, may be an effective algorithm for classifying a textual document as well. Using Brill's Part-…