Abstractive Summarization - Astrophysics Data System

Summarized by Plex Scholar
Last Updated: 05 November 2022

FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness

Despite being able to produce fluent and grammatical text, current Seq2Seq summarization models still suffer from unfaithful generation. We examine the faithfulness of existing systems from a new perspective, factual robustness: the ability to correctly generate factual information in the presence of adversarial, unfaithful information. We first measure a model's factual robustness by its success rate in defending against adversarial attacks when generating factual information, and then propose FRSUM, a training strategy that teaches the model to defend against such perturbations. Extensive automatic and human evaluation shows that FRSUM consistently improves the faithfulness of various Seq2Seq models, such as T5 and BART.
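
A minimal sketch of the factual-robustness probe described above: compare the likelihood a seq2seq model assigns to a factual summary against an adversarially perturbed one. The checkpoint name and example texts are placeholders, not taken from the paper.

```python
# Sketch: probe factual robustness by checking whether a seq2seq model
# prefers a factual summary over an adversarially perturbed one.
# Checkpoint and texts are placeholders, not taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).eval()

def sequence_logprob(source: str, target: str) -> float:
    """Total log-probability the model assigns to `target` given `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=target, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    return -out.loss.item() * labels.size(1)  # loss is mean token NLL

source = "The probe was launched in 1977 and reached Jupiter in 1979."
factual = "The probe reached Jupiter in 1979."
adversarial = "The probe reached Jupiter in 1989."

# "Defense success": the factual summary outscores the adversarial one.
print(sequence_logprob(source, factual) > sequence_logprob(source, adversarial))
```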

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221100294W/abstract


How Far are We from Robust Long Abstractive Summarization?

We use fine-grained human annotations to analyze long-document abstractive summarization systems, with the goal of delivering both relevant and factually consistent summaries. According to the long-document evaluation results, ROUGE is the best of the tested metrics at assessing the relevance of a summary. We hope that our annotated long-document dataset will help develop metrics for a wider variety of summarization settings.
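
For readers who want to reproduce the kind of relevance scoring mentioned above, here is a quick example of computing ROUGE with the `rouge-score` package; the texts are made up.

```python
# Score a candidate summary against a reference with ROUGE
# (pip install rouge-score). Texts are made-up examples.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "The telescope detected water vapor in the exoplanet's atmosphere."
candidate = "Water vapor was detected in the atmosphere of the exoplanet."

for name, s in scorer.score(reference, candidate).items():
    print(f"{name}: precision={s.precision:.2f} recall={s.recall:.2f} f1={s.fmeasure:.2f}")
```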

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221016732K/abstract


Improving abstractive summarization with energy-based re-ranking

Automated evaluation metrics such as CTC scores have recently been introduced, showing higher correlation with human judgements than traditional lexical-overlap metrics such as ROUGE. We propose an energy-based model that learns to re-rank summaries according to one or a combination of these metrics. However, human evaluation suggests that the re-ranking approach should be applied with caution to highly abstractive summaries, as the available metrics are not yet suitable for this purpose.
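
A hedged sketch of the re-ranking step: score each candidate summary with one or more metrics and sort by a weighted combination. The toy metric functions and weights below are placeholders; the paper's learned energy model is not reproduced here.

```python
# Sketch: re-rank candidate summaries by a weighted combination of metric
# scores. A real system would plug in learned metrics (e.g. CTC scores);
# the toy metrics and weights below are placeholders.
from typing import Callable, List

def rerank(candidates: List[str],
           metrics: List[Callable[[str], float]],
           weights: List[float]) -> List[str]:
    def combined(summary: str) -> float:
        return sum(w * m(summary) for w, m in zip(weights, metrics))
    # Higher combined score ranks first; an energy-based re-ranker would
    # instead learn an ordering where low energy marks the preferred summary.
    return sorted(candidates, key=combined, reverse=True)

brevity = lambda s: 1.0 / (1 + len(s.split()))               # shorter is better
diversity = lambda s: len(set(s.split())) / len(s.split())   # fewer repeats

candidates = ["a longer rambling candidate summary of the article article",
              "a concise candidate summary"]
print(rerank(candidates, [brevity, diversity], [0.5, 0.5]))
```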

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221015553P/abstract


Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling

Abstractive summaries that contain factual errors or hallucinated information are unreliable. However, creating non-factual summaries with heuristics does not always correspond to actual model errors. We propose FactEdit, a post-editing model that corrects factual errors in generated summaries via language-model infilling. Quantitative and qualitative experiments on two common summarization datasets, CNN/DM and XSum, show that our approach significantly outperforms prior methods in correcting erroneous summaries: FactEdit raises factuality scores on both datasets on average, producing more factual summaries while maintaining competitive summarization quality.
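
The correction-by-infilling idea can be illustrated with an off-the-shelf masked language model: mask a suspect span in the summary, prepend the source text, and let the model fill the span back in. The checkpoint and texts are illustrative; this is not the FactEdit model.

```python
# Sketch of correction-by-infilling: mask a suspect span in the summary and
# let a masked LM, conditioned on the source text, fill it back in.
# Illustrates the general idea only; this is not the FactEdit model.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")  # placeholder checkpoint

document = "The rover landed on Mars in February 2021 inside Jezero crater."
summary_with_mask = "The rover landed on Mars in <mask> 2021."

# Condition on the document by concatenating it before the masked summary.
for p in fill(document + " " + summary_with_mask, top_k=3):
    print(p["token_str"], round(p["score"], 3))
```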

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221012378B/abstract


Mutual Information Alleviates Hallucinations in Abstractive Summarization

We identify a simple criterion under which models are more likely to assign high probability to hallucinated content during generation: high model uncertainty. We therefore propose a decoding strategy that, when the model is uncertain, switches to optimizing for the pointwise mutual information of the source and target token rather than the likelihood of the target token alone. Results on the XSum dataset show that our method decreases the probability of hallucinated tokens while maintaining the ROUGE and BERTScore of top-performing decoding strategies.
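
The proposed objective is easy to state: score a candidate token by its pointwise mutual information with the source, PMI(x; y_t) = log p(y_t | x, y_<t) - log p(y_t | y_<t), instead of the conditional likelihood alone. A minimal single-step sketch follows, with the unconditional term approximated by running the same model on an empty source (a simplification, not the paper's exact setup).

```python
# Single-step sketch of PMI scoring:
#   PMI(x; y_t) = log p(y_t | x, y_<t) - log p(y_t | y_<t)
# The unconditional term is approximated by running the same model with an
# empty source; checkpoint and special-token handling are simplified.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base").eval()

def next_token_logprobs(source: str, prefix: str) -> torch.Tensor:
    enc = tok(source, return_tensors="pt")
    dec = tok(prefix, return_tensors="pt", add_special_tokens=False)
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       decoder_input_ids=dec.input_ids).logits
    return torch.log_softmax(logits[0, -1], dim=-1)

source = "The comet will return to the inner solar system in 2061."
prefix = "The comet returns in"
conditional = next_token_logprobs(source, prefix)
unconditional = next_token_logprobs("", prefix)

entropy = -(conditional.exp() * conditional).sum()  # high entropy -> switch to PMI
pmi = conditional - unconditional                   # favors source-grounded tokens
print(tok.decode([pmi.argmax().item()]))
```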

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221013210V/abstract


Salience Allocation as Guidance for Abstractive Summarization

Abstractive summarization models typically learn to capture salient information from scratch, implicitly. Because the amount and allocation of salient content differ across documents, it is hard to set a fixed threshold determining which content should be included as guidance. SEASON uses salience allocation to guide abstractive summarization and adapts well to articles of varying abstractiveness. Empirical results on more than one million news articles reveal a natural fifteen-fifty salience split for news article sentences, providing useful insight for composing news articles.
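
One common way to derive the salience signal such a model predicts is to label each source sentence by its overlap with the reference summary. The sketch below buckets sentences into salience degrees this way; the bucketing scheme is illustrative, not the paper's exact allocation.

```python
# Sketch: label each source sentence by ROUGE overlap with the reference and
# bucket sentences into salience degrees, the kind of signal a salience-guided
# summarizer can be trained to predict. Bucketing is illustrative only.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def salience_buckets(sentences, reference, n_buckets=4):
    scores = [scorer.score(reference, s)["rouge1"].recall for s in sentences]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    buckets = [0] * len(sentences)
    for rank, idx in enumerate(ranked):
        buckets[idx] = rank * n_buckets // len(sentences)  # 0 = most salient
    return buckets

article = ["The mission launched on Monday.",
           "Weather delayed the launch twice.",
           "Officials praised the ground crew."]
reference = "The mission launched Monday after weather delays."
print(salience_buckets(article, reference))
```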

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221012330W/abstract


Just ClozE! A Fast and Simple Method for Evaluating the Factual Consistency in Abstractive Summarization

Factual consistency in abstractive summarization has drawn a lot of attention in recent years, and evaluating the factual consistency between a summary and its source document has become a critical and urgent task. In this paper we propose ClozE, a new framework instantiated from a masked language model, which retains good interpretability and brings a significant increase in evaluation speed. The ClozE code and models will be released after the paper is accepted.
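
A sketch of the cloze idea: mask factual spans in the summary, ask a masked language model conditioned on the document to recover them, and take the recovery rate as a consistency signal. This is an illustration of the general recipe, not the released ClozE code.

```python
# Sketch of cloze-style consistency checking: mask factual spans in the
# summary, ask a masked LM conditioned on the document to recover them, and
# score the recovery rate. Single-token spans only, for simplicity; this is
# an illustration of the recipe, not the released ClozE code.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")  # placeholder checkpoint

def cloze_consistency(document, summary, spans):
    hits = 0
    for span in spans:
        masked = summary.replace(span, fill.tokenizer.mask_token, 1)
        top = fill(document + " " + masked, top_k=1)[0]
        hits += top["token_str"].strip().lower() == span.lower()
    return hits / len(spans)

document = "Voyager 1 entered interstellar space in 2012."
summary = "Voyager 1 left the heliosphere in 2012."
print(cloze_consistency(document, summary, spans=["2012"]))
```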

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221002804L/abstract


Towards Improving Faithfulness in Abstractive Summarization

Despite the success of neural abstractive summarization based on pre-trained language models, one unresolved issue is that the generated summaries are not always faithful to the input document. There are two possible causes of this unfaithfulness: the summarization model fails to understand or capture the gist of the input text, and the model over-relies on its language model to generate fluent but inadequate words. For the first problem, we propose using question answering (QA) to examine whether the encoder fully understands the input document and can answer questions about the key details in the input. The QA attention on the proper input words can also be used to stipulate how the decoder should attend to the source. For the second problem, we introduce a max-margin loss defined on the difference between the language model and the summarization model, aiming to prevent the language model's overconfidence.
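
The max-margin idea for the second problem can be written down directly: penalize decoding steps where the bare language model is at least as confident as the summarizer. A small sketch with placeholder log-probabilities follows; the paper's exact formulation may differ.

```python
# Sketch of a max-margin penalty against language-model overconfidence:
# penalize steps where the bare LM is at least as confident as the
# summarizer. The paper's exact formulation may differ.
import torch

def max_margin_loss(summarizer_logp: torch.Tensor,
                    lm_logp: torch.Tensor,
                    margin: float = 0.1) -> torch.Tensor:
    """Both tensors hold per-step log-probs of the gold summary tokens."""
    # Hinge: zero once the summarizer beats the LM by at least `margin`.
    return torch.clamp(margin - (summarizer_logp - lm_logp), min=0).mean()

# Toy per-token log-probabilities for a 4-token summary.
summ = torch.tensor([-0.2, -1.0, -0.7, -0.3])
lm = torch.tensor([-0.5, -0.4, -0.6, -2.0])
print(max_margin_loss(summ, lm))
```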

Source link: https://ui.adsabs.harvard.edu/abs/2022arXiv221001877C/abstract

* Please keep in mind that all text is summarized by machine; we do not bear any responsibility, and you should always check the original source before taking any action
