Abstractive Summarization - Crossref

Summarized by Plex Scholar
Last Updated: 05 December 2022

COVID-19 information retrieval with deep-learning based semantic search, question answering, and abstractive summarization

Abstract: The COVID-19 pandemic has prompted international efforts to understand, track, and mitigate the disease, producing a large corpus of COVID-19 and SARS-CoV-2-related publications across scientific disciplines. Through the COVID-19 Open Research Dataset, over 400,000 coronavirus-related papers were collected during 2020. We present a retrieval system whose retriever combines a deep-learning model that encodes query-level semantics with two keyword-based models that emphasize a query's most important terms. A re-ranker then assigns each document a relevance score, calculating how well each document answers the query and how well the query matches a summary produced by an abstractive summarization module. We evaluate our system on the TREC-COVID information retrieval challenge, showing solid performance across a variety of key information retrieval metrics.
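
The retrieve-then-rerank pipeline described above can be illustrated with a minimal sketch. The TF-IDF stand-in for the keyword models, the random-projection "embedding", and the 0.5/0.5 blend weights are all illustrative assumptions, not the authors' actual models:

```python
# A minimal sketch of the retrieve-then-rerank idea: blend a keyword
# score with a "semantic" score. All models here are toy stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "SARS-CoV-2 transmission dynamics in indoor environments",
    "Clinical outcomes of COVID-19 patients treated with remdesivir",
    "Genomic epidemiology of coronavirus variants",
]
query = "how is SARS-CoV-2 transmitted indoors"

# Keyword-based retriever: TF-IDF stands in for the two keyword models.
vec = TfidfVectorizer().fit(docs + [query])
D, q = vec.transform(docs), vec.transform([query])
keyword_scores = (D @ q.T).toarray().ravel()

# "Semantic" retriever: a stand-in embedding (a real system would use a
# trained deep encoder); here TF-IDF vectors are randomly projected.
rng = np.random.default_rng(0)
P = rng.normal(size=(D.shape[1], 64))

def embed(X):
    E = X.toarray() @ P
    return E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-9)

semantic_scores = embed(D) @ embed(q).ravel()

# Re-ranker: blend the two signals; the equal weights are arbitrary.
final = 0.5 * keyword_scores + 0.5 * semantic_scores
for i in np.argsort(-final):
    print(f"{final[i]:.3f}  {docs[i]}")
```

A production system would replace the stand-in encoder with a trained neural model and add the question-answering and summarization scorers described in the abstract.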

Source link: https://doi.org/10.1038/s41746-021-00437-0


Template-based Abstractive Microblog Opinion Summarization

To support research in this area, we propose the task of microblog opinion summarization and release a dataset of 3,100 gold-standard opinion summaries. The dataset covers summaries of tweets spanning a 2-year period and more topics than any other public Twitter summarization dataset. Summaries are abstractive in nature and were produced by journalists skilled in summarizing news stories, following a template that separates factual information from author opinions.

Source link: https://doi.org/10.1162/tacl_a_00516


Analysis of Abstractive Text Summarization with Deep Learning Technique

In today's era, textual data has grown enormously, and extracting useful information from it powers applications such as document generation, prediction systems, recommendation systems, and language modeling. Native Apache Kafka APIs can be used to populate data lakes, stream data to and from databases, and feed machine learning and analytics pipelines. TensorFlow has grown into a substantial machine learning library for rapidly building complex machine learning workflows.
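
As a concrete illustration of the ingestion pattern mentioned above, here is a minimal sketch using the kafka-python client. The broker address, the "raw-docs" topic name, and the message schema are assumptions for illustration, and a running Kafka broker is required:

```python
# A minimal sketch of streaming documents through Kafka so a downstream
# summarization model can consume them. Assumes a broker at localhost:9092.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("raw-docs", {"id": 1, "text": "Some long article text ..."})
producer.flush()

consumer = KafkaConsumer(
    "raw-docs",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    doc = msg.value
    # A summarization model (e.g., a TensorFlow seq2seq) would consume here.
    print(doc["id"], doc["text"][:40])
    break
```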

Source link: https://doi.org/10.2174/9879815079180122010014


Toward Fact-aware Abstractive Summarization Method Using Joint Learning

Abstract: Abstractive summarization obtains a semantic embedding of the source text to generate summaries, helping users extract accurate information from massive collections of text files. This paper first uses the extractive summarization method Lead-3 to select sentences from the original text as template sentences, improving the factual consistency of the generated text. Starting from the original text and guided by the template sentences, the textual entailment task is then trained jointly with the summarization task, making the encoder entailment-aware. Through parameter sharing, the knowledge of entailment reasoning is embedded in the summarizer's encoder, increasing the model's ability to understand facts.
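
The Lead-3 step is simple enough to sketch directly: take the first three sentences of the source document as template sentences. The regex-based sentence splitter below is a naive assumption; the jointly trained entailment/summarization model is not shown:

```python
# A minimal sketch of Lead-3 extraction: the first three sentences of
# the source text serve as template sentences for the summarizer.
import re

def lead3(text: str) -> list[str]:
    # Split on sentence-ending punctuation followed by whitespace (naive).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return sentences[:3]

article = (
    "The reactor was shut down on Monday. Officials cited a coolant leak. "
    "Repairs are expected to take two weeks. Residents were not evacuated."
)
print(lead3(article))  # the first three sentences become the template
```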

Source link: https://doi.org/10.21203/rs.3.rs-2206382/v1


Fine-tuning and multilingual pre-training for abstractive summarization task for the Arabic language

Our main aim is to develop abstractive summarization models for the Arabic language. Research on abstractive Arabic text summarization had not previously begun, owing to the unavailability of the datasets needed for it. In our previous research, we created the first monolingual corpus in the Arabic language for abstractive text summarization. In this work, we pretrained our own monolingual and trilingual BART models for the Arabic language and fine-tuned them, alongside the mT5 model, for abstractive text summarization using the AraSum corpus. Even with the limited infrastructure at our disposal, the results show that the majority of our models surpassed XL-Sum, which is considered the state of the art for abstractive Arabic text summarization so far. Our corpus will be released to facilitate future research on abstractive Arabic text summarization.
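
For illustration, a minimal sketch of multilingual seq2seq summarization with Hugging Face Transformers follows. The public google/mt5-small checkpoint stands in for the authors' models (which are not assumed to be released here), and without fine-tuning on a corpus such as AraSum it will not produce useful Arabic summaries:

```python
# A minimal sketch of inference with a multilingual seq2seq model.
# "google/mt5-small" is a public checkpoint used purely for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

arabic_text = "..."  # an Arabic article, e.g. from an AraSum-style corpus
inputs = tokenizer(arabic_text, return_tensors="pt",
                   truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```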

Source link: https://doi.org/10.33039/ami.2022.11.002


Faithful to the Original: Fact Aware Neural Abstractive Summarization

Abstractive summarization must rewrite portions of the source text, which can lead models to produce fabricated information. Although most recent abstractive summarization efforts focus on improving informativeness, we note that faithfulness is also a necessary prerequisite for a practical abstractive summarization system. To avoid generating fake facts in a summary, we use open information extraction and dependency parsing technologies to extract actual fact descriptions from the source text. Generation is then conditioned on both the source text and the extracted fact descriptions, forcing the summary to stay consistent with the facts.
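
A rough sketch of the fact-extraction step follows, pulling simple (subject, relation, object) triples from a dependency parse with spaCy. Real systems use full open information extraction, so the single nsubj/dobj rule below is a toy assumption, and the en_core_web_sm model must be installed:

```python
# Extract simple fact triples from a dependency parse.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def fact_triples(text: str):
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ == "VERB":
                subj = [c for c in tok.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
                if subj and obj:
                    triples.append((subj[0].text, tok.lemma_, obj[0].text))
    return triples

print(fact_triples("The committee approved the budget. Engineers built a prototype."))
# e.g. [('committee', 'approve', 'budget'), ('Engineers', 'build', 'prototype')]
```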

Source link: https://doi.org/10.1609/aaai.v32i1.11912


Generative Adversarial Network for Abstractive Text Summarization

We build a discriminator that attempts to distinguish the generated summary from the ground-truth summary; the summary generator is then trained adversarially to fool it.
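
A minimal sketch of the discriminator idea follows: a binary classifier over summary tokens trained to tell generated from ground-truth summaries. The bag-of-embeddings encoder, the sizes, and the random token ids are illustrative assumptions, not the paper's architecture:

```python
# A toy discriminator: classifies a summary (as token ids) as real/fake.
import torch
import torch.nn as nn

class SummaryDiscriminator(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.clf = nn.Sequential(
            nn.Linear(emb_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, token_ids):                 # (batch, seq_len)
        pooled = self.emb(token_ids).mean(dim=1)  # mean-pool embeddings
        return self.clf(pooled).squeeze(-1)       # real/fake logit

disc = SummaryDiscriminator()
real = torch.randint(0, 10000, (4, 20))  # ground-truth summaries (ids)
fake = torch.randint(0, 10000, (4, 20))  # generated summaries (ids)
loss_fn = nn.BCEWithLogitsLoss()
loss = loss_fn(disc(real), torch.ones(4)) + loss_fn(disc(fake), torch.zeros(4))
loss.backward()  # the generator would then be updated adversarially
```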

Source link: https://doi.org/10.1609/aaai.v32i1.12141


Abstractive Turkish Text Summarization and Cross-Lingual Summarization Using Transformer

Abstractive summarization aims to comprehend texts semantically and reconstruct them briefly and succinctly, possibly using words that do not appear in the original text. This chapter explores abstractive Turkish text summarization using a transformer-based attention approach. Three summarization datasets were constructed from publicly available text on various news websites for training abstractive summarization models. The study also presents the first example of cross-lingual English-to-Turkish text summarization.

Source link: https://doi.org/10.4018/978-1-6684-6001-6.ch011


A Multi-Task Learning Framework for Abstractive Text Summarization

The additional text categorization and syntax information proves particularly helpful to the abstractive text summarization model.
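
A toy sketch of the multi-task idea follows: a shared encoder feeds a summarization head plus auxiliary classification and syntax heads, and their losses are combined. All sizes, the random toy tensors, and the 0.3 task weights are illustrative assumptions:

```python
# Multi-task training sketch: one shared encoder, three task heads.
import torch
import torch.nn as nn

d, vocab, n_classes, n_tags = 64, 5000, 10, 17
encoder = nn.GRU(d, d, batch_first=True)
sum_head = nn.Linear(d, vocab)      # next-token logits for the summary
cls_head = nn.Linear(d, n_classes)  # document-category logits
syn_head = nn.Linear(d, n_tags)     # per-token syntax (e.g., POS) logits

x = torch.randn(2, 12, d)           # toy encoded inputs (batch x len x d)
h, _ = encoder(x)                   # shared representation
ce = nn.CrossEntropyLoss()
loss_sum = ce(sum_head(h).flatten(0, 1), torch.randint(0, vocab, (24,)))
loss_cls = ce(cls_head(h.mean(dim=1)), torch.randint(0, n_classes, (2,)))
loss_syn = ce(syn_head(h).flatten(0, 1), torch.randint(0, n_tags, (24,)))

total = loss_sum + 0.3 * loss_cls + 0.3 * loss_syn  # joint objective
total.backward()
```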

Source link: https://doi.org/10.1609/aaai.v33i01.33019987


Abstractive Text Summarization by Incorporating Reader Comments

Traditional sequence-to-sequence models often summarize the wrong part of the document with respect to the main topic. To solve this issue, we propose reader-aware abstractive summary generation, which uses reader comments to help the model produce a summary focused on the main aspects of the document. Unlike traditional abstractive summarization, reader-aware summarization faces two key challenges: comments are informal and noisy, and jointly modeling the news document and the reader comments is difficult. To address these challenges, we design an adversarial learning model named reader-aware summary generator, which consists of four components: a sequence-to-sequence based summary generator; a reader attention module capturing the reader-focused aspects of the document; a supervisor modeling the semantic gap between the generated summary and the reader-focused aspects; and a goal tracker producing the goal for each generation step.
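
To make the composition of the four components concrete, here is a toy sketch of one decoding step. Every module design, dimension, and the cosine-gap supervisor are illustrative assumptions, not the paper's actual architecture:

```python
# Toy wiring of the four components for a single decoding step.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 64
doc = torch.randn(1, 30, d)        # encoded news document (1 x len x d)
comments = torch.randn(1, 10, d)   # encoded reader comments

# (2) Reader attention: pool comments into a reader-focus vector.
focus = comments.mean(dim=1)                       # (1, d)

# (4) Goal tracker: produce a goal vector for the next decoding step.
goal_tracker = nn.GRUCell(d, d)
goal = goal_tracker(focus, doc.mean(dim=1))        # (1, d)

# (1) Seq2seq generator step: attend to the document, conditioned on goal.
attn = F.softmax(doc @ goal.unsqueeze(-1), dim=1)  # (1, 30, 1)
context = (attn * doc).sum(dim=1)                  # (1, d)
step_logits = nn.Linear(d, 5000)(context + goal)   # toy vocab of 5000

# (3) Supervisor: semantic gap between summary-so-far and reader focus.
summary_state = context                            # stand-in summary vector
gap = 1 - F.cosine_similarity(summary_state, focus)
print("semantic gap to minimize:", gap.item())
```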

Source link: https://doi.org/10.1609/aaai.v33i01.33016399

* Please keep in mind that all text is summarized by machine; we do not bear any responsibility, and you should always check the original source before taking any action
