
Conversation Summaries in Google Chat – Google AI Blog


Information overload is a significant challenge for many organizations and individuals today. It can be overwhelming to keep up with incoming chat messages and documents that arrive at our inbox every day. This has been exacerbated by the increase in virtual work, and it remains a challenge as many teams transition to a hybrid work environment with a mixture of those working both virtually and in an office. One solution that can address information overload is summarization. For example, to help users improve their productivity and better manage so much information, we recently launched auto-generated summaries in Google Docs.

Today, we are excited to introduce conversation summaries in Google Chat for messages in Spaces. When these summaries are available, a card with automatically generated summaries is shown as users enter Spaces with unread messages. The card includes a list of summaries for the different topics discussed in Spaces. This feature is enabled by our state-of-the-art abstractive summarization model, Pegasus, which generates useful and concise summaries for chat conversations, and it is currently available to selected premium Google Workspace business customers.

Conversation summaries provide a helpful digest of conversations in Spaces, allowing users to quickly catch up on unread messages and navigate to the most relevant threads.

Conversation Summarization Modeling

The goal of text summarization is to provide helpful and concise summaries for different types of text, such as documents, articles, or spoken conversations. A good summary covers the key points succinctly, and is fluent and grammatically correct. One approach to summarization is to extract key parts from the text and concatenate them together into a summary (i.e., extractive summarization). Another approach is to use natural language generation (NLG) techniques to summarize using novel words and phrases not necessarily present in the original text. This is referred to as abstractive summarization and is considered closer to how a person would typically summarize text. A main challenge with abstractive summarization, however, is that it sometimes struggles to generate accurate and grammatically correct summaries, especially in real-world applications.
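
To make the contrast concrete, here is a minimal, purely illustrative sketch of the extractive approach (not the model used in this work): it scores each sentence by the frequency of its words and stitches the top-scoring sentences together verbatim. An abstractive model such as Pegasus would instead generate new sentences of its own.

```python
import re
from collections import Counter

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Toy extractive summarizer: score sentences by the frequency of
    their words and return the top-scoring ones in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)
```

Because it can only copy existing sentences, an extractive summary of a multi-speaker chat often reads disjointedly, which is one reason the abstractive approach is a better fit for conversations.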

ForumSum Dataset

The majority of abstractive summarization datasets and research focuses on single-speaker text documents, like news and scientific articles, primarily because of the abundance of human-written summaries for such documents. On the other hand, datasets of human-written summaries for other types of text, like chat or multi-speaker conversations, are very limited.

To address this, we created ForumSum, a diverse and high-quality conversation summarization dataset with human-written summaries. The conversations in the dataset are collected from a wide variety of public internet forums, and are cleaned up and filtered to ensure high quality and safe content (more details in the paper).

An example from the ForumSum dataset.

Each utterance in the conversation starts on a new line and contains an author name and a message text separated by a colon. Human annotators are then given detailed instructions to write a 1-3 sentence summary of the conversation. These instructions went through multiple iterations to ensure annotators wrote high quality summaries. We have collected summaries for over six thousand conversations, with an average of more than 6 speakers and 10 utterances per conversation. ForumSum provides quality training data for the conversation summarization problem: it covers the variety of topics, numbers of speakers, and numbers of utterances commonly encountered in a chat application.
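
Given the format described above (one utterance per line, with the author name separated from the message by a colon), a small hypothetical parser for ForumSum-style conversations might look like the following; the type and function names are our own, not part of the released dataset.

```python
from typing import NamedTuple

class Utterance(NamedTuple):
    author: str
    text: str

def parse_conversation(raw: str) -> list[Utterance]:
    """Parse a ForumSum-style conversation: one utterance per line,
    with the author name separated from the message by a colon."""
    utterances = []
    for line in raw.strip().splitlines():
        author, _, text = line.partition(":")
        if text:  # skip lines without an author/message separator
            utterances.append(Utterance(author.strip(), text.strip()))
    return utterances

example = """alice: has anyone tried the new build?
bob: yes, it fixes the login bug
carol: works for me too"""
for u in parse_conversation(example):
    print(u.author, "->", u.text)
```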

Conversation Summarization Model Design

As we have written previously, the Transformer is a popular model architecture for sequence-to-sequence tasks, like abstractive summarization, where the inputs are the document words and the outputs are the summary words. Pegasus combined transformers with self-supervised pre-training customized for abstractive summarization, making it a great model choice for conversation summarization. First, we fine-tune Pegasus on the ForumSum dataset, where the input is the conversation words and the output is the summary words. Second, we use knowledge distillation to distill the Pegasus model into a hybrid architecture of a transformer encoder and a recurrent neural network (RNN) decoder. The resulting model has lower latency and a smaller memory footprint while maintaining quality similar to the Pegasus model.
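
As a rough sketch of what running a Pegasus-style summarizer looks like, the publicly released Pegasus checkpoints on Hugging Face can be used for abstractive summarization. Note that `google/pegasus-xsum` below is a public checkpoint chosen for illustration only; it is neither the ForumSum fine-tuned model nor the distilled hybrid model described in this post.

```python
# Minimal sketch using a public Pegasus checkpoint from Hugging Face.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # illustrative public checkpoint
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

conversation = (
    "alice: has anyone tried the new build?\n"
    "bob: yes, it fixes the login bug\n"
    "carol: works for me too"
)
inputs = tokenizer(conversation, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Fine-tuning on ForumSum would follow the standard sequence-to-sequence training setup, with the conversation as input and the human-written summary as target; the distillation step into the transformer-encoder/RNN-decoder hybrid is not shown here.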

Quality and User Experience

A good summary captures the essence of the conversation while being fluent and grammatically correct. Based on human evaluation and user feedback, we learned that the summarization model generates helpful and accurate summaries most of the time. But occasionally the model generates low quality summaries. After looking into issues reported by users, we found that there are two main types of low quality summaries. The first is misattribution, when the model confuses which person or entity said or performed a certain action. The second is misrepresentation, when the model's generated summary misrepresents or contradicts the chat conversation.

To address low quality summaries and improve the user experience, we have made progress in several areas:

  1. Improving ForumSum: While ForumSum provides a good representation of chat conversations, we noticed certain patterns and language styles in Google Chat conversations that differ from ForumSum, e.g., how users mention other users and the use of abbreviations and special symbols. After exploring examples reported by users, we concluded that these out-of-distribution language patterns contributed to low quality summaries. To address this, we first performed data formatting and clean-ups to reduce mismatches between chat and ForumSum conversations whenever possible. Second, we added more training data to ForumSum to better represent these style mismatches. Together, these changes reduced the number of low quality summaries.
  2. Controlled triggering: To make sure summaries bring the most value to our users, we first have to make sure that the chat conversation is worthy of summarization. For example, we found that there is less value in generating a summary when the user is actively engaged in a conversation and does not have many unread messages, or when the conversation is too short.
  3. Detecting low quality summaries: While the two methods above limited low quality and low value summaries, we still developed methods to detect and abstain from showing such summaries to the user when they are generated. These are a set of heuristics and models that measure the overall quality of summaries and whether they suffer from misattribution or misrepresentation issues (a sketch of a triggering rule and a misattribution check follows this list).
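
For illustration only, since the post does not disclose the actual heuristics or models used, a minimal triggering rule and misattribution check might look like the following; every threshold, name, and stopword list here is hypothetical.

```python
import re

STOPWORDS = {"the", "a", "an", "it", "they", "this"}  # tiny illustrative list

def unknown_names(summary: str, speakers: set[str]) -> set[str]:
    """Hypothetical misattribution check: capitalized tokens in the
    summary that match no speaker in the conversation. A production
    system would also rely on learned quality models."""
    candidates = set(re.findall(r"\b[A-Z][a-z]+\b", summary))
    known = {s.lower() for s in speakers}
    return {c for c in candidates
            if c.lower() not in known and c.lower() not in STOPWORDS}

def should_summarize(unread_count: int, utterance_count: int) -> bool:
    """Hypothetical triggering rule: skip when the user has few unread
    messages or the conversation is too short (thresholds illustrative)."""
    return unread_count >= 5 and utterance_count >= 8

speakers = {"alice", "bob", "carol"}
summary = "Dave confirms that the new build fixes the login bug."
if unknown_names(summary, speakers):
    print("abstain: possible misattribution")  # "Dave" never spoke
```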

Finally, while the hybrid model provided significant performance improvements, the latency to generate summaries was still noticeable to users when they opened Spaces with unread messages. To address this issue, we instead generate and update summaries whenever a message is sent, edited, or deleted. Summaries are then cached ephemerally so that they surface smoothly when users open Spaces with unread messages.
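
A rough sketch of that write-time pattern, with hypothetical names and TTL throughout: summaries are recomputed on every message event and kept in a short-lived cache, so opening a Space only reads a precomputed entry instead of waiting on the model.

```python
import time

def run_summarization_model(conversation: str) -> str:
    # Stand-in for the distilled hybrid model described above.
    return conversation.splitlines()[-1]

class SummaryCache:
    """Hypothetical ephemeral cache: entries are refreshed on every
    message event and expire after a TTL."""

    def __init__(self, ttl_seconds: float = 3600.0):
        self._ttl = ttl_seconds
        self._entries: dict[str, tuple[float, str]] = {}

    def on_message_event(self, space_id: str, conversation: str) -> None:
        # Called whenever a message is sent, edited, or deleted.
        summary = run_summarization_model(conversation)
        self._entries[space_id] = (time.monotonic(), summary)

    def get(self, space_id: str) -> str | None:
        entry = self._entries.get(space_id)
        if entry is None or time.monotonic() - entry[0] > self._ttl:
            return None  # expired or never computed
        return entry[1]

cache = SummaryCache()
cache.on_message_event("space-123", "alice: demo is ready\nbob: shipping today")
print(cache.get("space-123"))  # reads the precomputed summary instantly
```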

Conclusion and Future Work

We are excited to apply state-of-the-art abstractive summarization models to help our Workspace users improve their productivity in Spaces. While this is great progress, we believe there are many opportunities to further improve the experience and the overall quality of summaries. Future directions we are exploring include better modeling and summarizing entangled conversations that include multiple topics, and developing metrics that better measure the factual consistency between chat conversations and summaries.

Acknowledgements

The authors would like to thank the many people across Google that contributed to this work: Ahmed Chowdhury, Alejandro Elizondo, Anmol Tukrel, Benjamin Lee, Cameron Oelsen, Chao Wang, Chris Carroll, Don Kim, Hun Jung, Jackie Tsay, Jennifer Chou, Jesse Sliter, John Sipple, Jonathan Herzig, Kate Montgomery, Maalika Manoharan, Mahdis Mahdieh, Mia Chen, Misha Khalman, Peter Liu, Robert Diersing, Roee Aharoni, Sarah Read, Winnie Yeung, Yao Zhao, and Yonghui Wu.
