Recently, a team of researchers at Maastricht University published a study exploring the use of GPT-3 to manage email. As someone whose inbox can only be described as unruly, this piqued my interest.
Big idea: We spend hours every day reading and responding to emails, so what if an AI could automate both of those processes?
The Maastricht team approached the idea of turning GPT-3 loose in our email systems from a practical point of view. Rather than focusing narrowly on the quality of GPT-3's responses to specific emails, the team examined whether the idea is worth pursuing at all.
Their paper analyzes the potential effectiveness of GPT-3 in the role of email assistant by examining how useful it is compared to fine-tuned models, how financially feasible it is compared to human workers, and how the machine's mistakes affect senders and recipients.
The basis: Building a better email-management program is a never-ending pursuit, but here we're talking about having GPT-3 actually respond to emails on our behalf. According to the researchers:
“Our research shows that a market for automating emails using GPT-3 exists across various sectors of the economy; we will explore only a few of them. In all of these areas, the harm of a minor wording error should be limited, because the content rarely involves large sums of money or human safety.”
The authors go on to describe applications in the insurance, energy, and public administration sectors.
Quick rebuttal: First, it should be pointed out that this is an unreviewed preprint. Often that simply means the research is sound but the paper itself is still being polished. In this case, the article is currently a bit messy. For example, three separate sections contain the same information, which makes it difficult to pin down exactly what the study is claiming.
The research seems to show that we would save both time and money if GPT-3 were applied to the task of replying to work emails. But that's a giant “if.”
GPT-3 can't be trusted to work unsupervised. Humans would have to reread every email it sends, because there's no way to be sure it won't say something that invites litigation. Beyond the concern that the machine might produce offensive or misleading text, there's a further question: how well can a bot with only general knowledge perform this task at all?
GPT-3 is trained on the internet, so it can tell you the wingspan of a seabird or who won the 1967 World Series. But it certainly can't decide whether you want to send a birthday card to a colleague or are interested in joining a new subcommittee.
The problem is that GPT-3 would likely handle generic emails worse than a simple chatbot trained to select from pre-generated responses.
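To make that comparison concrete, here is a minimal sketch of what such a canned-response bot could look like: it scores each pre-written reply template against keywords in the incoming message and falls back to a safe default. The categories, keywords, and templates here are hypothetical examples, not from the study.

```python
# Minimal canned-response email bot: score each reply category by
# keyword hits in the incoming message, pick the best match, and
# fall back to a generic reply when nothing matches.
# All keywords and templates below are hypothetical examples.

CANNED_KEYWORDS = {
    "meeting": ("meeting", "schedule", "calendar"),
    "invoice": ("invoice", "payment", "billing"),
    "support": ("error", "broken", "help"),
}

TEMPLATES = {
    "meeting": "Thanks for reaching out. I'll check my calendar and confirm a time.",
    "invoice": "Thanks, I've forwarded this to our billing department.",
    "support": "Sorry for the trouble. Could you share more details about the issue?",
    "default": "Thanks for your email. I'll get back to you shortly.",
}

def pick_reply(email_body: str) -> str:
    text = email_body.lower()
    # Count keyword hits per category and pick the highest-scoring one.
    scores = {
        category: sum(kw in text for kw in keywords)
        for category, keywords in CANNED_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return TEMPLATES[best] if scores[best] > 0 else TEMPLATES["default"]
```

Because every possible reply is written by a human in advance, a bot like this can never produce offensive or legally risky text, which is exactly the guarantee GPT-3's free-form generation cannot give.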
A brief opinion: Google tells me that landline phones didn't peak in popularity in the United States until 1998. And now, just a couple of decades later, only a fraction of US homes still have one.
I can't help but wonder whether email will remain the standard for communication much longer, especially when the broader trend in innovation seems to be keeping us out of our inboxes. And who knows how long it will take OpenAI to produce a future version of GPT reliable enough for use at every level of business.
This study is commendable and made for interesting reading, but ultimately the usefulness of GPT-3 as an email responder is merely academic. There are better solutions for filtering inboxes and automating responses than shoehorning in a general-purpose text-generation tool.