Student Liam Porr used the language-generating AI tool GPT-3 to create a fake blog post that recently landed at #1 on Hacker News, MIT Technology Review reported. Porr set out to demonstrate that content produced by GPT-3 could fool people into believing it was written by a human. As he told MIT Technology Review, "It was really super easy, which was the scary part."
To lay the groundwork in case you're unfamiliar with GPT-3: it's the latest in a line of AI autocomplete tools from San Francisco-based OpenAI, and it has been in development for several years. At its simplest, GPT-3 (which stands for "generative pre-trained transformer") automatically completes your text based on prompts from a human writer.
My colleague James Vincent explains how it works:
Like all deep learning systems, GPT-3 looks for patterns in data. To simplify: the program was trained on a huge corpus of text that it mined for statistical regularities. Those regularities are unknown to humans, but they are stored as billions of weighted connections between the various nodes in GPT-3's neural network. Importantly, there is no human involvement in this process: the program looks for and finds patterns without any guidance, which it then uses to complete text prompts. If you type the word "fire" into GPT-3, the program knows from the weights in its network that the words "truck" and "alarm" are much more likely to follow than "clear" or "elvish". So far, so simple.
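The idea of completing text from statistical regularities can be illustrated with a toy sketch. This is not GPT-3 (which uses a neural network with billions of weights, not a lookup table); it's a minimal bigram counter over an invented miniature corpus, showing why "truck" would be preferred over "elvish" as a continuation of "fire":

```python
from collections import Counter, defaultdict

# Toy illustration (not GPT-3): learn next-word frequencies from a tiny
# invented corpus, then "complete" a prompt with the likeliest continuation.
corpus = (
    "the fire truck the fire alarm the fire truck "
    "a clear sky the elvish forest"
).split()

# Count how often each word is followed by each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def complete(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(complete("fire"))  # "truck" (seen twice, vs. "alarm" once)
```

A real language model does the same thing in spirit, but over far longer contexts than a single preceding word, with the statistics encoded in learned weights rather than explicit counts.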
Here is an example from Porr's blog post (with a pseudonymous author) entitled "Are you feeling unproductive? Maybe you should stop thinking."
Definition #2: Over-thinking (OT) is trying to come up with ideas that have already been thought through by someone else. OT usually leads to ideas that are impractical, impossible, or even stupid.
Yes, I'd also like to believe that I would have been able to tell this wasn't written by a human, but there's a lot of mediocre writing on the internet, so I think it's plausible this could pass as "content marketing" or similar copy.
OpenAI decided to give researchers access to the GPT-3 API in a private beta rather than releasing the model into the wild. Porr, a computer science student at the University of California, Berkeley, found a graduate student who already had access to the API and agreed to work with him on the experiment. Porr wrote a script that gave GPT-3 a headline and intro for a blog post. It generated a few versions of the post, and Porr picked one for the blog, copying GPT-3's output with very little editing.
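The workflow described above can be sketched in a few lines. Porr's actual script is not public, so everything here is hypothetical: `generate` is a placeholder standing in for the real GPT-3 API call, and the selection step (picking the longest candidate) is an invented stand-in for his manual choice:

```python
import random

def generate(prompt, seed):
    """Placeholder for a GPT-3 completion call; returns canned text.

    A real implementation would send `prompt` to the API and return
    the model's continuation.
    """
    rng = random.Random(seed)
    fillers = [
        "Over-thinking rarely pays off.",
        "Most ideas are recycled.",
        "Productivity starts with doing less.",
    ]
    return prompt + " " + rng.choice(fillers)

def draft_post(headline, intro, n_candidates=3):
    """Build a prompt from a headline and intro, generate several
    candidate posts, and pick one (here, arbitrarily, the longest)."""
    prompt = f"{headline}\n\n{intro}"
    candidates = [generate(prompt, seed) for seed in range(n_candidates)]
    return max(candidates, key=len)

post = draft_post(
    "Feeling unproductive? Maybe you should stop overthinking.",
    "I have a confession to make.",
)
print(post.splitlines()[0])
```

The point of the sketch is how little human effort the loop requires: the only hand-written input is the headline and intro, plus one selection at the end.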
The post went viral in a matter of hours, Porr said, and the blog drew more than 26,000 visitors. He wrote that only one person reached out to ask whether the post was generated by AI, although several commenters did guess that GPT-3 was the author. But, Porr says, the community downvoted those comments.
He suggests that GPT-3's "writing" could replace content producers, which, he jokes, of course couldn't possibly happen. "The whole point of releasing it in private beta is so the community can show OpenAI new use cases to either promote or look out for," Porr writes. Notably, he does not yet have access to the GPT-3 API himself, despite having requested it. "It's possible that they're upset that I did this," he admitted to MIT Technology Review.