
OpenAI Releases Text Generator AI That Was Too “Dangerous” To Share

OpenAI, the AI research lab, has finally published GPT-2, the text-generating AI tool the lab once said was too “dangerous” to share.

In a blog post, OpenAI said that despite arguments about GPT-2’s potential to create synthetic propaganda, fake news, and online phishing campaigns, “we’ve seen no strong evidence of misuse so far.”

Before the big release

Back in February, OpenAI announced GPT-2, a language model with 1.5 billion parameters, trained by analyzing over 8 million web pages.

The main objective of GPT-2 is to create coherent text from a few words. The text-generating AI tool can be used for many tasks such as translation, powering chatbots, answering questions, and more.
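For readers who want to try this themselves, here is a minimal sketch of prompting GPT-2 for a continuation. OpenAI’s post does not describe this workflow; the sketch assumes the Hugging Face transformers library, where the full 1.5B-parameter checkpoint is published under the name “gpt2-xl”:

```python
# A minimal sketch: give GPT-2 a few words and let it continue the text.
# Assumes the Hugging Face `transformers` library (and PyTorch);
# "gpt2-xl" is the hub name for the full 1.5B-parameter release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

# A short prompt; the model predicts what comes next, token by token.
prompt = "OpenAI has finally released"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) keeps the output varied.
output = model.generate(
    input_ids,
    max_length=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The smaller staged releases mentioned below are available the same way, under the hub names “gpt2” (124M parameters) and “gpt2-medium” (355M).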

But citing concerns that it could be used with malicious intent, the company withheld the full version. Instead, it released small experimental models to test the waters.

Over the year, there have been several iterations of the GPT-2 model, from the small 124M-parameter version to the medium 355M-parameter one. In fact, two researchers were able to re-create the full model, though they drew scrutiny in the process.

How good is GPT-2?

Concerns over spreading fake news and propaganda, and over use in malicious campaigns, make OpenAI’s text generator sound notoriously effective.

But just like past AI text generators, GPT-2 has its limitations. “Machine learning software picks up the statistical patterns of language, not a true understanding of the world,” wrote Wired after testing the original GPT-2 model.

When we first tested TalkToTransformer.com, a web interface to the GPT-2 model, the limitations became clear. The model would appear intelligent, even thoughtful. But after a few minutes, the output revealed itself as a mere juxtaposition of words backed by a shallow understanding of the world.

As with all AI text generators, the model is not “human enough”.
