New AI’s Creators Believe Creation Is Too Dangerous to Release to Public

Artificial intelligence (AI) promises to be a revolutionary force in our lifetime, with applications spanning business, health care, media, and leisure. But many have expressed concern over AI's potential to make humans unnecessary in many fields. Some have even wondered whether AI could one day become self-aware and replace humans altogether.


Source: Gearstd / Shutterstock

Are Concerns Exaggerated?

To a large extent, these fears are overblown and often reflect a lack of knowledge or understanding of both the current and the likely future state of AI, as well as humans’ ability to control it.
However, a recent article in CNN Business illuminates the growing sophistication of AI and its potential to legitimately replace humans in certain fields.
CNN reports that a text-generating AI system, developed by nonprofit AI research company OpenAI, “can write page-long responses to prompts, mimicking everything from fantasy prose to fake celebrity news stories and homework assignments.”

AI-Generated Prose

So, just how good is it? CNN provides a Lord of the Rings-themed example from an OpenAI demo. The system was fed the following prompt:

Legolas and Gimli advanced on the orcs, raising their weapons with a harrowing war cry.

The system added:

The orcs’ response was a deafening onslaught of claws, claws, and claws; even Elrond was forced to retreat. “You are in good hands, dwarf,” said Gimli, who had been among the first to charge at the orcs; it took only two words before their opponents were reduced to a blood-soaked quagmire, and the dwarf took his first kill of the night.

CNN explains that OpenAI isn’t releasing this new tool publicly, as it typically does with its research projects, partly out of fear of how it could be misused—for example, by malicious actors generating fake news stories or social media posts at scale.

Threat or Opportunity?

While some experts support preventing access to the text generator, others feel the concern is exaggerated. For example, Christopher Manning, a Stanford professor and director of the Stanford Artificial Intelligence Lab, told CNN that although we shouldn’t be naïve about AI’s dangers, there are already similar tools publicly available, and the OpenAI version is simply the latest iteration.