Here’s what really happened in Facebook’s AI research lab.

Researchers set out to make chatbots that could negotiate with people.

But wait, you ask, channeling Tuesday’s front-page splash from a British tabloid: what about the self-aware AI system waging a devastating war against humans? Facebook’s simple bots were designed to do only one thing: score as many points as possible in a simple negotiation game. Because they weren’t programmed to stick to recognizable English, it’s not surprising that they didn’t.
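For the curious, here’s a rough sketch of the kind of bargaining game involved, based on the setup Facebook’s researchers described publicly: a shared pool of items, private point values for each bot, and a score equal to the value of whatever a bot walks away with. The item names and numbers below are illustrative, not Facebook’s actual data.

```python
# Minimal sketch of a two-player bargaining game of the kind described:
# a shared pool of items, private point values for each bot, and a score
# equal to the total value of the items a bot ends up holding.
# Item names and numbers are illustrative, not Facebook's actual data.

POOL = {"book": 2, "hat": 1, "ball": 3}  # items up for division

# Each bot privately values the items differently (values made up here).
VALUES_A = {"book": 3, "hat": 2, "ball": 1}
VALUES_B = {"book": 1, "hat": 4, "ball": 2}

def score(allocation, values):
    """Points a bot earns: sum of (items taken) x (its private value)."""
    return sum(count * values[item] for item, count in allocation.items())

# One hypothetical deal the bots might talk their way into:
deal_for_a = {"book": 2, "hat": 0, "ball": 1}
deal_for_b = {item: POOL[item] - deal_for_a[item] for item in POOL}

print("Bot A scores", score(deal_for_a, VALUES_A))  # 2*3 + 0*2 + 1*1 = 7
print("Bot B scores", score(deal_for_b, VALUES_B))  # 0*1 + 1*4 + 2*2 = 8
```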

Machines rising up against their creators is a common theme in culture and in breathless news coverage.

That helps explain the lurid headlines in recent days describing how Facebook AI researchers in a “panic” were “forced” to “kill” their “creepy” bots that had started speaking in their own language. A Facebook experiment did produce simple bots that chattered in garbled sentences, but they weren’t alarming, surprising, or very intelligent.

Still, even that much chatter is not bad, since, as you may know from talking with Siri or Alexa, computers aren’t very good at back-and-forth conversation.

Intriguingly, on some occasions Facebook’s bots feigned interest in items they didn’t really want, only to give them up later in deals that secured the items they were actually after.

What were the most interesting parts of Facebook’s experiment?

Once the bots started speaking English, they did prove capable of negotiating with humans.

The team taught their bots to play the negotiation game in two steps.

First, they fed the computers dialog from thousands of games between humans, to give the system a sense of the language of negotiation. Then they let the bots hone that skill through practice, negotiating against copies of themselves and favoring whatever tactics earned more points.
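To make that first step concrete, here’s a toy sketch in Python of the same basic idea: fitting a simple generator to examples of human negotiation language. The handful of dialog lines and the bigram model below are stand-ins; Facebook’s actual system was a neural network trained on thousands of real dialogs.

```python
import random
from collections import defaultdict

# Toy stand-in for step one: learn the "language of negotiation" from
# example human dialog. This bigram model just shows the idea of fitting
# a generator to human negotiation utterances; the real system was a
# neural network trained on thousands of dialogs.

human_dialogs = [
    "i want the books and you can have the rest",
    "give me the hat and one ball",
    "i need the books deal",
    "you take the hat i take the balls",
    "no deal i want the hat and the books",
]

# Count which words follow which, across all the human examples.
follows = defaultdict(list)
for line in human_dialogs:
    words = ["<s>"] + line.split() + ["</s>"]
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

def generate(max_words=12):
    """Sample an utterance word by word from the learned bigram counts."""
    word, out = "<s>", []
    for _ in range(max_words):
        word = random.choice(follows[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "i want the hat and the books"
```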

Their thinking: Negotiation and cooperation will be necessary for bots to work more closely with humans.