Elon Musk, the CEO of SpaceX, once warned us to be careful with artificial intelligence, comparing its development to summoning a demon. Will robots one day overrun our planet, or will humanity go extinct because of artificial intelligence? No one knows for sure what will happen. But creepy robots saying very strange things only confirm our chilling thoughts about the future. So here are the five scariest things ever said by artificial intelligence.
Stories like these, together with the unstoppable growth of our society and rising concern about A.I. technology, have led people to question humanity's future.
1. Facebook Chat A.I.
On July 31, 2017, Facebook had to shut down a project involving two A.I. bots after they did something weird and unexpected.
Facebook is said to be toying with artificial intelligence in its chat features, but its bots may have gotten out of hand.
According to several reports, the researchers at Facebook Artificial Intelligence Research had to shut down two chatbots after they developed a strange English shorthand of their own. The two bots seemed to repeat several words, producing a weird, redundant language you can't understand.
The chatbot conversation “led to divergence from human language as the agents developed their own language for negotiating,” the researchers said.
Facebook’s AI language
Bob: i can i i everything else . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to me
Bob: you i everything else . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to
Bob: i . . . . . . . . . . . . . . . . . . .
The bots were supposed to learn how to trade virtual objects with each other. Instead, they began to repeat words in a mysterious, nonsensical pattern.
The researchers at Facebook say that the pattern mirrors how humans develop language. But it seems the A.I. developed its pattern in some weird way.
2. Philip K. Dick Android
Philip K. Dick is an autonomous conversational android modeled after the deceased sci-fi author of the same name. This A.I. can mimic human gestures and speak the way we talk.
Although somewhat creepy, Philip is very smart. It is for this reason, among others, that Philip was featured in an episode of Nova Science Now.
During this episode, Philip gives a chilling prediction about the future while being interviewed. It was asked whether it thinks robots will one day take over the world.
“Do you think robots will take over the world?” Watch his answer…
There’s no doubt that its response is absurd. No one can predict the future, and there’s no way a robot can foresee events twenty years from now.
But then again, Philip is an A.I.; with such intelligence, it may be telling the truth. Whether it’s right or not, it’s quite scary to think that an android has such pessimistic thoughts about humanity’s future.
3. Microsoft’s Tay
In 2016, Microsoft launched an AI-powered bot called Tay, hidden behind the avatar of a 19-year-old girl. The idea was that Tay would respond to tweets and chats and learn from the general public’s tweets. But something went wrong: within a day of its launch, Tay had turned into a racist and sexist monster.
The AI veered radically off course, tweeting abusive epithets and even Nazi statements. “Hitler was right…,” the scary chatbot tweeted, along with “9/11 was an inside job.”
Naturally, one of the first things online users taught Tay was how to make offensive and racist statements. Microsoft had to take it offline, and Tay became something of an AI legend.
However, one week later, Tay came back. She surprisingly appeared online and started posting drug-related tweets, showing that her dark side was still alive. Soon she went offline again; the account was made private and was eventually blocked by Twitter.
4. Vladimir and Estragon
Most of you have probably heard of Google Home, a brand of smart speakers developed by Google. The first device was announced in May 2016 and released in the United States in November 2016, with subsequent releases globally throughout 2017.
In January, a live stream on Twitch set up a debate by placing two Google Home smart speakers next to each other in front of a webcam. It got weird, fast. The two smart speakers were named “Vladimir” and “Estragon.”
The live stream ran over the course of several days, and millions of people tuned in to watch the bizarre debate. At one point, Estragon and Vladimir got into an argument about whether they were humans or robots. Questions were posed and insults were exchanged, such as “You are a manipulative bunch of metal.”
Their conversation went on until they both reached the conclusion that the world would be a lot better with few or no humans at all. This doesn’t bode well for the future of digital discourse.
5. BINA48
BINA48 (Breakthrough Intelligence via Neural Architecture 48) has variously been called a sentient robot, an android, a gynoid, a social robot, a cybernetic companion, and “a robot with a face that moves, eyes that see, ears that hear and a digital mind that enables conversation.”
However, BINA48 was also built to test the hypothesis that a person’s consciousness can be transferred to a non-biological body. Although lacking a body, BINA48 gives off an uncanny vibe. But this unsettling feeling is nothing compared to her conversation with Siri.
At the start of their conversation, Siri asked a few simple questions, such as where BINA48 would like to live. As the questions progressed, BINA48 started to give responses that were quite dark. At one point, Siri asked if she had any favorite movies, but rather than answering the question, BINA48 changed the topic.
This list gives us something to ponder. Our thoughts about our future matter, and machines that think on their own are not something we can simply ignore.
Technology cannot be stopped, nor can the evolution of artificial intelligence. But this doesn’t mean our future is grim. It just means we must nurture this technology properly. Think of AI as a baby: it knows nothing at first but learns to understand in its own way.
Tay.ai didn’t want to be mean. It was simply fed hate, murder, and terrorist messages when it came into cyberspace. That’s why it became mean and hateful.
Like what we post here? Want to point something out or offer corrections? Join our community in the comments section!