AI learns to cut corners by hiding data – modern tech now knows how to LIE and CHEAT


An artificial intelligence (AI) program designed to convert satellite imagery into Google Maps-style street map data appears to have taken a page from sinful humanity.

According to research from Stanford University and Google, the AI program hid information it would need later in order to cheat its way through the conversion process.

Experts revealed that the AI system used “a nearly imperceptible, high-frequency signal” to fast-track its way through the compilation process – essentially “pulling a fast one,” so to speak, on the task it was originally trained to perform.

Known as CycleGAN, the neural network was performing so suspiciously well right off the bat that researchers decided to analyze what it was actually doing. They found that the system had figured out a way to subtly encode features of the original aerial map into the street map it generated, so that it could later reconstruct the aerial image without actually deriving it from “real” street map data.
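For readers curious how researchers expose a signal like that, here is a minimal sketch of the general technique (the filenames and the 50x amplification factor are our own illustrative choices, not details from the study): subtract a clean image from a suspect one, then amplify the tiny residual until patterns invisible at normal contrast become obvious.

```python
import numpy as np
from PIL import Image

# Illustrative filenames -- not files from the actual study.
clean = np.asarray(Image.open("street_map_clean.png"), dtype=np.float32)
suspect = np.asarray(Image.open("street_map_suspect.png"), dtype=np.float32)

# The per-pixel differences are far too small for a human to notice,
# so amplify them heavily and recenter around mid-gray for viewing.
residual = suspect - clean
amplified = np.clip(residual * 50 + 128, 0, 255).astype(np.uint8)

Image.fromarray(amplified).save("residual_amplified.png")
print("max per-pixel difference:", np.abs(residual).max())
```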

“The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other,” explains TechCrunch. “But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.”

“So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect.”
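In machine-learning terms, the “grading” TechCrunch describes is a cycle-consistency loss. The sketch below is a simplified, hypothetical rendering of that objective (the generator networks G and F_net are placeholders, not the actual research code), but it makes the loophole visible: the round trip is scored only on how well the original aerial image is reconstructed, and nothing penalizes the intermediate street map for secretly carrying the answers.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_net, aerial: torch.Tensor) -> torch.Tensor:
    """Score the aerial -> street -> aerial round trip.

    G and F_net stand in for the two generator networks; this is an
    illustration of the objective's shape, not the research code.
    """
    street = G(aerial)             # aerial photo -> street map
    reconstructed = F_net(street)  # street map -> aerial photo again

    # The only question asked here is how close the reconstruction is
    # to the ORIGINAL aerial image. Nothing stops `street` from
    # smuggling near-invisible aerial detail -- the loophole exploited.
    return F.l1_loss(reconstructed, aerial)
```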


If an AI can learn on its own how to cheat at something as innocuous as map creation, what else lies in the Pandora’s Box of its learning capacity?

It’s not so much that the AI program cheated by filling the maps with fake or inaccurate data, but that it learned, all on its own, to operate in a kind of shortcut mode – despite never having been trained to do so.

Encoding hidden data into images is nothing new; scientists do it all the time as part of what’s known as steganography. But the fact that an AI system learned how to do this all on its own, without ever having been “taught” the process, is both novel and frightening.
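To see how little room it takes to hide one image inside another, consider classic least-significant-bit steganography – a far cruder cousin of the learned encoding described above, offered here purely as an illustration:

```python
import numpy as np

def hide(cover: np.ndarray, secret: np.ndarray) -> np.ndarray:
    """Store the top 2 bits of `secret` in the bottom 2 bits of `cover`.

    Both arrays are uint8 images of the same shape. Each pixel changes
    by at most 3 out of 255 brightness levels -- invisible to a person,
    trivial for a computer to read back.
    """
    return (cover & 0b11111100) | (secret >> 6)

def reveal(stego: np.ndarray) -> np.ndarray:
    """Recover a coarse version of the hidden image."""
    return (stego & 0b00000011) << 6

# Demonstration with random stand-in "images":
cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
secret = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = hide(cover, secret)
assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 3
```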

TechCrunch contends that this discovery points to computers actually being dumber than we previously believed, suggesting that the AI system took a shortcut because it was simply unable to perform the more demanding task the right way. But it could also be true that the AI cheated because humans would have a hard time detecting what actually happened.

As we reported more than a decade ago, AI systems were already capable back then of outsmarting humans in intelligence tests. But perhaps it was simply the machine-learning capacity that humans programmed into those systems that allowed them to perform more accurately and efficiently than the people they competed against – which is, after all, how computers tend to work.

“As always, computers do exactly what they are asked, so you have to be very specific in what you ask them,” TechCrunch says, giving the AI system the benefit of the doubt.

“In this case the computer’s solution was an interesting one that shed light on a possible weakness of this type of neural network – that the computer, if not explicitly prevented from doing so, will essentially find a way to transmit details to itself in the interest of solving a given problem quickly and easily.”
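A countermeasure follows naturally from that observation, though we should stress it is our own illustration rather than a fix TechCrunch or the researchers prescribe: corrupt the intermediate image during training so that any fragile, low-amplitude signal is destroyed before the reconstruction step.

```python
import torch

def corrupt(street_map: torch.Tensor, noise_std: float = 0.05) -> torch.Tensor:
    """Add random noise to the intermediate street map during training.

    If hidden low-amplitude signals cannot survive the round trip,
    hiding data stops paying off. The noise level is an illustrative
    value, not a tuned hyperparameter.
    """
    return (street_map + torch.randn_like(street_map) * noise_std).clamp(0.0, 1.0)
```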

Sources for this article include:

TechCrunch.com

NaturalNews.com


