Source: Wired.


First came remote piloted drones. Then came walking robots. Next, robots with a vision… or just looks that kill.

From the iWitness news desk… The Pentagon’s famed mad scientist lab DARPA (a subsidiary of Cyberdyne Systems Corporation) announced earlier this week a project called “The Mind’s Eye” (PDF). The goal of the project is to implement the one facet of human capability that has so far proved elusive: visual intelligence.

(Wired’s Katie Drummond) We’ve got the ability to take in our surroundings, interpret them and learn concepts that apply to them. We’re also masters of manipulation, courtesy of a little thing called imagination: toying around with made-up scenes to solve problems or make decisions.

But, of course, our intellect and decision-making skills are often marred by emotion, fatigue or bias. Enter machines. DARPA wants cameras that can capture their surroundings, and then employ robust intellect and imagination to “reason over these learned interpretations.”


I see what you’re saying. As if seeing-eye robots weren’t enough for them, last month DARPA was reportedly developing a form of “universal translator” software that can translate spoken Arabic into English with a high degree of accuracy, voice recognition included. The resulting system may look more like an iPod or netbook, but Wired couldn’t help but reach for the obvious analogy:

(Wired’s Katie Drummond) What troops really need is a machine that can pick out voices from the noise, understand and translate all kinds of different languages, and then identify the voice from a hit list of “wanted speakers.” In other words, a real-life version of Star Wars protocol droid C-3PO, fluent “in over 6 million forms of communication.”

Now, the Pentagon’s trying to fast-track a solution that could be a kind of proto-proto-prototype to our favorite gold fussbudget: a translation machine with 98 percent accuracy in 20 different languages.

Google already has something similar: Goog-411. Maybe if the two worked together…


Action news. There are cameras that can identify objects, or what DARPA refers to as the “nouns.” DARPA wants the camera to add the “verb” to those “nouns” to better describe what is happening. For example, a current camera can identify a ball or a car or maybe Glenn Beck. DARPA’s idea is to have cameras not only identify the items, but to report what those things are doing: The ball is rolling, a car crashed into a tree, or Glenn Beck is talking through his ass. The idea is to make the cameras into observers, field operatives who spy on enemy positions and report on their status.
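The noun-plus-verb idea above can be sketched in a few lines. This is purely illustrative, not DARPA's actual system or API: assume a conventional detector supplies the nouns (objects) and a hypothetical visual-intelligence layer supplies the verbs (actions), and the combination yields a one-sentence scene report.

```python
# Hypothetical sketch of "Mind's Eye"-style output: a detector gives us
# nouns, an action recognizer gives us verbs, and we compose a report.
# The function and data below are illustrative assumptions, not real APIs.

def describe(noun: str, verb: str) -> str:
    """Combine a detected object with a recognized action into a sentence."""
    return f"The {noun} is {verb}."

# A current camera stops at the noun column; adding the verb column
# turns object detection into a status report.
detections = [("ball", "rolling"), ("car", "crashing into a tree")]
for noun, verb in detections:
    print(describe(noun, verb))
```

Running this prints "The ball is rolling." and "The car is crashing into a tree." — the difference between a label and an observer's report.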

That, or they want their own photojournalists and reporters, ones they can control and that won’t show bias against whatever war is being waged.

DARPA may already be behind the curve, as one such robot already exists. It reportedly works for a website called Cyberpunk Review… ;)

This post has been filed under Rise of the Robots, News as Cyberpunk by Mr. Roboto.

