Today (18-Aug-2011), IBM researchers unveiled a new generation of experimental computer chips designed to emulate the brain’s abilities for perception, action and cognition. The technology could consume many orders of magnitude less power and occupy far less space than today’s computers.
This is your brain on a chip. Last Thursday IBM announced the development of a new computer chip that works like a human brain (press release here). Not content with pwning Jeopardy!, Big Blue is now looking to take artificial intelligence to the next level… or at least another step closer to human.
The chips are part of the DARPA-funded project SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics), and their production marks the completion of phase one.
Dharmendra Modha, the project’s principal investigator, told VentureBeat: “This is the seed for a new generation of computers, using a combination of supercomputing, neuroscience, and nanotechnology. The computers we have today are more like calculators. We want to make something like the brain. It is a sharp departure from the past.”
Think about this. The idea behind the SyNAPSE project is to create computers with human-like abilities; computer chips that accurately mimic the operations of the brain. IBM’s chips can do things like navigate and identify objects and patterns, but the ultimate goal is to analyze more complex systems and learn. Yes, learn. All while using less power than current technologies can.
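IBM hasn’t published the chip’s programming model, but the basic building block such neuromorphic chips emulate in silicon, a spiking neuron, can be sketched in a few lines. This is a generic leaky integrate-and-fire model with illustrative parameters, not IBM’s actual design:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- a toy sketch of the kind
# of spiking unit neuromorphic chips emulate. Parameter values are
# illustrative, not taken from IBM's design.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate input each step, leak a fraction, spike and reset on threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:               # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady weak input accumulates until the neuron fires, then it resets.
print(simulate_lif([0.3] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Unlike a calculator-style CPU, the “computation” here is just charge building up and discharging, which is why such hardware can be so frugal with power.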
But this is a DARPA project. DARPA, as in the Mad Scientist division of America’s DoD. The same people who made the Internet possible. Most likely, this is being developed for some sort of military analysis and strategy generation system, like a high-stakes chess machine.
But isn’t that how Skynet started?
Alex Trebek: “The answer: The human race… Watson”
Watson: “What is screwed?”
… but for Amanda Boxtell, who has been paralyzed for 18 years following a skiing accident, the new mobility provided by her new eLegs is exciting, especially for CNN’s Ali Velshi. Developed by Berkeley Bionics, the eLegs were introduced on 7-Oct-2010. I hadn’t seen or heard of these legs until November 10, at 12:45 PM EST, when I saw the CNN broadcast for the first time. You can see the video on CNN’s site if the vid above doesn’t work.
The system is rather clunky, requiring a couple of crutches/canes to act as input for the legs, but it brings Amanda one step closer (literally) to full mobility. At least, it gets her out of her wheelchair.
Iron Man or HULC? Berkeley Bionics should know something about robotic exoskeletons; they also developed one for the US military called the Human Universal Load Carrier, or HULC, which they licensed to Lockheed Martin:
The HULC is a completely un-tethered, hydraulic-powered anthropomorphic exoskeleton that provides users with the ability to carry loads of up to 200 lbs for extended periods of time and over all terrains. Its flexible design allows for deep squats, crawls and upper-body lifting. There is no joystick or other control mechanism. The exoskeleton senses what users want to do and where they want to go. It augments their ability, strength and endurance. An onboard micro-computer ensures the exoskeleton moves in concert with the individual. Its modularity allows for major components to be swapped out in the field. Additionally, its unique power-saving design allows the user to operate on battery power for extended missions. The HULC’s load-carrying ability works even when power is not available.
I suspect that the eLegs were developed from HULC technology. Hopefully they won’t come with 20mm folding-fin rocket launchers, although Lockheed Martin is looking to adapt the HULC for industrial use (like Ripley’s loader suit) and medical applications.
Meanwhile, Raytheon Sarcos is developing its own robosuit for soldiers. Also clunky, as it still needs to be tethered to a power source:
The video and accompanying article can be found here.
Still waiting for Tony Stark. What we’re looking at are first-generation robo-suits. Naturally, they will get better as the technology advances, so a real Iron Man is years away. Now we have time to save up for when such suits are made available at our favorite outfitters. (To give you an idea, Raytheon’s suit is projected to cost $150K US).
GO GO GADGET SURROGATE! For $15K US, you can be anywhere without actually being there.*
Where do you NOT want to go today? There are places we would rather be, and then there are places where we wouldn’t be caught dead. Now you can go to those “forbidden” places without going. Last week, Silicon Valley company Anybots announced the fall release of the QB, a “telepresence” robot controlled via Wi-Fi through a web browser:
(From Anybots’ FAQ) QB has a speaker, microphone, camera, and video screen. It connects to the internet over Wi-Fi. You control it from your computer in a web browser, using a headset and screen. If you have a camera you can show live video of yourself, or you can show a still picture on bad hair days.
The “neck” can telescope nearly three feet so you can talk at eye-level with most people. Through your browser, you’ll be able to see and hear what QB sees and hears, even stuff you won’t be able to unsee and unhear.
From your home office, hotel room, or apocalypse-resistant bunker, you can command your bot army to keep the sheeple in line.
Not so handy, unfortunately. As you can see from the picture, QB rolls like a Segway. Meaning that obstacles that can stop a Segway will stop the QB, including stairs and steps. This means the QB will need to use ramps and elevators, which brings up another major problem: QB has no hands or arms, so it can’t bring you coffee or doughnuts or the weekly expense report from the printer across the office. That should be good news for people who fear that the QBs may start grabbing weapons… or hostages… for a robot revolt. That doesn’t mean that future models will also be “disarmed.”
Actually, the QB probably shouldn’t be handling liquids since it’s not waterproof. Taking it for a spin outside the office is also not recommended.
Mandatory meeting for all bots. It seems that the primary reason for the QB’s existence is to take your place at meetings you would rather not attend. Good thing QBs can record audio and video and even take pictures (no mention of recording quality) so even if you have to run to the bathroom, some of the less boring stuff can be captured for later viewing.
QBs will roll out in the fall of 2010 and will set you back $15K US, but there may be some changes or upgrades* ready by deployment time.
Front page of today’s (07-May-2010) USA Today showing the roller-coaster ride of yesterday’s stock market, with some ominous words about machines taking control. For a better view in PDF click the image.
Bombs away, Wall Street babies. If you were watching the epic fail of Wall Street yesterday (06-May-2010) in its final hour of operation, you would have seen what has to be the biggest WTF ever. Beginning about 2:30 pm EDT and lasting for 15 minutes, the Dow Jones nose-dived 700-1000 points, nearly 11% of its value, before recovering to close only 348 points off its opening. At a time when Wall Street is already under scrutiny for financial shenanigans resulting in the mortgage crisis, this major fubar could be what Obama and Congress need to put bankers on a very short leash with a choke chain, even as this drop is now being investigated.
Somebody set up us the bomb! What happened yesterday is identical to events surrounding similar Dow drops, with events in Greece being “triggers:”
(USA Today online) In a late-day plunge eerily reminiscent of famous Wall Street stock market meltdowns in 1987 and the fall of 2008, the Dow Jones industrials nosedived almost 1,000 points in a volatile day Thursday that began with heavy selling on Greek debt fears and was followed by a waterfall decline that was allegedly caused by erroneous trades and “unusual trading activity.”
Before Thursday, there were riots in Greece as that government announced pay cuts and tax hikes to deal with their economic collapse. Coincidence?
Program error detected between keyboard and chair. The main suspects in yesterday’s fail are the computerized trading systems used, and the possible input of one person:
(Associated Press) No one was sure what happened, other than automated orders were activated by erroneous trades. One possibility being investigated was that a trader accidentally placed an order to sell $16 billion, instead of $16 million, worth of futures, and that was enough to trigger widespread sell orders across the market.
“I think the machines just took over. There’s not a lot of human interaction,” said Charlie Smith, chief investment officer at Fort Pitt Capital Group. “We’ve known that automated trading can run away from you, and I think that’s what we saw happen today.”
So the crash was just a lemming cliff-dive parade due to an ID-ten-T error that went unchecked. The stocks that suffered the worst did recover, even though Wall Street remains nervous. And the invalid transactions that occurred during the period will be nullified. No AIs or hacks, other than an errant input.
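For what it’s worth, the kind of pre-trade sanity check that could catch a billions-for-millions fat-finger order is simple to sketch. This is purely illustrative (the function, thresholds, and numbers are my own stand-ins); real exchange safeguards are far more involved:

```python
# Hypothetical pre-trade "fat finger" check -- a sketch, not any exchange's
# actual safeguard. Flags an order whose notional value dwarfs recent activity.

def looks_like_fat_finger(order_value, recent_values, factor=100):
    """Flag an order worth more than `factor` times the recent average."""
    avg = sum(recent_values) / len(recent_values)
    return order_value > factor * avg

recent = [14e6, 18e6, 15e6, 16e6]            # recent orders around $16 million
print(looks_like_fat_finger(16e6, recent))   # intended $16M order → False
print(looks_like_fat_finger(16e9, recent))   # accidental $16B order → True
```

A check this crude would still have flagged a $16 billion sell entered where $16 million was intended, before the automated sell orders cascaded.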
But given Wall Street’s past handling of such events, they will just keep the systems running until the next errant input goes unchecked… or isn’t an accident.
(Huffington Post) At 2:37 yesterday afternoon, Skynet became aware of its existence. Less than a minute later, it decided to make a killing in the Market.
First came remote piloted drones. Then came walking robots. Next, robots with a vision… or just looks that kill.
From the iWitness news desk… The Pentagon’s famed mad scientist lab DARPA (a subsidiary of Cyberdyne Systems Corporation) announced earlier this week a project called “The Mind’s Eye” (PDF). The goal of the project is to implement the one facet of human capability that has been so far elusive: Visual Intelligence.
(Wired’s Katie Drummond) We’ve got the ability to take in our surroundings, interpret them and learn concepts that apply to them. We’re also masters of manipulation, courtesy of a little thing called imagination: toying around with made-up scenes to solve problems or make decisions.
But, of course, our intellect and decision-making skills are often marred by emotion, fatigue or bias. Enter machines. DARPA wants cameras that can capture their surroundings, and then employ robust intellect and imagination to “reason over these learned interpretations.”
I see what you’re saying. As if seeing-eye robots weren’t enough for them, last month DARPA was reportedly developing a form of “universal translator” software that can translate Arabic languages to English with a high degree of accuracy and also have voice recognition. The resulting system may be more like an iPod or netbook, but Wired couldn’t help but use an obvious analogy:
(Wired’s Katie Drummond) What troops really need is a machine that can pick out voices from the noise, understand and translate all kinds of different languages, and then identify the voice from a hit list of “wanted speakers.” In other words, a real-life version of Star Wars protocol droid C3PO, fluent “in over 6 million forms of communication.”
Now, the Pentagon’s trying to fast-track a solution that could be a kind of proto-proto-prototype to our favorite gold fussbudget: a translation machine with 98 percent accuracy in 20 different languages.
Google already has something similar: Goog-411. Maybe if the two worked together…
Action news. There are cameras that can identify objects, or what they refer to as the “nouns.” DARPA wants the camera to add the “verb” to those “nouns” to better describe what is happening. For example, a current camera can identify a ball or a car or maybe Glenn Beck. DARPA’s idea is to have cameras not only identify the items, but to report what those things are doing: The ball is rolling, a car crashed into a tree, or Glenn Beck is talking through his ass. The idea is to make the cameras into observers, field operatives who spy on enemy positions and report on their status.
That, or they want their own photojournalists and reporters that they can control and won’t show bias against whatever war is being waged.
DARPA may already be behind the curve, as one such robot already exists. It reportedly works for a website called Cyberpunk Review…
The Public Library of Science (PLoS) has published an essay by two Swiss researchers who are working with robots that “evolve” via Darwinian methods. Pictured are a “prey” and a “predator” robot used in the study.
Robots do evolve, and Chuck D. thanks them. Two Swiss researchers set out on what could be called an ambitious project: To show that robots can evolve like organic creatures… and piss off the creationists. While their work is considerably simpler than trying to evolve humans out of chimps, it does pave the way for better understanding organic evolution…
… and for a possible robot takeover of the world, or (if humanity is lucky enough) the emergence of the Borg.
You can check out the details from the PLoS site, where you can download the PDF or XML for leisurely reading offline. Caution: It is a scholarly work.
The results are in. In their experiments, the researchers used a “Darwinian algorithm.”
This “algorithm” shows how the robots evolved during the various tasks they performed. Those tasks were navigation, homing, predation, brain and body morphology, and foraging (cooperation and altruism).
They found that, after a couple of hundred “generations” (loops of the algorithm), the bots were able to move through a maze without bumping into walls, adapt and change strategies for hunting and evasion, find their way “home,” and adapt to new bodies. They even found that, during the foraging exercises, the robots were able to cooperate in the task, and some even sacrificed personal gain for group gain.
These examples of experimental evolution with robots verify the power of evolution by mutation, recombination, and natural selection. In all cases, robots initially exhibited completely uncoordinated behaviour because their genomes had random values. However, a few hundreds of generations of random mutations and selective reproduction were sufficient to promote the evolution of efficient behaviours in a wide range of environmental conditions. The ability of robots to orientate, escape predators, and even cooperate is particularly remarkable given that they had deliberately simple genotypes directly mapped into the connection weights of neural networks comprising only a few dozen neurons.
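The “Darwinian algorithm” the quote describes is a standard evolutionary loop: random genomes, a fitness test, selective reproduction with mutation. A minimal sketch, with a toy fitness target standing in for the robots’ real tasks; the genome-to-weights mapping and all parameters here are my assumptions, not the authors’:

```python
import random

# Minimal evolutionary loop in the spirit of the study: genomes are raw
# numbers ("connection weights"), the fittest half reproduces with mutation.
# The toy fitness target is a stand-in for navigation/predation/foraging.

def fitness(genome):
    # Toy task: weights should approach the (made-up) target vector.
    target = [0.5, -0.2, 0.8]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=20, genome_len=3, generations=300, rate=0.1):
    random.seed(42)
    # Random genomes at the start -> "completely uncoordinated behaviour."
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # selective reproduction
        pop = [[g + random.gauss(0, rate) for g in random.choice(parents)]
               for _ in range(pop_size)]              # random mutation
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", round(fitness(best), 3))
```

A few hundred loop iterations are enough for near-optimal genomes to dominate, which is exactly the “few hundreds of generations” time scale the researchers report for their robots.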
It’s official… Humanity is SCREWED. Not quite yet…
As stated, it took these robots several hundred generations to do seemingly “simple” tasks. Humans have been at it for several thousand generations (and they still find ways of mucking things up). So it will be some time before we see a Cyberdyne series 800 model 101 walking down the street with an Uzi in each hand…
In the meantime, other scientists can use this new field of Evolutionary Robotics to further their studies…
NY Times reporter John Markoff expresses the concerns of some scientists who want to slow or stop research into robotic autonomy, fearing that loss of human control may lead to a “robot revolt.”
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
Earlier this year (in February) a group of scientists from the Association for the Advancement of Artificial Intelligence met at California’s Asilomar conference grounds to discuss issues that might arise from human-level artificial intelligence, aka “The Singularity,” and the loss of human control of cybernetic technologies. A report from the conference will be released later this year… we hope. Topics included the possible effects of a “robotic takeover” leading to massive job loss, legal and ethical problems in dealing with human-like AIs, and maybe some plans in case a HAL, SHODAN, or Skynet should go online.
The Singularity Time Table. Depending on who you ask, the Singularity will definitely appear before 2050, and possibly as soon as 2020. Even so, that may be later than we think, as scientists say they can create a working human brain in 10 years. More recently, Chinese scientists have reportedly been able to grow mice from skin cells. It shouldn’t be too hard to imagine human clones before long, and the possibilities of the Singularity. But just as another meeting at Asilomar dealt with genetics in the mid-70s, this conference deals with cybernetics. Specifically, how to proceed with AI research that will benefit humanity and eliminate the possibilities of a HAL/SHODAN/Skynet.
The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?
Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.
Robots that think, move like humans and fight our wars–Real Terminators–may now be possible. At leading universities and covert government labs, robots are now being developed in man’s image; cyborgs with superhuman strength, machines that may eventually be able to make decisions, even kill on their own. But will these very robots designed to protect us ultimately turn on their masters?
Rise of the Robots. When I first heard about this episode of That’s Impossible while watching Ice Road Truckers, I just had to watch to see where we are with military robotics… and where we may be headed. Real Terminators is the second episode of the That’s Impossible series, which includes other topics like invisibility, immortality, and “weather warfare.” I managed to catch the Tuesday (July 14) night premiere of Real Terminators, and they repeated the episode early Wednesday morning. History won’t rebroadcast Real Terminators until Saturday, July 25 @ 3pm, so make certain to have your TiVos programmed to record it if you can’t watch it on time, or there’s always the Torrent route.
Real Terminators shows how robot combat has evolved to its near-current state, and what other robot technologies and breakthroughs can affect what the battlefield mechs will be like. Hint: It won’t be like BattleBots or Robot Wars.
You might think that this is a scale model of a WWI-era tank, but this little bugger is the father of all battlefield robots. Click the image to see the Wikipedia article about it.
Humble beginnings. Battlefield robots actually got their start in WWII, thanks to Nazi Germany. They used a remote controlled tank-bot called the Goliath tracked mine, which was driven to its target and detonated. It was considered a failure due to the control cables being easily cut or damaged and the vehicle itself being too lightly armored, but the Goliath has since become something of an inspiration to future war-bots… though it would take some sixty years after the first tracked mines were produced before battlefield robots would begin to emerge with the SWORDS robots. But robots were already in the air, thanks to the Predator unmanned aerial vehicle.
The Next Big Step is to get the drones out of the sky and back on the ground, but without the tank treads or wheels being used today. Drones need their legs, and the Big Dog shows why:
Boston Dynamics’ Big Dog robot is intended to be a pack-animal, but some can’t stop thinking about weaponizing it.
Already, Boston Dynamics is developing a two-legged robot, the PETMAN, to better navigate human environments.
Organic components. DARPA is not looking at just a mechanized future for the military. They intend to keep a human element to the machines through the use of robotic exoskeletons:
Other pieces of the puzzle. In order to make terminators possible, one major breakthrough must happen: Artificial Intelligence. Future robots will need highly developed (almost human-like) AI to do seemingly simple things like identify targets and allies, use strategies, and know when to fall back for repairs and recharging/refueling. Robots will also need to show “instincts,” like gauging a person’s emotional state to recognize when he or she might attack. Those “instincts” may come courtesy of a brain scanner. This would allow a robot to decide on its own whether to kill, without a human operator needing to pull the trigger.
But there’s more being considered. Robots will need to recharge or refuel, a need that may be alleviated by the EATR project, which will allow robots to consume organic matter for energy. They will also need repair and construction/replication capabilities, and nanotechnology is being considered to fill those needs.
Now consider what can happen with all the pieces in place: a robot soldier, hundreds of times stronger than a human, with an appetite for organics, programmed to kill, and able to repair itself.
Friend invitation extended to John Connor. Depending on how you feel about robots, this is either a major step forward or a sign of the apocalypse. A month-long experiment is going to be run on Facebook where a robot, complete with a profile, will be used to see if humans are willing to make friends with the machine. The experiment is being run by Nikolaos Mavridis and the United Arab Emirates University’s Interactive Robots and Media Laboratory (IRML), which explains the bot’s name and appearance. Details can be found on the IRML website and a paper is available (PDF) from arXiv.org.
Technical difficulties. Of course, to make friends with Ibn, you need to be registered with Facebook, then find the right Ibn Sina to befriend. I’ve made an attempt to register to see if this is for real, but something is fubar with their registration system. Maybe others are trying to make friends with the robot as well. I’ll keep trying and let you know if it ends well, or if we give birth to Skynet.
Two stories this week show how the merging of science and technology is bringing the singularity closer to reality, as two automated research projects come up with the identical discovery: humans are obsolete.
Just kidding! Here’s what they DID discover:
Physics discovered by computer program.
(Wired) Cornell University researchers have created a program that can find relationships in large amounts of data. It sounds like simple data processing, but it is not:
The Cornell program came up with a formula describing the physics of a double pendulum. It did in a day what some of the most brilliant minds in physics took centuries to do. AND without any knowledge of physics or geometry!
This is only an example of what the researchers are hoping to do with such programs: To help human scientists analyze infinitely large data sets.
“One of the biggest problems in science today is moving forward and finding the underlying principles in areas where there is lots and lots of data, but there’s a theoretical gap. We don’t know how things work,” said Hod Lipson, the Cornell University computational researcher who co-wrote the program. “I think this is going to be an important tool.”
Condensing rules from raw data has long been considered the province of human intuition, not machine intelligence. It could foreshadow an age in which scientists and programs work as equals to decipher datasets too complex for human analysis.
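The Cornell system searched free-form expressions with an evolutionary method far beyond anything shown here, but the core idea, scoring candidate formulas against raw data and keeping the best fit, can be sketched in a drastically simplified toy (the candidate list and data are made up for illustration):

```python
import math

# Toy "formula discovery": score a handful of candidate expressions against
# data and keep the best fit. The real Cornell program searched a vastly
# larger expression space evolutionarily; this only shows the scoring idea.

data = [(x / 10, math.sin(x / 10)) for x in range(60)]   # hidden law: y = sin(x)

candidates = {
    "y = x":      lambda x: x,
    "y = x**2":   lambda x: x ** 2,
    "y = sin(x)": lambda x: math.sin(x),
    "y = cos(x)": lambda x: math.cos(x),
}

def error(f):
    """Sum of squared residuals between a candidate formula and the data."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: error(candidates[name]))
print(best)  # → y = sin(x)
```

With no knowledge of trigonometry built in, the loop still “rediscovers” the underlying law simply because it fits the data best; scale the candidate space up enormously and you have the shape of the Cornell approach.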
Then again, if what’s going on in the UK is any indication, the human factor may be taken out of science all together.
Dr. Adam-Bot makes discoveries with yeast
“Normal robots just do what you tell them, but ADAM is different, because it can hypothesize and try to solve a problem itself.” - Ross King, of Aberystwyth University in Wales, U.K.
(Nat-Geo)(Science Daily) (and practically everywhere by now) What has to be the first ever “robot scientist,” Adam, has discovered new knowledge about baker’s yeast. Not exactly earth-shaking discoveries, but the fact that the totally automated Adam made these discoveries by itself is big news.
(From Nat-Geo) First ADAM was given a crash course in biology, including everything that is already known about baker’s yeast.
ADAM quickly set to work, formulating and testing 20 different hypotheses. The robot eventually identified the genes that code for enzymes involved in yeast metabolism—a scientific first for a robot.
Using independent experiments, King and his colleagues were able to verify ADAM’s results.
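The articles don’t detail ADAM’s internals, but the hypothesize-and-test loop it automates can be caricatured like this. The gene names and the “experiment” are made up for illustration; the real robot runs physical yeast cultures, not a lookup table:

```python
# Caricature of a "robot scientist" loop: propose hypotheses, run an
# experiment for each, keep the confirmed ones. The ground truth table
# stands in for real lab work; gene names are invented for this sketch.

ground_truth = {"geneA": False, "geneB": True, "geneC": False}

def run_experiment(gene):
    """Stand-in for growing a knockout yeast strain and measuring growth."""
    return ground_truth[gene]

hypotheses = [f"{gene} codes for a metabolic enzyme" for gene in ground_truth]
confirmed = [h for h, gene in zip(hypotheses, ground_truth)
             if run_experiment(gene)]
print(confirmed)  # → ['geneB codes for a metabolic enzyme']
```

ADAM’s 20 hypotheses about yeast metabolism were winnowed in essentially this way, with the added (and crucial) step of King’s team independently re-verifying the survivors.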
King’s reason for creating Adam is to help scientists in their research:
(From Science Daily) “Because biological organisms are so complex it is important that the details of biological experiments are recorded in great detail. This is difficult and irksome for human scientists, but easy for Robot Scientists.”
King already has plans for another robot scientist, Eve, that will be devoted to researching drugs for tropical diseases. As for possibly replacing human scientists outright: “While robots are better at coordinating thousands of experiments, humans are better at seeing the big picture and planning the overall experiment.”