Bloggers at ZDNet report that RFID-chip maker VeriChip is planning to push its implantable spy chips directly to the South Florida public in a campaign blitz targeting seniors beginning April 28. VeriChip’s idea is to link the chip to the person’s medical records. Larry Dignan believes this is a good idea, since it would give patients easier access to their personal medical records. On the other hand, Dana Blankenhorn expresses the usual concerns about their use, especially for seniors without Alzheimer’s:
* How much memory on this chip? Enough to get my full health record on it? How about my allergies and basic condition?
* How difficult is it to write to the chip? What about its security?
* How common will readers be?
* Who controls what gets written on the chip? Can it be hacked? Conversely, can it be accessed when needed?
* Can the chip be cloned? (Clone me, Doctor Memory!)
Blankenhorn asks some other good questions, although there are some long-running controversies he doesn’t address:
1. Is this really the mark of the beast?
2. Could the government use it to track and trap us?
3. What if the chip insertion site gets infected? What if the chip moves?
4. Could the VeriChip cause cancer?
5. Is this just a scam by former HHS Secretary Tommy Thompson?
There’s a sucker born every minute. VeriChip is counting on that during its ad blitz, betting that convenience will override paranoia about the chips (particularly the cancer risk). And if that blitz succeeds? From Blankenhorn:
If the present marketing effort succeeds the company is bound to push for chipping everyone, given the chance of violence or accidents in our society.
Instant surveillance grid, with everyone under the microscope.
Current chips hold nothing more than a number that must be tied to your personal records in some corporate or government database. The next chips may have onboard memory, possibly even recording capability, to store your (deviant) thoughts for use against you, as a way to resurrect or clone you if you die (an Altered Carbon reference), or for someone to make a Final Cut of your life.
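Blankenhorn’s point that today’s chips carry only a number is worth making concrete: the implant is just a lookup key, and every hard question (who can read records, who can write them, whether the database gets hacked) lives in the backend, not in the chip. A toy sketch in Python, with a hypothetical ID format and record store:

```python
# Hypothetical backend record store keyed by chip ID. A real system
# would sit behind an authenticated API; the chip itself stores and
# secures none of this data.
MEDICAL_RECORDS = {
    "0x1A2B3C4D": {"name": "J. Doe", "allergies": ["penicillin"]},
}

def lookup(chip_id):
    """Resolve a scanned chip ID to a record, or None if unknown."""
    return MEDICAL_RECORDS.get(chip_id)
```

Whoever controls that dictionary controls your record; cloning the chip is just copying the key.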
Right now, the jury is out on whether the campaign can con enough geezers into getting implanted. Hopefully not.
Eriksson, a researcher at the Swedish security firm Bitsec, uses reverse-engineering tools to find remotely exploitable security holes in hacking software. In particular, he targets the client-side applications intruders use to control Trojan horses from afar, finding vulnerabilities that would let him upload his own rogue software to the intruders’ machines.
He demoed the technique publicly for the first time at the RSA conference Friday.
“Most malware authors are not the most careful programmers,” Eriksson said. “They may be good, but they are not the most careful about security.”
In other words, he uses hacker tactics to hack and pwn hackers’ systems. Confused yet?
How he RAT-ed the rat: Eriksson used a software package called a remote administration tool, or RAT, along with some standard hacking utilities, to mount his counterstrike:
Eriksson first attempted the technique in 2006 with Bifrost 1.1, a piece of free hackware released publicly in 2005. Like many so-called remote administration tools, or RATs, the package includes a server component that turns a compromised machine into a marionette, and a convenient GUI client that the hacker runs on his own computer to pull the hacked PC’s strings.
Using traditional software attack tools, Eriksson first figured out how to make the GUI software crash by sending it random commands, and then found a heap overflow bug that allowed him to install his own software on the hacker’s machine.
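Eriksson’s first step, crash-hunting by sending random commands, is classic dumb fuzzing. A minimal Python sketch of the idea (the host, port, and payload sizes here are hypothetical; the real Bifrost client speaks its own binary protocol, which a serious fuzzer would model rather than throwing raw noise at it):

```python
import random
import socket

def random_payload(max_len=512, seed=None):
    """Build one random blob of 1..max_len bytes to throw at the target."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len + 1)))

def fuzz_once(host, port, payload, timeout=5):
    """Deliver one payload to the listening GUI client and disconnect.

    A payload that crashes the client is a candidate input for closer
    analysis -- e.g. finding the heap overflow under a debugger and
    turning it into code execution on the attacker's own machine.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
```

In practice one would log each payload before sending it, so the crashing input can be replayed and minimized.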
Eriksson believes his techniques can even be used to fubar botnets as well. “If there is a vulnerability, it is still game over for the hacker,” Eriksson said (in the Wired report).
A couple of stories from Wired may have the sheeple doing a Chicken Little.
Military Robot Turns Its Gun on US Soldiers
Source: Popular Mechanics
An armed military robot, known as SWORDS, was reportedly pulled off Iraqi battlefields practically at the last second:
Last year, three armed ground bots were deployed to Iraq. But the remote-operated SWORDS units were almost immediately pulled off the battlefield, before firing a single shot at the enemy. Here at the conference, the Army’s Program Executive Officer for Ground Forces, Kevin Fahey, was asked what happened to SWORDS. After all, no specific reason for the 11th-hour withdrawal ever came from the military or its contractors at Foster-Miller. Fahey’s answer was vague, but he confirmed that the robots never opened fire when they weren’t supposed to. His understanding is that “the gun started moving when it was not intended to move.” In other words, the SWORDS swung around in the wrong direction, and the plug got pulled fast. No humans were hurt, but as Fahey pointed out, “once you’ve done something that’s really bad, it can take 10 or 20 years to try it again.”
Translation: the bot started moving, and the technophobes freaked. No reason has been given for why the bot moved when it shouldn’t have. Wired likened the SWORDS situation to the RoboCop scene where a demonstration of ED-209 goes fubar and a suit is mistakenly gunned down.
Any insurgent who may have hacked the SWORDS frequency must be smiling like a shark.
UPDATE 15-May-2008: This blog post from Wired reports that the SWORDS battlebots are still in Iraq, just not doing what they were supposed to do:
The first three armed ground robots deployed onto a battlefield are stuck behind sandbags and are not patrolling Iraqi streets as its inventors envisioned, said a senior executive with its manufacturer, Foster-Miller Inc.
The reason for the bots’ malfunctioning is even easier to explain in two words: shoddy workmanship.
There were three cases of uncommanded movements, but all three were prior to the 2006 safety certification, she says. “One case involved a loose wire. So now there is redundant wiring on every circuit. One involved a solder, a connection that broke. Everything now is double-soldered.” The third case was a test where the robot was put on a 45-degree hill and left to run for two and a half hours. “When the motor started to overheat, the robot shut the motor off, and that caused the robot to slide back down the incline,” she says. “Those are the three uncommanded movements.”
Once the bugs are worked out, the bots may eventually see battlefield action. Then we can say humanity is screwed.
Industrial Control Systems Killed Once and Will Again, Experts Warn
In 1999, three people died and eight were injured when gasoline that had spilled into a creek from a ruptured pipeline caught fire. This past Wednesday, computer-security “experts” speaking at the RSA Conference in San Francisco claimed that the incident was the result of “a control-system incident”:
Wednesday, computer-security experts who recently re-examined the Bellingham incident called its victims the first verified human casualties of a control-system computer incident. They argue that government cybersecurity standards currently under debate might have prevented the tragedy.
But the factor that intrigues (Joe) Weiss and fellow researcher Marshall Abrams, a scientist at MITRE, is a still largely unexplained computer failure that began less than 30 minutes before the accident and paralyzed the central control room operating the pipeline, preventing workers from releasing pressure in the line before it hemorrhaged.
“The NTSB concluded that if the SCADA system computers had remained responsive to the commands of the Olympic controllers, the controller operating the pipeline probably would have been able to initiate actions that would have prevented the pressure increase that ruptured the pipeline,” reads the NIST report.
“These are the first fatalities from a control-system cyberevent that I can document, and for a fact say that this really occurred,” Weiss said in an earlier interview with Wired.com.
The board found no evidence of a computer attack from the outside. But Weiss, an outspoken evangelist for tighter control-system security standards, said he’s suspicious of the NTSB’s finding that the computer operator was at fault.
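The failure mode described above, rising line pressure while the control room is unresponsive, is exactly what a local fail-safe interlock is supposed to catch. A minimal sketch of such a check (the threshold, timeout, and function names are hypothetical illustrations, not taken from the NTSB report):

```python
import time

RELIEF_THRESHOLD_PSI = 1400  # hypothetical trip pressure
COMMAND_TIMEOUT_S = 60       # hypothetical staleness limit for SCADA acks

def should_open_relief(pressure_psi, last_command_ack, now=None):
    """Fail-safe check run locally at the pipeline station.

    Opens the relief valve when pressure is dangerously high AND the
    central SCADA system has stopped acknowledging commands, so a hung
    control room cannot block pressure relief.
    """
    now = time.time() if now is None else now
    control_unresponsive = (now - last_command_ack) > COMMAND_TIMEOUT_S
    return pressure_psi >= RELIEF_THRESHOLD_PSI and control_unresponsive
```

The point of putting the check at the station rather than in the control room is that it keeps working precisely when the SCADA computers do not.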
While SCADA system security has presumably improved since then (we hope so, at least), Mr. Weiss’s comments sound too much like a sales pitch for NIST 800-53, the government “security standard” he hopes infrastructure providers will adopt, especially after the CIA’s claim earlier this year that hackers had attacked foreign utilities. Keep pouring that Kool-Aid, Mr. Weiss.
London-based Landmine Action, which has pressured nations to abandon land mines and cluster bombs, is now looking to terminate Terminators before they ever get their AIs in order.
Failure detected in the tin-foil-to-brain interface. Landmine Action seems to have the right idea, but this may be a sign of paranoid delusion. Land mines and cluster bombs are existing technologies in active use; Terminator robots do not exist… yet. There are armed robots out there (human-controlled), and there are (somewhat) autonomous robots, but there are no armed autonomous robots.
The Pentagon has not only never advocated taking the man-out-the-loop of targeting decisions for drones or robots, its current policies and procedures would prohibit such a move (some might argue that international law already prohibits autonomous armed drones).
My first question, and what prompted the original post, was: Where and when has the Pentagon advocated handing over actual weapons release decisions to an artificial life form? The Predator, SWORDS, and other robotic systems may have a few, limited capabilities to operate autonomously. But the decision to shoot is currently made, quite pointedly, by a human operator. If there has been a sea change in Pentagon policy, I would like someone to point out a reliable source noting this change. (If there is another country that is taking the man completely out of the loop, again, I’d like to see evidence for this.)
My second question is: how do we define a robot? DANGER ROOM has written about “killer robots” a number of times, but these are not Terminators, since, again, there is a man in the loop. As several commenters pointed out, a Roomba is an autonomous system, so all it takes is a Roomba with a bomb to create a “killer robot.” In other words, the capability exists for robots to kill without human intervention. That’s true, but that capability has existed for decades. As another commenter noted: a heat-seeking missile could, by some definitions, be regarded as a robot (particularly if, as the original post noted, we equate land mines with robots). Okay, if a landmine is a robot, then isn’t every guided missile, weapon and bomb a robot (and if so, should we ban them all)?
If heat-seekers qualify as robots, then Landmine Action’s mission has failed miserably… even before it got started.
Perhaps the bigger problem is this: what if autonomous robots fall into terrorist hands? This Reuters article asks that question outright. With the needed components becoming cheaper and cheaper, a home-brew Terminator army may not be far off. That may be the true problem on which Landmine Action needs to intervene.
Then again, if Terminators do become reality, who do you think they would target first?