
Report Details Government’s Ability to Analyze Massive Aerial Surveillance Video Streams

By Jay Stanley | ACLU | April 5, 2013

Yesterday I wrote about Dayton, Ohio’s plan for an aerial surveillance system similar to ARGUS, the “nightmare scenario” wide-area surveillance technology. Actually, ARGUS is just the most advanced of a number of such “persistent wide-area surveillance” systems in existence or under development. They include Constant Hawk, Angel Fire, Kestrel (used on blimps in Afghanistan), and Gorgon Stare.

One of the problems created by these systems—which have heretofore been used primarily in war zones—is that they tend to generate a deluge of video footage. A 2010 article says that American UAVs in Iraq and Afghanistan produced 24 years’ worth of video in 2009, and that that number was expected to increase 30-fold (which would be 720 years’ worth) in 2011. Who knows what that’s up to this year, or where it will be by, say, 2025. The human beings who operate these systems can’t possibly analyze all that footage.
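
For a sense of scale, here is the arithmetic behind that projection; the per-analyst viewing rate at the end is my own illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope arithmetic for the video deluge described above.
HOURS_PER_YEAR = 365 * 24  # 8,760

video_2009_years = 24           # "24 years' worth of video" in 2009
growth_factor = 30              # projected 30-fold increase by 2011
video_2011_years = video_2009_years * growth_factor  # 720 years' worth

video_2011_hours = video_2011_years * HOURS_PER_YEAR
print(f"Projected 2011 volume: {video_2011_hours:,} hours")  # 6,307,200

# Illustrative assumption: one analyst can watch ~2,000 hours per year
# (roughly a full-time job of doing nothing but watching footage).
analysts_needed = video_2011_hours / 2_000
print(f"Analysts needed just to watch it all once: ~{analysts_needed:,.0f}")
# -> ~3,154 full-time analysts per year of collection
```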

In an attempt to solve this problem, Lawrence Livermore Labs has created a system for the military called “Persistics.” It can be used in conjunction with drone (or manned) camera systems such as ARGUS to help manage the vast oceans of video data that are now being generated. The system is

designed to help the Department of Defense and other agencies monitor tens of square kilometers of terrain from the skies, with sufficiently high resolution for tracking people and vehicles for many hours at a time.

That’s from a May 2011 report that I recently came across with the faintly ominous title “From Video to Knowledge.” Produced by Livermore Labs, it contains a lot of interesting detail about Persistics and the problems and solutions involved in massive aerial video surveillance.

The Persistics system consists of algorithms that “analyze the streaming video content to automatically extract items of interest.”

Its analysis algorithms permit surveillance systems to “stare” at key people, vehicles, locations, and events for hours and even days at a time while automatically searching with unsurpassed detail for anomalies or preselected targets.

With Persistics, the report boasts, “analysts can determine the relationships between vehicles, people, buildings, and events.” Among the capabilities touted in the report are:

  • “Seamless stitching” together of images from multiple cameras to create “a virtual large-format camera.”
  • Stabilizing video (“essential for accurate and high-resolution object identification and tracking”).
  • Eliminating parallax (the difference in how an object appears when viewed from slightly different angles).
  • Differentiating moving objects from the background (a standard computer-vision step, sketched in code after this list).
  • The ability to automatically follow moving objects such as vehicles.
  • Creating a “heat map” representation of traffic density in order to “automatically discern if the traffic pattern changes.”
  • Comparing images taken at different times and automatically detecting any changes that have taken place.
  • Super-high “1,000-times” video compression.
  • The ability to provide all the locations a particular vehicle was spotted within a given time frame.
  • The ability to provide all the vehicles that were spotted at a particular location within a given time frame.
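
None of Persistics’ actual algorithms are public, but several items on this list, differentiating and following moving objects in particular, correspond to standard computer-vision building blocks. Here is a minimal sketch of background subtraction and moving-object detection using OpenCV; the video file name and all thresholds are illustrative assumptions, not details from the report:

```python
# A minimal sketch of two items from the list above: differentiating
# moving objects from the background and boxing them frame by frame.
# "aerial.mp4" and every threshold here are illustrative assumptions.
import cv2

cap = cv2.VideoCapture("aerial.mp4")
# Gaussian-mixture background model: pixels that deviate from the learned
# background become foreground (i.e., candidate moving objects).
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise

    # Each sufficiently large foreground blob gets a bounding box.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x API
    for c in contours:
        if cv2.contourArea(c) > 50:  # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A real wide-area system would need stabilization, parallax removal, and multi-camera stitching before a step like this, which is presumably why those capabilities lead the list above.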

Technologically, according to the report, the Persistics program relies heavily on the explosion in the power of consumer Graphics Processing Units (GPUs) used in video games and the like.

The report also says that the system “is being further enhanced” to work with ARGUS, and includes new details about that system:

Persistics can simultaneously and continuously detect and track the motion of thousands of targets over the ARGUS-IS coverage area of 100 square kilometers. ARGUS-IS can generate several terabytes of data per minute, hundreds of times greater than previous-generation sensors.

Previous reports said that ARGUS could cover 15 square miles; this report cites 100 square kilometers, which is about 38.6 square miles. (I suppose we should expect Moore’s Law-like expansion in the capabilities of these systems.)
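
For reference, here is the unit conversion, plus a rough sanity check on the “several terabytes per minute” figure; the frame rate and bytes-per-pixel values below are my assumptions based on public descriptions of the 1.8-gigapixel ARGUS-IS sensor, not numbers from the report:

```python
# Unit conversion behind the coverage figures quoted above.
SQ_MILES_PER_SQ_KM = 0.386102
print(f"100 km^2 = {100 * SQ_MILES_PER_SQ_KM:.1f} sq mi")  # ~38.6

# Rough sanity check on "several terabytes of data per minute".
# Assumptions (not from the report): 1.8 gigapixels per frame,
# ~12 frames/second, ~2 bytes per pixel before compression.
pixels_per_frame = 1.8e9
fps = 12
bytes_per_pixel = 2
raw_bytes_per_min = pixels_per_frame * fps * bytes_per_pixel * 60
print(f"Raw data rate: ~{raw_bytes_per_min / 1e12:.1f} TB/minute")
# -> ~2.6 TB/minute, consistent with "several terabytes per minute"
```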

Of course, the system is designed to store and retrieve all the records and data about everything that it surveils:

Persistics supports forensic analyses. Should an event such as a terrorist attack occur, the archival imagery of the public space could be reviewed to determine important details such as the moment a bomb was placed or when a suspect cased the targeted area. With sufficiently high-resolution imagery, a law-enforcement or military user could one day zoom in on an individual face in a heavily populated urban environment, thus identifying the attacker.

As with every privacy-invading technology designed and/or sold as helping foil terrorists, we have to wonder how long it will be before it’s applied to tracking peace activists.

Future work on Persistics is focused on the kind of behavioral analytics that have been discussed in the context of programs such as “Trapwire.” Livermore scientists, according to the report, are now working on automated methods for identifying “patterns of behavior” that could indicate “deviations from normal social and cultural patterns” and “networks of subversive activity.”

Also under development are efforts to allow the three-dimensional viewing of targets, as well as “methods to overlay multiple sensor inputs—including infrared, radar, and visual data—and then merge data to obtain a multilayered assessment.”

Of course, much of this is unobjectionable from a domestic civil liberties point of view when it’s used as originally intended: on foreign battlefields. The problem comes when the government brings the technology home and turns it inward upon the American people. In fact, at the close of the report, Livermore contemplates exactly that:

Unmanned aircraft have demonstrated their ISR [intelligence, surveillance, and reconnaissance] value for years in Afghanistan and Iraq. As U.S. soldiers return home, the role of overhead video imagery aided by Persistics technology is expected to increase. Persistics could also support missions at home, such as monitoring security at U.S. borders or guarding ports and energy production facilities. Clearly, with Persistics, video means knowledge—and strengthened national security.

Among the federal agencies most interested in the technology, the report says, is DHS.


Drone ‘Nightmare Scenario’ Now Has A Name: ARGUS

By Jay Stanley | ACLU | February 21, 2013

The PBS series NOVA, “Rise of the Drones,” recently aired a segment detailing the capabilities of a powerful aerial surveillance system known as ARGUS-IS, which is basically a super-high-resolution, 1.8-gigapixel camera that can be mounted on a drone. As demonstrated in this clip, the system is capable of high-resolution monitoring and recording of an entire city. (The clip was written about in DefenseTech and in Slate.)

In the clip, the developer explains how the technology (which he also refers to with the apt name “Wide Area Persistent Stare”) is “equivalent to having up to a hundred Predators look at an area the size of a medium-sized city at once.”

ARGUS produces a high-resolution video image that covers 15 square miles. It’s all streamed to the ground and stored, and operators can zoom in on any small area and watch the footage of that spot. Essentially, it is an animated, aerial version of the gigapixel cameras that got some attention for super-high-resolution photographs created at Obama’s first inauguration and at a Vancouver Canucks fan gathering.

At first I didn’t think too much about this video because it seemed to be an utterly expected continuation of existing trends in camera power. But since it was brought to my attention, this technology keeps coming back up in my conversations with colleagues and in my thoughts. I think that’s because it is such a concrete embodiment of the “nightmare scenario” for drones, or at least several core elements of it.

First, it’s the culmination of the trend toward ever-more-pervasive surveillance cameras in American life. We’ve been objecting to that trend for years, and many of our public spaces are now under 24/7 video surveillance—often by cameras owned and operated by the police. But even in our most pessimistic moments, I don’t think we imagined that every street, empty lot, garden, and field would be subject to video monitoring anytime soon. Yet that is precisely what this technology could enable. We’ve speculated about self-organizing swarms of drones being used to blanket entire cities with surveillance, but this technology makes it clear that nothing so complicated is required.

Second, and more significantly to me, this technology also makes real a key threat that drones pose to privacy that we’ve talked about: the ability to do location tracking. The video shows cars and pedestrians near Quantico, Virginia, automatically tagged with colored boxes, which follow them as they move around. As the technology’s developer told NOVA,

Everything that is a moving object is being automatically tracked. The colored boxes represent that the computer has recognized the moving objects. You can see individuals crossing the street, you can see individuals walking in parking lots.

The surveillance potential of such a tracking algorithm attached to such powerful cameras is worth pausing to think about. To identify someone, there’s no need for face or license-plate recognition (which may be impractical from above anyhow), cell phone tracking, gait recognition, or what have you. Even knowing where a little green square starts and finishes its day can reveal a lot, because it turns out that even relatively rough location information about a person will often identify them uniquely. For example, according to this study, just knowing the zip code (actually the census tract, which is roughly equivalent) of where you work and where you live will uniquely identify 5% of the population, and will place half of Americans in a group of 21 people or fewer. If you know the “census blocks” where somebody works and lives (an area roughly the size of a city block, but much larger in rural areas), the accuracy is much higher, with at least half the population being uniquely identified.
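
The re-identification logic in that study is easy to see with a toy computation: reduce each person to a (home, work) pair and count how many people share each pair; anyone whose pair is unique is identified. A sketch, with population data invented purely for illustration:

```python
# Toy demonstration of re-identification from coarse location pairs.
# The population data here is invented purely for illustration.
from collections import Counter

# Each person reduced to a (home_tract, work_tract) pair
population = [
    ("tract_A", "tract_X"),
    ("tract_A", "tract_X"),
    ("tract_A", "tract_Y"),
    ("tract_B", "tract_X"),
    ("tract_B", "tract_Z"),  # unique pair -> uniquely identifiable
]

counts = Counter(population)

for home, work in population:
    size = counts[(home, work)]  # the "anonymity set" sharing this pair
    status = "UNIQUE" if size == 1 else f"one of {size}"
    print(f"home={home}, work={work}: {status}")

unique_frac = sum(1 for p in population if counts[p] == 1) / len(population)
print(f"Uniquely identified: {unique_frac:.0%} of this toy population")
```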

However, ARGUS-type tracking could be used to get more precise data than that—in many cases, to determine a vehicle’s home address, which pretty much reveals who you are if you’re in a single-family home, and narrows it down pretty well even if you’re in a large apartment building. (Academic papers have been written on inferring home address from location data sets.) Add work address and I expect that would nail virtually everybody. And of course lodged in the data set would be not just where a particular vehicle starts and finishes its day, but all the places it stopped in between—potentially revealing, as we so often point out, an array of information about a person such as their political, religious, and sexual activities.
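
Those papers typically rely on a simple heuristic: a track’s “home” is wherever it dwells overnight. A minimal sketch of that idea; the data format, addresses, and overnight window are all illustrative assumptions:

```python
# Minimal sketch of inferring a likely home location from a location track:
# take the place where the subject is most often observed overnight.
# The data format and the "overnight" window are illustrative assumptions.
from collections import Counter
from datetime import datetime

# (timestamp, location) observations for one tracked vehicle or person
track = [
    (datetime(2013, 2, 21, 2, 0), "415 Elm St"),
    (datetime(2013, 2, 21, 9, 0), "Office Park"),
    (datetime(2013, 2, 21, 23, 30), "415 Elm St"),
    (datetime(2013, 2, 22, 3, 0), "415 Elm St"),
    (datetime(2013, 2, 22, 13, 0), "Shopping Mall"),
]

def likely_home(track, night_start=22, night_end=6):
    """Most frequent location observed between night_start and night_end."""
    overnight = Counter(
        loc for ts, loc in track
        if ts.hour >= night_start or ts.hour < night_end
    )
    return overnight.most_common(1)[0][0] if overnight else None

print(likely_home(track))  # -> "415 Elm St"
```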

True, such tracking using ARGUS would be disrupted whenever a subject disappears from aerial view. For example, pedestrians who travel by subway or bus or walk under foliage, or vehicles entering tunnels, would be harder to track. But even there, datamining large data sets collected over time could probably reveal a lot about people’s daily patterns, and I would bet it could eventually be used to identify a surprisingly large number of them. I expect that ARGUS would be used (if it’s not already) to generate a database consisting of location tracks of moving vehicles or pedestrians beginning in one place and ending in another. Think of them as little strings on a map. Some of these strings would stretch from a person’s home to their work, with stops in between, while others might be fragments, interrupted by a tunnel or other obstruction. But even the fragments, once the dimension of time is added to the equation, could probably be correlated together.
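
Correlating fragments is conceptually simple once time is included: a string that ends at a tunnel mouth at 9:00 and a string that begins at the tunnel exit at 9:03 are strong candidates for the same vehicle. A toy sketch of that matching logic, with all coordinates, times, and thresholds invented for illustration:

```python
# Toy sketch: correlating track fragments by space-time proximity.
# A fragment that ends where and roughly when another begins (e.g. at the
# two ends of a tunnel) is a candidate for the same vehicle.
# Coordinates, times, and thresholds are all invented for illustration.
import math

# Each entry: (fragment_id, (x_km, y_km), time_in_minutes)
fragment_ends = [("frag1", (0.0, 0.0), 540)]     # ends at tunnel mouth, 9:00
fragment_starts = [("frag2", (1.2, 0.1), 543),   # exits tunnel at 9:03
                   ("frag3", (8.0, 9.0), 600)]   # starts far away, 10:00

def plausible_links(ends, starts, max_km=2.0, max_minutes=10):
    """Pair fragments whose end/start points are close in space and time."""
    links = []
    for eid, (ex, ey), et in ends:
        for sid, (sx, sy), st in starts:
            dist = math.hypot(sx - ex, sy - ey)
            gap = st - et
            if 0 <= gap <= max_minutes and dist <= max_km:
                links.append((eid, sid))
    return links

print(plausible_links(fragment_ends, fragment_starts))
# -> [('frag1', 'frag2')]  frag3 is too far away and too late to match
```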

Of course low-lying clouds or fog might also interfere with aerial tracking, though imaging technologies already in existence could probably be deployed to see through them.

NOVA was not allowed to show images of the ARGUS sensor, and stated that part of the program remained classified, including whether it has yet been deployed. (Though we know it has been deployed domestically at least once, over Virginia, as shown on NOVA. I’m going to assume it has not been deployed domestically in any more routine manner.) But it is good that the Air Force allowed NOVA to see its capabilities. I’d like to think it’s because, as Americans, Air Force officials have respect for our country’s values and democratic processes and don’t want such powerful and potentially privacy-invasive tools to be created in secret. It could also be, however, because the Air Force needs private-sector help in figuring out how to analyze the oceans of data the device can collect (5,000 hours of high-definition video per day).

Either way, it’s important for the public to be aware of the kinds of technologies that are out there so that it can better decide how drones should be regulated.
