Google Admits to Collaboration With U.S. Military On Its Illegal Drone Assassination Program

In another example of Google straying from its initial “Don’t Be Evil” motto and of Silicon Valley‘s increasing coalescence with the military-intelligence industrial complex, Google’s parent company Alphabet has confirmed a previously undisclosed contract with the U.S. Department of Defense, first revealed by Gizmodo in a March 6 article. The contract is routed through ECS Federal, a Northern Virginia technology staffing company, which obscures the relationship from the public.

According to the Gizmodo report, Google has furnished TensorFlow programming kits — interfaces to its open-source machine learning framework — for the Defense Department’s new algorithmic warfare initiative, assisting with a pilot project that applies the company’s artificial intelligence technology to help identify targets in the government’s drone assassination program.

A cross-team collaboration within Google has been quietly developing deep learning technology to help drone analysts interpret the vast image data vacuumed up from the military’s fleet of 1,100 drones to better target bombing strikes against the Islamic State.
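To make concrete the kind of workflow this describes, here is a minimal, purely illustrative sketch of running an off-the-shelf TensorFlow model over individual video frames so that a reviewer sees labeled candidates rather than raw footage. The model choice, preprocessing, and synthetic frame are assumptions made for the example; nothing here reflects the actual Project Maven code or data, which are not public.

```python
# Illustrative sketch only: an off-the-shelf TensorFlow/Keras classifier
# labeling individual video frames for human review. This is NOT the
# Project Maven system; it only shows the general shape of such a pipeline.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # generic pre-trained classifier

def label_frame(frame_rgb: np.ndarray, top_k: int = 3):
    """Return the model's top-k (label, score) guesses for one RGB frame."""
    resized = tf.image.resize(frame_rgb, (224, 224))           # model input size
    batch = preprocess_input(np.expand_dims(resized, axis=0))  # scale to [-1, 1]
    preds = model.predict(batch, verbose=0)
    return [(label, float(score))
            for _, label, score in decode_predictions(preds, top=top_k)[0]]

# Synthetic frame for demonstration; real use would decode frames from video.
dummy_frame = np.random.randint(0, 255, size=(720, 1280, 3), dtype=np.uint8)
print(label_frame(dummy_frame))
```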

Though Google’s collaboration with the Defense Department has not been previously reported, it was discussed widely within the company last week when information about the pilot project was shared on an internal mailing list, according to sources who requested to remain anonymous because they were not authorized to speak publicly about the project.

The Pentagon’s race to adopt cutting-edge AI technology became public in April 2017, when then-Deputy Defense Secretary Robert Work unveiled an ambitious project called the Algorithmic Warfare Cross-Functional Team (AWCFT), code-named Project Maven. Maven’s stated mission, Work wrote in an agency-wide memo, is to “accelerate DoD’s integration of big data and machine learning” and “turn the enormous volume of data available to DoD into actionable intelligence and insights at speed.” In total, the Defense Department spent $7.4 billion on artificial intelligence-related areas in 2017, the Wall Street Journal reported, and is expected to spend even more this year, with a significant amount flowing to corporations like Alphabet (Google), Amazon, and Nvidia, whose artificial intelligence capacities reportedly surpass those of in-house Pentagon programs.

Some Google employees were outraged that the company would offer resources to the military for surveillance technology involved in drone operations, sources said, while others argued that the project raised important ethical questions about the development and use of machine learning.

Former executive chairman of Google and Alphabet, Eric Schmidt, summed up the tech industry’s concerns about collaborating with the Pentagon at a talk last fall. “There’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly,” he said. While Google says its involvement in Project Maven is not related to combat uses, the issue has still sparked concern among employees, sources said.

“The technology flags images for human review, and is for non-offensive uses only,” a Google spokesperson told Bloomberg. “Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”

The first phase of Project Maven, which incorporates multiple teams from across the Defense Department, is an effort to automate the identification and classification of objects (such as cars, buildings, or people) in footage taken by drones, taking that burden off analysts and giving them an increased ability to make informed decisions on the battlefield. Maven’s initial goal was to provide the military with advanced computer vision, enabling the automated detection and identification of objects in as many as 38 categories captured by a drone’s full-motion camera, according to the Pentagon. Maven also gives the department the ability to track individuals as they come and go from different locations.

The idea is essentially to provide a recommendation tool: the AI program quickly singles out points of interest around a given type of event or target so that drone analysts can work more efficiently.
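As a rough illustration of that recommendation step, the hedged sketch below filters whatever detections a model has produced down to high-confidence hits in categories of interest and orders them for review. The Detection fields, category names, and threshold are invented for the example and are not Maven’s actual data format.

```python
# Hedged sketch of a detection-triage "recommendation" step: keep only
# high-confidence detections in watched categories and rank them for a
# human reviewer. All field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    frame_id: int      # which video frame the detection came from
    category: str      # e.g. "vehicle", "building", "person"
    confidence: float  # model score in [0, 1]

def points_of_interest(detections, watch_categories, min_confidence=0.8):
    """Return detections worth an analyst's attention, highest score first."""
    hits = [d for d in detections
            if d.category in watch_categories and d.confidence >= min_confidence]
    return sorted(hits, key=lambda d: d.confidence, reverse=True)

# Example: only the vehicle detection clears the bar for review.
queue = points_of_interest(
    [Detection(1, "vehicle", 0.93), Detection(2, "person", 0.41)],
    watch_categories={"vehicle", "person"})
print(queue)
```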

The department announced last year that, just over six months after its launch, the initiative had been used by intelligence analysts in drone strikes against ISIS at an undisclosed location in the Middle East.

Gregory C. Allen, an adjunct fellow at the Center for a New American Security, says the initiative has a number of unusual characteristics, from its rapid development to its level of integration with contractors.

“The developers had access to the end-users very early on in the process. They recognized that [with] AI systems … you had to understand what your end-user was going to do with them,” Allen said. “The military has an awful lot of experts in analyzing drone imagery: ‘These are the parts of my job I hate, here’s what I’d like to automate.’ There was this iterative development process that was very familiar in the commercial software world, but unfamiliar in the defense world.”

“They were proud of how fast the development went, they were proud of the quality they were getting,” added Allen, co-author of “Artificial Intelligence and National Security,” a report on behalf of the U.S. Intelligence Advanced Research Projects Activity.

The Obama administration’s assessment of the drone program claimed that between 2009 and December 31, 2015, the United States killed between 2,372 and 2,581 of what it declared were terrorist “combatants” in 473 drone strikes. The civilian death count was reported as falling somewhere between 64 and 116, a tally that has been widely criticized as undercounted and incomplete, as it does not include the civilian death toll from drone attacks in Afghanistan, Iraq, and Syria. The Bureau of Investigative Journalism gives a much higher count of more than 800 civilian deaths resulting from drone strikes in Pakistan, Yemen, and Somalia during the same time period covered by the Obama administration’s tally.

Attempts to kill 41 men had resulted in the deaths of an estimated 1,147 people as of November 24, 2014 — an average of roughly 28 deaths for every intended target — and the CIA’s own leaked documents revealed that the US often does not know whom it is killing, with militant leaders accounting for only 2% of drone-related deaths. More than 80% of the people killed in drone strikes have never even been identified by name. Internal military documents show that for every one person targeted by a drone strike, nine bystanders are killed, meaning that the real death toll of the US government’s remote “targeted” assassination campaign in Yemen, Somalia, Afghanistan, Pakistan, and Iraq could potentially rise to the tens of thousands. Additionally, President Trump has vastly expanded the drone murder program and loosened its Obama-era restrictions, more than quadrupling the number of lethal strikes within a given time period.

According to the American Civil Liberties Union (ACLU):

“A program of targeted killing far from any battlefield, without charge or trial, violates the constitutional guarantee of due process. It also violates international law, under which lethal force may be used outside armed conflict zones only as a last resort.”

The US government has unilaterally claimed the right to use drones to assassinate American citizens, without due process, anywhere in the world, including within the borders of the United States. In 2011, the Obama administration assassinated Anwar al-Awlaki, a US citizen, with a Predator drone strike in Yemen, then murdered his 16-year-old American son, Abdulrahman al-Awlaki, in another drone strike two weeks later.

Google’s partnership in such nefarious and possibly unlawful operations threatens the company not only with legal sanctions around the world, but also with serious commercial repercussions. The company’s decision to proceed despite these dangers underscores the increasingly vital role of military contracts in the business operations of the major technology giants.

Speaking at a conference last year, Marine Corps Col. Drew Cukor, the head of Project Maven, declared the US in the midst of an “AI arms race,” adding, “Many of you will have noted that Eric Schmidt is calling Google an AI company now, not a data company,” according to the Wall Street Journal.

He added, “There is no ‘black box’ that delivers the AI system the government needs… Key elements have to be put together … and the only way to do that is with commercial partners alongside us.”

Facing the rapid economic rise of substantial military powers such as Russia and China, which are able to develop and deploy new technologies without the massive logistical burden of the countless wars, overseas deployments, and destabilization operations the United States is engaged in, US military planners have concluded that the only way to retain the American military advantage in future conflicts is to integrate Silicon Valley into the war machine.

The Pentagon has devised the so-called “Third Offset” strategy to defeat the “pacing threat” from China by focusing on “autonomous learning systems, human-machine collaborative decision-making, assisted human operations, advanced manned-unmanned systems operations,” and “networked autonomous weapons,” as The Economist recently explained in the cover story of an issue titled “The Next War.”

This strategy revolves around the recruitment of the US private technology sector, which remains the most developed in the world. According to The Economist, the United States “continues to dominate commercial AI funding and has more firms working in the field than any other country.”

In order to streamline the reciprocal exchange between the technology giants’ vast computational power, artificial intelligence capabilities, and massive databases of sensitive user data on the one hand, and the US military’s virtually limitless budget on the other, the Pentagon has set up a series of partnerships with Silicon Valley. In 2015, the Pentagon established a public-private funding vehicle known as the Defense Innovation Unit Experimental (DIUx), headquartered just minutes from Google’s main campus in Mountain View, California.

A year later, in 2016, the Pentagon created the Defense Innovation Board, aiming to “bring the technological innovation and best practice of Silicon Valley to the US Military.” Shortly thereafter, the board released a set of recommendations that stressed the importance of adopting artificial intelligence and machine learning, arguing that technological superiority with AI is as important as “nuclear weapons in the 1940s and with precision-guided weapons and stealth technology afterward.”

The board, which is chaired by Eric Schmidt, recommended “an exchange program and collaboration with industry and academic experts in the field.”

Last fall, Schmidt complained about the reluctance of those working in the technology sector to collaborate with the Pentagon, again bemoaning, as quoted above, the tech community’s “general concern” that the military-industrial complex would use “their stuff to kill people incorrectly.”

Lt. Gen. John N.T. “Jack” Shanahan, the director for defense intelligence overseeing Project Maven, joked at the GEOINT 2017 conference that he hoped Google would start sharing more of what it knows with the Pentagon. “On the far end of the scale, you see Google. They don’t tell us what they have, unless anyone from Google wants to whisper in my ear later,” he said.

But beyond leveraging the tech giants’ artificial intelligence capabilities for guiding missiles and selecting victims, the open secret of the Pentagon’s collaboration with Silicon Valley is that, behind the scenes, vast quantities of sensitive personal user data are likely being funneled to the Pentagon and intelligence agencies for the purposes of surveillance and targeting.

The integration of companies like Google into what had previously been known as the military-intelligence apparatus is creating a vast system of state repression previously unknown in human history. Preparing for great-power conflict requires, as the Pentagon’s recently released National Defense Strategy puts it, “the seamless integration of multiple elements of national power—diplomacy, information, economics, finance, intelligence, law enforcement, and military.”

