Aletho News

ΑΛΗΘΩΣ

Google, Big Tech and the US War Machine in the Global South

By Michael Kwet | CounterPunch | April 27, 2018

The recent Facebook and Cambridge Analytica fiasco deepened public concern about the political power and allegiances of Big Tech corporations. Soon after the story went viral, 3,100 Google employees submitted a petition to Google CEO Sundar Pichai protesting Google’s involvement in a Pentagon program called “Project Maven”.

Last week, the Tech Workers Coalition launched a petition protesting tech industry participation in developing technology for war and urging Google to break its contract with the Department of Defense (DoD). Will Pichai respond?

Google has a lot to answer for. In March 2016, then US Secretary of Defense Ash Carter tapped then Alphabet executive chairman Eric Schmidt to chair the DoD’s new Innovation Advisory Board. The board would give the Pentagon access to “the brightest technical minds focused on innovation” – culled from Silicon Valley.

More recently, details about Project Maven emerged. The project uses machine learning and deep learning to develop an AI-based computer vision solution for military drone targeting. This innovative system turns reams of visual data – obtained from surveillance drones – into “actionable intelligence at insight speed.”

Because there are many more hours of surveillance footage than a team of humans can view, most of the footage cannot be evaluated by Pentagon workers. Using AI, Project Maven steps in to make sure no footage goes unwatched. The AI analyzes drone footage to categorize, sift and identify the items the DoD is looking for – cars, people, objects and so on – and flag the sought-after items for a human to review. The project has been successful, and the Pentagon is now looking to make a “Project Maven factory”.
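To make that workflow concrete, here is a minimal, purely illustrative sketch in Python of a detect-and-flag triage loop of this kind. The detector, object labels and confidence threshold below are placeholders rather than anything drawn from the actual Maven system; the point is the structure: run a detector over each frame, keep only confident detections in the categories of interest, and queue the flagged frames for a human analyst.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str    # e.g. "car" or "person"
    score: float  # detector confidence in [0, 1]

def detect_objects(frame) -> List[Detection]:
    """Placeholder detector. A real pipeline would run a trained
    computer-vision model (e.g. an open-source object detector) here."""
    return [Detection("car", 0.91), Detection("person", 0.42)]

CATEGORIES_OF_INTEREST = {"car", "person"}  # illustrative labels only
CONFIDENCE_THRESHOLD = 0.8

def triage(frames):
    """Return only the frames (and their detections) that merit human review."""
    review_queue = []
    for index, frame in enumerate(frames):
        hits = [d for d in detect_objects(frame)
                if d.label in CATEGORIES_OF_INTEREST and d.score >= CONFIDENCE_THRESHOLD]
        if hits:
            # Flag for an analyst; nothing is acted on automatically in this sketch.
            review_queue.append((index, hits))
    return review_queue

print(triage(frames=["frame0", "frame1", "frame2"]))  # dummy frames for the sketch
```

The division of labor is the point: software narrows an unmanageable volume of footage down to a short review queue, and a human makes the final call on what is done with it.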

Reports of Google’s participation in Project Maven come amid news that it is bidding alongside Amazon, IBM and Microsoft for a $10 billion “one big cloud” servicing contract with the Pentagon. Eric Schmidt, who is no longer Google’s CEO but remains a technical advisor and board member at its parent company Alphabet, claims to recuse himself from all information about Google AI projects for the Pentagon, because he also chairs the DoD’s Innovation Advisory Board.

Schmidt’s central role in this story underscores controversy about Google’s close relationship with the US military. In 2013, Julian Assange penned The Banality of ‘Don’t Be Evil’, an essay highlighting Google’s sympathy for the US military empire and a critique of Schmidt and Jared Cohen’s co-authored book, The New Digital Age.

In 2015, Schmidt hosted Henry Kissinger for a fireside chat at Google. He introduced Kissinger as a “foremost expert on the future of the physical world, how the world really works” and stated Kissinger’s “contributions to America and the world are without question.”

For many, Henry Kissinger’s “contributions” are drenched in the blood of the Global South. Declassified documents show that during the Vietnam/Indochina War, Kissinger, then a national security advisor, transmitted Nixon’s orders to General Alexander Haig: use “anything that flies on anything that moves” in Cambodia. According to a study by Taylor Owen and Ben Kiernan (Director of Genocide Studies at Yale University), the United States dropped more tons of bombs on Cambodia than all of the Allies during World War II combined. Cambodia, they conclude, may be the most bombed country in history. By all reason, Kissinger should be tried for genocide.

Carpet bombing Cambodia is just one of many crimes carried out by Dr. Kissinger. During his time in government, he bolstered “moderate” white settler-colonial forces in Southern Africa to subvert the black liberation struggle for independence and self-determination. The US deemed Nelson Mandela, the African National Congress and other, less-recognized black liberation groups “terrorist” and “communist” threats to US interests. The apartheid regime subjugated the black majority not only inside South Africa but also in brutal wars across its borders, in countries like Angola and Mozambique. More than 500,000 Africans died in Angola alone.

US corporations profited from business in the region, and provided white supremacists the arms, vehicles, energy resources, financial support and computer technology used to systematically oppress black people. IBM was a primary culprit, supplying the apartheid state with the bulk of computers used to denationalize the black African population and administer the state, banks, police, intelligence and military forces.

On April 6, 2018, Kissinger welcomed one of today’s new tech leaders, Eric Schmidt, to keynote the annual Kissinger Conference at Yale University. This year’s theme was Understanding Cyberwarfare and Artificial Intelligence. After praising the ROTC and Ash Carter (both in attendance), Schmidt told the audience it is a “tremendous honor to be on the same stage as Dr. Kissinger, and we all admire him for all the reasons we all know.” In his speech, he spoke of how the US must develop AI to defend against today’s familiar adversaries: the “nasty” North Koreans, the Russians, the Chinese. A couple of Yale students were kicked out for protesting.

In decades past, human rights advocates famously challenged the development of technology for racial capitalism. Activists, including students and workers, pressured IBM, General Motors and other corporations to stop aiding and abetting apartheid and war.

Today, a new wave of technology is being tapped by military and police forces. IBM has partnered with the City of Johannesburg for early efforts at “smart” policing, while Africa and the Middle East are targets of the US drone empire. Activists advocating democracy and equality inside Africa and the Middle East are staunchly opposed to these developments.

The bipartisan effort to police Trump-designated “shithole” countries with advanced weaponry has Big Tech on its side. Google’s involvement with Project Maven constitutes active collaboration in this endeavor.

An activist campaign against Silicon Valley’s collaboration with the US military may be starting to unfold. However, it will take grassroots pressure across the world to make technology work for humanity.

Michael Kwet is a Visiting Fellow of the Information Society Project at Yale Law School.


Blocked By Facebook and the Vulnerability of New Media

By Craig Murray | April 26, 2018

This site’s visitor numbers are currently around one third of normal levels, stuck at around 20,000 unique visitors per day. The cause is not hard to find. Normally over half of our visitors arrive via Facebook. These last few days, virtually nothing has come from Facebook.

What is especially pernicious is that Facebook deliberately imposes this censorship in a secretive way. The primary mechanism when a block is imposed by Facebook is that my posts to Facebook are simply not sent into the timelines of the large majority of people who are friends or who follow. I am left to believe the post has been shared with them, but in fact it has only been shown to a tiny number. Then, if you are one of the few recipients and do see the post and share it, it will show to you on your timeline as shared, but in fact the vast majority of your own friends will also not receive it. Facebook is not doing what it is telling you it is doing – it shows you it is shared – and Facebook is deliberately concealing that fact from you.

Twitter have a similar system known as “shadow banning”. Again it is secretive and the victim is not informed. I do not appear to be shadow banned at the moment, but there has been an extremely sharp drop – by a factor of ten – in the impressions my tweets are generating.

I am among those who argue that the strength of the state and corporate media is being increasingly and happily undermined by our ability to communicate via social media. But social media has developed in such a way that the channels of communication are dominated by corporations – Facebook, Twitter and Google – which can in effect turn off the traffic to a citizen journalism site in a second. The site is not taken down, and the determined person can still navigate directly to it, but the vast bulk of the traffic is cut off. What is more, this is done secretly, without your being informed, and in a manner deliberately hard to detect. The ability to simply block the avenues by which people get to see dissenting opinions is terrifying.

Furthermore, neither Facebook nor Twitter contact you when they block traffic to your site to tell you this is happening, let alone tell you why, let alone give you a chance to counter whatever argument they make. I do not know if I am blocked by Facebook as an alleged Russian bot, or for any other reason. I do know that it appears to have happened shortly after I published the transcript of the Israeli general discussing the procedures for shooting children.


Pentagon Capitalism and Silicon Valley: Google’s Drone War Project Shows Big Data’s Military Roots

By Elliott GABRIEL | Mint Press News | April 6, 2018

Google — the advertising and search engine monolith that once touted its official commitment, “Don’t be evil” — has thrown its full weight behind the U.S. military-industrial complex’s fast-advancing unmanned drone program – and more than three thousand of its employees will have none of it.

In a letter to Google CEO Sundar Pichai, over 3,100 employees invoked the now-discarded slogan in an appeal demanding that the company not allow its artificial intelligence technology to be used to improve the targeting capabilities of the United States’ deadly drone fleet. Google’s Project Maven is an AI surveillance engine that uses footage captured by the U.S. Armed Forces’ unmanned aerial vehicles to detect and track objects such as vehicles, while combing through, organizing, and feeding the processed data to the Pentagon.

Watch | Project Maven: The Pentagon’s New Artificial Intelligence

The letter, which is fast making the rounds on Google campuses and internal communication servers, demands the cancellation of the project and the public adoption of a policy pledging that neither Google nor its contractors produce technology for warfare. The letter states:

“This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.”

However, journalist and author Yasha Levine has explained in his new book Surveillance Valley: The Secret Military History of the Internet that Google has long been a leading Silicon Valley vendor to the U.S. repressive state:

“Over the years, [Google] supplied mapping technology used by the U.S. Army in Iraq, hosted data for the Central Intelligence Agency, indexed the National Security Agency’s vast intelligence databases, built military robots, co-launched a spy satellite with the Pentagon, and leased its cloud computing platform to help police departments predict crime. And Google is not alone. From Amazon to eBay to Facebook … Some parts of these companies are so thoroughly intertwined with America’s security services that it is hard to tell where they end and the U.S. government begins.”

The grim calculus of remote-controlled warfare

An Air Force RPA reconnaissance drone is retrofitted for use in an attack squadron. (Photo: U.S. Air Force)

Thousands of alleged “combatants” have been killed in U.S. drone strikes since the start of the post-9/11 “War on Terror.” Former President Barack Obama ramped up the targeted killing program using drones in 2009, pledging that the use of unmanned aerial platforms was part of a “just war—a war waged proportionally, in last resort, and in self-defense.”

Reports have shown that the use of drones in such locales as Afghanistan, Iraq, Libya, Pakistan, Syria, Yemen and other theaters of operations has claimed a vast number of civilian lives — over 15,000 in 2017, according to surveys. According to New York Times Magazine, which surveyed 150 Coalition drone strikes carried out in Iraq over an 18-month period, one out of every five strikes kills civilians.

Watch | Suffering in Silence: a documentary about the war on terror in Pakistan

The American Civil Liberties Union has denounced the use of such tactics as contrary to international law:

“A program of targeted killing far from any battlefield, without charge or trial … violates international law, under which lethal force may be used outside armed conflict zones only as a last resort to prevent imminent threats, when non-lethal means are not available.

There is very little information available to the public about the U.S. targeting of people far from any battlefield, so we don’t know when, where and against whom targeted killing can be authorized … The secrecy and lack of standards for sentencing people to death, resulting in a startling lack of oversight and safeguards, is one of our prime concerns with this program.”

In 2007, at the height of George W. Bush’s “troop surge” in Iraq, Google enlisted in the “Global War on Terror” through a discreet partnership with Lockheed Martin. While enhanced versions of Google Earth were already at the disposal of government agencies, the tech firm helped to design a Google Earth product for the Pentagon’s National Geospatial-Intelligence Agency that displayed a visual representation of U.S. and Iraqi military bases in Iraq as well as the locations of Sunni and Shiite neighborhoods in Baghdad, allowing occupation forces to oversee and manage the bloody fratricidal warfare between the groups as well as to monitor the possible locations of insurgent organizations within their ranks.

It was during this same year that the development and use of Air Force drones in the Iraqi quagmire dramatically increased, nearly doubling between January and October of 2007.

Google defends its public image (and low-profile military ties)

Claiming that it values the input of its employees as an “important part” of its company culture, Google has promised to address the AI-drone issue without making specific comments on the employees’ demands. In a statement Tuesday, the company said:

“Any military use of machine learning naturally raises valid concerns. We’re actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies.”

Google has also defended its participation in the Pentagon program, claiming that its technology is used specifically for non-offensive purposes, relies on open-source object-recognition software based on non-classified data that’s freely available to any user of the Google Cloud, and could in fact save lives while saving labor through the use of AI.

Google’s own top corporate chiefs are quite close to the Pentagon: Google vice president Milo Medin serves on the Department of Defense’s tech advisory body, the Defense Innovation Board, which is also chaired by former Google executive chairman Eric Schmidt, who remains an executive board member at Google’s parent company, Alphabet Inc.

Google itself has long been interested in developing its own line of drone products, including private delivery drones.

The company is also a leader in AI and machine-learning technology. Its subsidiary DeepMind Technologies has recently developed a program based on Google Street View that allows AI-based platforms to perform long-range navigation and traverse complicated urban environments. The navigator AI system is capable of steering everything from self-driving cars to robotic vacuums and even unmanned drones.

Watch | Eric Schmidt at the Artificial Intelligence and Global Security Summit

Last November in a keynote address on artificial intelligence in warfare before Washington-based think-tank the Center for a New American Security, Schmidt pinned anxiety about his company’s acquisition of DeepMind on “a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly, if you will.”

“It comes from a, it’s essentially related to the history of the Vietnam War and the founding of the tech industry,” Schmidt added.

Indeed, Levine argues:

“In the 1960s, America was a global power overseeing an increasingly volatile world: conflicts and regional insurgencies against U.S.-allied governments from South America to Southeast Asia and the Middle East. These were not traditional wars that involved big armies but guerrilla campaigns and local rebellions, frequently fought in regions where Americans had little previous experience. Who were these people? Why were they rebelling? What could be done to stop them? In military circles, it was believed that these questions were of vital importance to America’s pacification efforts, and some argued that the only effective way to answer them was to develop and leverage computer-aided information technology.”

Surveillance capitalism’s military-industrial roots

In a 2014 essay for Monthly Review magazine titled “Surveillance Capitalism: Monopoly-Finance Capital, the Military-Industrial Complex, and the Digital Age,” authors John Bellamy Foster and Robert W. McChesney coined the phrase surveillance capitalism, tracing its origins to the inception of a post-World War II “new Pentagon capitalism” that came to be known as the military-industrial complex.

Under the model – championed by then-Army Chief of Staff and later President Dwight D. Eisenhower – U.S. technological, scientific-research and industrial capacities were to become “organic parts of our military structure” in conditions of national emergency, effectively giving the civilian economy a dual-use purpose. In a 1946 memorandum, Eisenhower noted:

“The future security of the nation… demands that all those civilian resources which by conversion or redirection constitute our main support in time of emergency be associated closely with the activities of the Army in time of peace.”

The model became a permanent feature of the U.S. economy, giving birth to a sprawling military-civilian economic base Eisenhower famously criticized in his 1961 farewell address to the nation. Civilian industry, science, and academia were used alongside an exorbitant and perpetually-expanding war budget to underwrite the Defense Department’s never-ending state of conflict with Cold War enemies and unruly populations, making the world safe for the unchallenged reign of U.S. monopoly capitalism (imperialism) and “pump-priming” the economy whenever an additional surge of “military Keynesian” government spending was required.

Watch | Richard Wolff On Trump’s Defence Spending

According to the popular history of the internet’s origin, it was conceived in 1969 by scientists at the Defense Department’s Advanced Research Projects Agency (ARPA), who sought to create a means of “internetworking” Pentagon-sponsored computer mainframes belonging to government agencies, universities and defense contractors across the United States and the NATO bloc. Known as the Arpanet, the decentralized system allowed military nodes – down to the battlefield level – to network and share data quickly and wirelessly. In the event of a nuclear strike or major war in which swathes of the network were destroyed, the Arpanet would remain operational.

Levine posits that a primary purpose of conceiving the “information superhighway” was the need for a computerized counterinsurgency tool that could predict and check the “perceived global spread of communism” and provide real-time surveillance of potential threat groups:

“The Internet came out of this effort: an attempt to build computer systems that could collect and share intelligence, watch the world in real time, and study and analyze people and political movements with the ultimate goal of predicting and preventing social upheaval. Some even dreamed of creating a sort of early warning radar for human societies: a networked computer system that watched for social and political threats and intercepted them in much the same way that traditional radar did for hostile aircraft. In other words, the Internet was hardwired to be a surveillance tool from the start. No matter what we use the network for today — dating, directions, encrypted chat, email, or just reading the news — it always had a dual-use nature rooted in intelligence gathering and war.”

The Arpanet formed the backbone of U.S. military computer networking from 1969 until it was decommissioned in 1990, around the time the World Wide Web was being developed for civilian users. The unveiling of the World Wide Web opened the floodgates to the use of the internet across the globe, as well as to its subsequent commercialization and the resulting dot-com boom of the mid-to-late 1990s, when Google was founded.

Watch | Yasha Levine on why we lost our fear of computers as tools of social control

The rapid advance of digital technology has ensured that the U.S. economy — and social life itself — is now dominated by big-data giants like Google, Apple, Facebook and Amazon (GAFA), as well as Microsoft, Intel, Cisco Systems, IBM and Hewlett-Packard. This new technology-dominated market environment, where private user info is parsed, monetized, packaged and sold, has been encapsulated by the well-worn cliché: “data is the new gold.”

Vast strides in biometrics, analytics research, AI, and deep-learning technology – perfected not only by Google but by academic researchers and technology firms across the globe – have greatly boosted the state’s ability to surveil and control populations and police dissent. Many of the technologies developed by global arms, surveillance, and data-analysis firms are supplied to countries requiring tailor-made solutions to unrest, such as India, China, Myanmar, Saudi Arabia, Egypt, Bahrain, and Azerbaijan.

The extent of Silicon Valley’s integration with the U.S. government was laid bare to the public in 2013, when Edward Snowden provided evidence that the U.S. National Security Agency (NSA) and Federal Bureau of Investigation (FBI) had direct access to the internal servers of nine major tech firms – AOL, Apple, Facebook, Google, Microsoft, PalTalk, Skype, YouTube, and Yahoo – which, along with major internet service providers, fed data to the NSA through secret programs like Prism and Boundless Informant.

Foster and McChesney explained:

“These monopolistic corporate entities readily cooperate with the repressive arm of the state in the form of its military, intelligence, and police functions. The result is to enhance enormously the secret national security state, relative to the government as a whole.

Edward Snowden’s revelations of the NSA’s Prism program, together with other leaks, have shown a pattern of a tight interweaving of the military with giant computer-Internet corporations, creating what has been called a ‘military-digital complex.’ Indeed, Beatrice Edwards, the executive director of the Government Accountability Project, argues that what has emerged is a ‘government-corporate surveillance complex.’”

Information superiority and the modern battlefield

At present, the digitalization of the military-industrial complex gives the United States a commanding edge in terms of military technology and high-tech warfare, which is augmented by optical spy satellites capable of capturing remarkably detailed ground-level imagery, successive generations of wireless networking technologies, pattern-recognition and machine-learning systems, and unmanned warfighting platforms.

As a matter of survival, however, all modern militaries – both regular and irregular, large and small – are being forced to adapt to the digitization of warfare. China, Russia, and even non-state actors like Lebanese resistance group Hezbollah are fast making technological advances to keep pace in the informationized battlefield.

The weaponized nature of digital technology is a Pandora’s box that may prove impossible to close. Be that as it may, Google’s employees are livid about the company’s participation in these developments:

“We cannot outsource the moral responsibility of our technologies to third parties … Building this technology to assist the U.S. Government in military surveillance – and potentially lethal outcomes – is not acceptable.”

While the workers’ attempt to decouple Google from the Pentagon may be in vain, one can only applaud their efforts to protest the tech conglomerate’s complicity in the bloodshed wrought by U.S. imperialism through its array of increasingly high-tech implements of death.


Google Should Not Help the U.S. Military Build Unaccountable AI Systems

By Peter Eckersley and Cindy Cohn | EFF | April 5, 2018

Thousands of Google staff have been speaking out against the company’s work for “Project Maven,” according to a New York Times report this week. The program is a U.S. Department of Defense (DoD) initiative to deploy machine learning for military purposes. There was a small amount of public reporting last month that Google had become a contractor for that project, but those stories had not captured how extensive Google’s involvement was, nor how controversial it has become within the company.

Outcry from Google’s own staff is reportedly ongoing, and the letter signed by employees asks Google to commit publicly to not assisting with warfare technology. We are sure this is a difficult decision for Google’s leadership; we hope they weigh it carefully.

This post outlines some of the questions that people inside and outside of the company should be mulling over when deciding whether it’s a good idea for companies with deep machine learning expertise to be assisting with military deployments of artificial intelligence (AI).

What we don’t know about Google’s work on Project Maven

According to Google’s statement last month, the company provided “open source TensorFlow APIs” to the DoD. But it appears that this controversy was not just about the company giving the DoD a regular Google cloud account on which to train TensorFlow models. A letter signed by Google employees implies that the company also provided access to its state-of-the-art machine learning expertise, as well as engineering staff to assist or work directly on the DoD’s efforts. The company has said that it is doing object recognition “for non-offensive uses only,” though reading some of the published documents and discussions about the project suggests that the situation is murkier. The New York Times says that “the Pentagon’s video analysis is routinely used in counterinsurgency and counterterrorism operations, and Defense Department publications make clear that the project supports those operations.”

If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects. Those are hefty ethical stakes, even with humans in the loop further along the “kill chain”.

We’re glad that Google is now debating the project internally. While there aren’t enough published details for us to comment definitively, we share many of the concerns we’ve heard from colleagues within Google, and we have a few suggestions for any AI company that’s considering becoming a defense contractor.

What should AI companies ask themselves before accepting military contracts?

We’ll start with the obvious: it’s incredibly risky to be using AI systems in military situations where even seemingly small problems can result in fatalities, in the escalation of conflicts, or in wider instability. AI systems can often be difficult to control and may fail in surprising ways. In military situations, failure of AI could be grave, subtle, and hard to address. The boundaries of what is and isn’t dangerous can be difficult to see. More importantly, society has not yet agreed upon necessary rules and standards for transparency, risk, and accountability for non-military uses of AI, much less for military uses.

Companies, and the individuals who work inside them, should be extremely cautious about working with any military agency where the application involves potential harm to humans or could contribute to arms races or geopolitical instability. Those risks are substantial and difficult to predict, let alone mitigate.

If a company nevertheless is determined to use its AI expertise to aid some nation’s military, it must start by recognizing that there are no settled public standards for safety and ethics in this sector yet. It cannot just assume that the contracting military agency has fully assessed the risks or that it doesn’t have a responsibility to do so independently.

At a minimum, any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:

  1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
  2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway and after deployment? Could it incorporate the sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies’ and military agencies’ decisions?
  3. Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
  4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet’s AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given this letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. While it remains difficult to ensure that that is the case, without such independent oversight, a project runs real risk of harm.

These are just starting points. Other specific questions will surely need answering, both for future proposals and even this one, since many details of the Project Maven collaboration are not public. Nevertheless, even with the limited information available, EFF is deeply worried that Google’s collaboration with the Department of Defense does not have these kinds of safeguards. It certainly does not have them in a public, transparent, or accountable way.

The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety. Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise—and from themselves.


Never mind Facebook, Google is the all-seeing ‘big brother’ you should know about

RT | March 30, 2018

The Cambridge Analytica scandal has put Facebook through the wringer in recent weeks, wiping some $100 billion off the company’s stock value and prompting a global debate on internet privacy.

The social media giant was forced to apologize and overhaul its privacy and data sharing practices, but it still remains in the media spotlight and in the crosshairs of the Federal Trade Commission, which says it may be liable for hundreds of millions of dollars worth of fines.

But amid all the furor, one monolithic entity has continued to harvest data from billions of people worldwide. The data gathered includes a precise log of your every move and every internet search you’ve ever made, every email you’ve ever sent, your workout routine, your favourite food, and every photo you’ve ever taken. And you have allowed it to happen to yourself, for the sake of better service and more relevant advertising.

Google is a ‘Big Brother’ with capabilities beyond George Orwell’s wildest nightmares. These capabilities are all the more chilling after Google’s parent company, Alphabet Inc., cut its famous “don’t be evil” line from its code of conduct in 2015.

Everything you’ve ever searched for on any of your devices is recorded and stored by Google. It’s done to better predict your future searches and speed up and streamline your browsing. You can clear your search history, but it only works for that particular device. Google still keeps a record of everything. Click here to see everything you’ve ever searched on a Google device.

The same goes for every app and extension you use. If it’s connected to Google, your data is stored. That means that your Facebook messages are not only farmed out to companies like Cambridge Analytica – Google also has them, via the Facebook app you use.

YouTube, which is a Google subsidiary, also stores a history of every video you watch. It will know if you’ve listened to Linkin Park’s ‘In the End’ 3,569 times, or watched hours of flat-earth conspiracy theory videos.

Likewise, any file you’ve ever stored on Google Drive, any Google Calendar event you’ve attended, any photo you’ve stored on Google Photos, and every email you’ve ever sent are all stored. You can access a copy of all of this data by requesting a link from Google here.

Perhaps what hits home the hardest, though, is that Google keeps track of where you are and how you got there, at all times. If you have a smartphone, there’s a good chance it runs the Android operating system, considering Android phones account for 82 percent of the global market share. That’s over 2 billion monthly active users.

And, unless you’ve disabled this feature, clicking here will show you a list of every journey you’ve ever made with your phone, including an estimate of how you traveled there. If you’re back and forth between work and home at the same time every day, Google knows this is your commute. That heavy traffic warning Google Maps gives you on your drive home? Google knows there’s a traffic jam because it knows that the Android phones in all those cars are moving slower than they usually do at that time of day.
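As a rough illustration of that logic – a toy sketch, not Google’s actual algorithm – congestion detection of this kind amounts to comparing the average speed currently reported by phones on a road segment against the speed normally seen there at that time of day:

```python
from statistics import mean

def is_congested(current_speeds_kmh, typical_speed_kmh, threshold=0.6):
    """Flag a road segment as congested when phones on it are moving well
    below the speed normally observed at this time of day."""
    if not current_speeds_kmh:
        return False  # no phones reporting, so no estimate
    return mean(current_speeds_kmh) < threshold * typical_speed_kmh

# Phones on the segment are crawling at ~15 km/h where 60 km/h is typical.
print(is_congested([14, 16, 15, 17], typical_speed_kmh=60))  # True
```

Aggregated across millions of handsets, even this crude comparison yields the live traffic picture described above – which is exactly why the location history behind it is so revealing.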

Google doesn’t do this behind your back. On a desktop, Google Chrome allows sites to access your computer’s camera and microphone by default. On a smartphone, agreeing to an app’s terms of service allows the app to do nearly anything, from accessing your phone’s camera and location to recording your calls and logging your messages. The Facebook app, for example, requires 44 such permissions.

It is possible to opt out of most of Google’s tracking – including search history, location timeline and targeted advertising – but it takes a bit of rooting around in settings menus, and you have to know about the option first. And of course, Google says it’s not associating the data with you, as a person – instead, it’s linked to your “advertising ID,” and never shared unless you want it to be. Or unless a government requests that Google hands it over – which US government agencies alone have done almost 17,000 times in just the first half of 2017, with over 80 percent of requests fulfilled, at least to some extent.


Tell Me More About How Google Isn’t Part Of The Government And Can Therefore Censor Whoever It Wants?

By Caitlin Johnstone | March 8, 2018

When you tell an establishment Democrat that Google’s hiding and removal of content is a dangerous form of censorship, they often magically transform into Ayn Rand right before your eyes.

“It’s a private company and they can do what they like with their property,” they will tell you. “It’s insane to say that a private company regulating its own affairs is the same as government censorship!”

This is absurd on its surface, because Google is not separate from the government in any meaningful way. It has been financially intertwined with US intelligence agencies since its very inception, when it received research grants from the CIA and NSA for mass surveillance; it pours massive amounts of money into federal lobbying and DC think tanks, has a cozy relationship with the NSA, and holds multiple defense contracts.

“Some of Google’s partnerships with the intelligence community are so close and cooperative, and have been going on for so long, that it’s not easy to discern where Google Inc ends and government spook operations begin,” wrote journalist Yasha Levine in a 2014 Pando Daily article titled “Oakland emails give another glimpse into the Google-Military-Surveillance Complex”.

“The purchase of Keyhole was a major milestone for Google, marking the moment the company stopped being a purely consumer-facing Internet company and began integrating with the US government,” Levine wrote in a recent blog post about his book Surveillance Valley. “While Google’s public relations team did its best to keep the company wrapped in a false aura of geeky altruism, company executives pursued an aggressive strategy to become the Lockheed Martin of the Internet Age.”

And now we learn from Gizmodo that Google has also been helping with AI for the Pentagon’s drone program.

A Google spokesperson reportedly told Gizmodo that the innovations it is bringing to the Defense Department’s Project Maven are “for non-offensive uses only,” which is kind of like saying the beer kegs you delivered to the frat house are for “non-intoxicating use only.” The DoD and its drone program exist to find and kill enemies of the US empire, and Google will be helping them do it.

“The department announced last year that the AI initiative, just over six months after being announced, was used by intelligence analysts for drone strikes against ISIS in an undisclosed location in the Middle East,” reports The Intercept on this story.

Google is not any more separable from the US government than Lockheed Martin or Raytheon are, yet it has been given an unprecedented degree of authority over human speech and the way people communicate and share information. Would you feel comfortable allowing Northrop Grumman or Boeing to determine what political speech is permissible and giving them the authority to remove political YouTube content and hide leftist and anti-establishment outlets from visibility like Google does?

How is this a thing? How is it considered acceptable for a force which has intimately interwoven itself with government power to censor and manipulate political speech in ways the official government would never be allowed to?

The notion that Google is a private company, separate from the government and thus unburdened by obligations of free speech, is not a legitimate one. You don’t get to create a power system where money translates directly into political influence and privatization creates symbiotic relationships between corporations and government agencies, create a beefed up Silicon Valley giant with research grants and contracts to prevent any competition from ever having a chance against it, involve that Silicon Valley giant in the agendas of the US war machine after you’ve helped it dominate the globe, and then legitimately claim it’s just a poor widdle private business that shouldn’t be subject to the legal limitations placed on the US government.

If you believe the government shouldn’t be able to regulate speech, then there’s no legitimate reason to believe that Google should be, because Google is part of the government. You shouldn’t want there to be a loophole where government power can get around constitutional restrictions on its ability to silence dissent by funneling all speech into institutions it created and collaborates with and then quash anti-establishment voices under the pretense of protecting the public from “fake news” and “Russian propaganda”.

There needs to be some sort of measure in place which protects the public from such manipulations. Either remove corporate power from government power or acknowledge that they are fully meshed and expand constitutional protections to the users of any media giant which has enmeshed itself in government power. Pretending corporate power and government power are separate when they are not while exploiting that inseparable symbiosis to silence political dissent is not acceptable.

Government should be a tool of the people to help the people, not a tool of the powerful to oppress and exploit the people. Something’s going to have to change, and we’re going to have to stop asking nicely.


All-Seeing Eye: Google working with Pentagon on using AI for drone improvement

MQ-9 Reaper Drone. FILE PHOTO: © Gene Blevins / Global Look Press
RT | March 7, 2018

Ubiquitous IT giant Google has quietly inked a partnership with the Department of Defense to militarize artificial intelligence and machine-learning technologies, reinvigorating fears of a Terminator-style apocalyptic scenario.

Google has been secretly working with the Pentagon to help its 1,100-strong fleet of drones detect images, faces, and behavioral patterns, and plans to scour massive amounts of video footage in order to improve bombing accuracy for autonomous drones. The endgame is to improve combat performance by automating the decision-making process of locating and targeting combatants, The Intercept reported on Tuesday.

Project Maven was launched in April 2017 to establish an “Algorithmic Warfare Cross-Functional Team,” which advocates using sophisticated algorithm-based technologies to combat rising “competitors and adversaries”.

According to a Pentagon memo dated April 26, 2017, its objective is to accelerate the combined use of big data and machine learning during combat situations and to speed up the analysis of collected data. Then-Deputy Secretary of Defense Robert Work signed off on the initiative.

Project Maven also aims to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles (UAVs)” in order to “reduce the human factors burden of [full motion video] analysis, increase actionable intelligence, and enhance military decision-making,” he wrote.

The Pentagon has become increasingly worried that the US will be displaced as the world’s top AI developer. At a February 13 hearing, Senators Jack Reed (D-Rhode Island), Mark Warner (D-Virginia) and others lamented that Chinese efforts to develop artificial intelligence (AI) and quantum computing are leaving the US behind.

Another DOD report, “Unmanned Systems Integrated Roadmap”, notes that three primary impetuses are driving the push towards AI: “department budgetary challenges, evolving security requirements, and a changing military environment.” The report echoes a US Government Accountability Office (GAO) report which addressed problems with human-piloted drones, including fatigue, human error, and demoralization.

“Downward economic forces will continue to constrain Military Department budgets for the foreseeable future. Achieving affordable and cost-effective technical solutions is imperative in this fiscally constrained environment,” it also pointed out.

“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Marine Corps Colonel Drew Cukor said during a 2017 Defense One Tech summit. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”

Cukor also mentioned that the program would help to identify 38 classes of objects essential to detect in warfare, especially when “fighting” Islamic State militants. He also outlined plans to deploy Project Maven by the end of last year.

“We are in an AI arms race […] It’s happening in industry [and] the big five Internet companies are pursuing this heavily. Many of you will have noted that Eric Schmidt […] is calling Google an AI company now, not a data company,” he said.

Google is no stranger to the Department of Defense. Eric Schmidt, former executive chairman of its parent company Alphabet, chaired the DOD’s Defense Innovation Board under the Obama administration.

Some Google employees were outraged that the company would share its technology with the military, according to Gizmodo, while others said the project raised ethical questions about machine learning.

A company spokeswoman told Bloomberg that Google was sharing its TensorFlow APIs with the military for “non-offensive uses only.”

“Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others,” the unnamed spokeswoman said.

Read more:

US drone pilots are ‘stressed’ and ‘demoralized’ – official report


Social media bow to pressure and censor dissident voices

By Nebojsa Malic | RT | February 27, 2018

Twitter, YouTube and Facebook, accused of enabling US President Donald Trump’s rise to power through “Russian meddling,” are facing pressure to de-platform heretics. This has raised fears for the safety of free speech in the US.

At the Conservative Political Action Conference (CPAC) this past weekend, media crusader James O’Keefe headlined an hour-long panel on social media censorship, arguing that it targeted mostly conservatives.

“They really make sure you don’t see any differing views,” O’Keefe said at the panel.

Last week, the blogging platform Medium deleted a number of accounts, including those of Mike Cernovich, Jack Posobiec and Laura Loomer, described by The Hill as “prominent far-right figures.” The purge took place after Medium replaced a commitment to free speech in its terms of service in favor of fighting “online hate, abuse, harassment, and disinformation.”

Though Medium would not comment on individual account bans, it is notable that Cernovich’s account was deleted after he was named in a Newsweek article that blamed the “alt-right,” overseas social media bots and “Russians” for the ouster of Senator Al Franken (D-Minnesota) over sexual misconduct. Newsweek retracted the story after criticism that it could not be substantiated.

A number of YouTube creators have complained that the video platform has demonetized basically anything that isn’t deemed “family friendly,” including political dissent. Another crackdown followed the school shooting in Parkland, Florida, after the top-ranking video on the site featured accusations that some of the students were “crisis actors.”

Yet if YouTube simply censored any videos even referring to conspiracy theories, that would surely present a new problem.  After all, wouldn’t it also undermine efforts to debunk them?

Conservative critics accuse the social media giants of being run by Democrats. There is certainly evidence pointing in that direction, from the involvement of Alphabet (Google’s parent company) executive chairman Eric Schmidt with Hillary Clinton’s 2016 campaign and the Obama presidency, to Twitter’s admission that it censored hashtags about WikiLeaks’ publication of revealing emails from Clinton’s campaign chief John Podesta in the run-up to the November 2016 vote. Those emails also revealed the commitment of several Facebook executives to getting Clinton elected.

After Clinton lost to Trump, however, the three social media giants found themselves in the crosshairs of Congress. Many Republicans joined the chorus of Democrats accusing the social networks of enabling alleged “Russian” activity.

“You created these platforms… and now they’re being misused,” Senator Dianne Feinstein (D-California) told the executives of Facebook, Google, and Twitter during a hearing in October 2017. “And you have to be the ones who do something about it — or we will.”

So far, “doing something” seems to consist mostly of purging “Russian bots,” as identified either by the social media companies themselves or by an alliance of Democrats and neo-conservatives ousted from power by Trump and now seeing Russians behind every hashtag.

Censorious actions also include what activists call the “de-platforming” of people singled out for unacceptable or offensive opinions by ad-hoc online mobs. For example, after the Florida school shooting, angry Twitterati successfully badgered a number of businesses into canceling discounts they previously offered to members of the National Rifle Association (NRA). Amazon also found itself under pressure to drop the “NRA TV” channel from its platform.

In a recent interview, former Google engineer James Damore speculated that the atmosphere at social media companies resembles that of college campuses – places which have also seen crackdowns on freedom of expression in recent times.

“It was very much like a college campus,” Damore told the Washington Examiner. “And they tried to make it like a college campus where you would live at Google essentially, where they have all your food and all the amenities, and once you start living there you aren’t able to disconnect, and so you feel like my words were a threat against your family. That was part of the fervor, I think.”

Damore was purged from Mountain View over a memo in which he questioned the company’s practices when it came to diversity.

While the social media companies may hope that lawmakers will be appeased by an occasional purge of unpopular voices, another danger is headed their way: the legacy media is aiming to recapture its hold on audiences.

On Monday, CNN president Jeff Zucker addressed the Mobile World Congress in Barcelona. His thrust was that government should look into Google and Facebook “monopolies” if journalism is to survive.

“In a Google and Facebook world, monetization of digital and mobile continues to be more difficult than we would have expected or liked,” Zucker said, according to Variety. “I think we need help from the advertising world and from the technology world to find new ways to monetize digital content, otherwise good journalism will go away.”

Tempting as it would be to quip about CNN’s tenuous relationship with “good journalism,” doing so would be self-defeating, as the chances are it would be a short-cut to getting purged from Google, Twitter or Facebook.


YouTube Is Using Artificial Intelligence To Delete Channels & To Handle Subsequent Appeals

By Richie Allen | February 25, 2018

Hello,

Thank you for your account suspension appeal. We have decided to keep your account suspended based on our Community Guidelines and Terms of Service. Please visit http://www.youtube.com/t/community_guidelines for more information.

Sincerely,
The YouTube Team

Short and sweet from Google. I wrote to them (using their appeal form) last Thursday evening, asking for an explanation for the deletion of my channel. I was polite but firm and asked for a contact, a name, someone who I could speak with, just for the record mind as I know their subscriber interaction is run by AI now. Stop and think about that for a minute. A machine decided to delete the channel. I am then reduced to appealing to the same machine to have my intellectual property restored to me. We’re now living Blade Runner, Judge Dredd, Demolition Man and any other sci-fi flick about a dystopian future. Google denies this of course. The corporation admits using AI to scour videos for harmful content, but claims that decisions on banning channels are made by a person. I don’t believe them. My second strike was issued for an interview I did with Michael Rivero back in August 2015. Michael was telling me why he DID NOT believe that the shooting of two journalists in Virginia was a false flag attack. The interview was harmless. I immediately appealed (you can appeal community strikes). I pressed SUBMIT to send the appeal and was promptly emailed by Google to say that the appeal was rejected! That took seconds, it was like the email came back from them at the very second I submitted the appeal.

There could not have been any human involvement, it was so instantaneous. I am certain that nobody reviewed my appeal. It was undoubtedly a program. Just before writing this, I wrote to Google again, to challenge the above response. This time I was a little less cordial and reminded them/it, whatever the fuck it is, that I have legal remedies at my disposal. I insisted that the channel be restored and asked for the name and department of the person who a) took the decision to delete the channel and b) the name of the person who handled the appeal. They will not be able to provide me with any name of course. Maybe it’s HAL or Ed-209 or T-1000…….

I’m not going to flog a dead horse in terms of banging on and on about this. I won’t be boring the shite out of you constantly about Google, I promise. I just wanted to let you know that I had received a response of sorts from them. Anyway, enjoy the rest of your Sunday. Speak tomorrow. Sunday View can be heard on the homepage. It wasn’t a bad show today, there are some interesting stories in there.

Richie is the host of The Richie Allen Show and has enjoyed a long, and varied, broadcasting career.


Social media giants crack down on RT under Senate pressure

RT | January 26, 2018

Facebook, Google and Twitter are taking action against RT in response to pressure from the Senate Intelligence Committee, but have found very little to indicate ‘Russian meddling’ in the 2016 elections, new documents show.

Google Search, for example, has labels “describing RT’s relationship with the Russian Government” and the company is “working on disclosures to provide similar transparency on YouTube,” according to a letter sent to the committee by Google’s VP and general counsel Kent Walker.

Twitter has “off-boarded” RT and Sputnik “and will no longer allow those companies to purchase ad campaigns and promote Tweets on our platform,” said the letter from the company’s acting general counsel Sean Edgett.

The letters were provided following the October 31, 2017 hearing at which the senators grilled social media executives on alleged Russian meddling in the 2016 presidential election via their products and services.

Senator Joe Manchin (D-West Virginia) was interested to know whether any of the companies accepted advertising from RT or Sputnik. Unlike Twitter, Facebook and Google continue to carry ads from both outlets. Google’s Walker wrote that such ads remain subject to “strict ads policies and community guidelines,” and that “to date, we’ve seen no evidence that they are violating these policies.”

Walker added that Google took RT out of its Preferred Lineup on YouTube. In November, Eric Schmidt, chairman of Google’s parent company Alphabet, told an international forum that he planned to “de-rank” RT and Sputnik in displayed search results.

Facebook’s general counsel Colin Stretch wrote that RT and Sputnik can “use our advertising tools as long as they comply with Facebook’s policies, including complying with applicable law.”

Committee chairman Richard Burr (R-North Carolina) asked whether any of the companies had provided any data to the Russian government. Twitter said it had received requests for data, but did not comply with any of them. Facebook said it had received 28 requests for data between 2013 and 2017, but that it “did not provide any data in response.”

Google said it had “not complied with every request” but declined to provide any specifics, referring the senators to its Transparency Report. RT’s analysis of that data shows that Google received 237 requests in the first half of 2016 and provided responses in 7 percent of cases. Another 234 requests came in the second half of the year, with a 15 percent response rate. There were 318 requests in 2017 with a 10 percent response rate.
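
For a sense of scale, those response rates translate into only a few dozen fulfilled requests per period. Here is a minimal back-of-the-envelope calculation, using only the request counts and percentages cited above from the Transparency Report:

```python
# Rough arithmetic only, based on the Transparency Report figures quoted above.
periods = {
    "H1 2016": (237, 0.07),
    "H2 2016": (234, 0.15),
    "2017":    (318, 0.10),
}
for period, (requests, rate) in periods.items():
    print(f"{period}: roughly {round(requests * rate)} of {requests} requests answered")
```

In other words, roughly 17, 35 and 32 requests respectively were answered – small numbers, but not zero.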

Senator Kamala Harris (D-California) was very interested to hear what the social media companies are doing with the revenue supposedly earned from “Russian” advertising. Edgett’s letter confirmed Twitter’s commitment to donate the $1.9 million that RT had spent globally on ads to “academic research into elections and civic engagement.” He did not specify the organizations that would benefit from this funding.

Although Stretch said that revenue from ads running on pages managed by the Internet Research Agency (IRA, usually described in the Western press as the “St. Petersburg troll farm”) was “immaterial,” he revealed that Facebook has contributed “hundreds of thousands of dollars” to the Defending Digital Democracy Project, an outfit based at the Harvard Kennedy School of Government “that works to secure democracies around the world from external influence.”

Furthermore, the investments Facebook has made to “address election integrity and other security issues” have been so significant that “we have informed investors that we expect that the amount that we will spend will impact our profitability,” Stretch added.

Google said total revenue from “Russian” ads amounted to $4,700, while the company has contributed $750,000 to the Defending Digital Democracy Project.

The outfit is run by Eric Rosenbach, former assistant secretary of defense in the Obama administration. According to the Belfer Center at Harvard University, Rosenbach recruited Hillary Clinton’s former campaign manager Robby Mook and Mitt Romney’s 2012 campaign manager Matt Rhoades to co-chair the project.

Among the project’s advisers is Marc Elias of Perkins Coie, the law firm that has represented Clinton and the DNC, and was revealed to have paid for the notorious “Steele Dossier.” Another member of the project’s senior advisory group is Dmitri Alperovitch, CEO of Crowdstrike, the private company hired by the DNC which originated the accusation that Russia hacked into the party’s emails. Alperovitch is also a senior fellow at the Atlantic Council, a think tank associated with anti-Russian reports and partially funded by the US military, NATO, and defense contractors like Lockheed Martin and Boeing.

Read more:

Twitter, Google & Facebook grilled by Senate, try hard to find ‘Russian influence’

Censoring #PodestaEmails, defining Russians, DNC advisers: Twitter & Google’s 2016 election tricks

January 27, 2018 | Full Spectrum Dominance, Russophobia

Facebook pretending to care about democracy now is the height of hypocrisy

By Danielle Ryan | RT | January 24, 2018

Facebook has admitted that sometimes, it might actually be bad for democracy. Facebook is right about that. However, I’m not sure that the social media platform really understands why this is the case.

The admission comes in a series of official blog posts by Facebook insiders about what effect social media can have on democracy. “I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t,” wrote Samidh Chakrabarti, a Facebook product manager. He continued: “… we have a moral duty to understand how these technologies are being used and what can be done to make communities like Facebook as representative, civil and trustworthy as possible.”

First off, it’s important to understand the political and media context in which Facebook has felt forced to make these comments. That context is alleged ‘Russian interference’ in the 2016 election through the promotion of political ads designed to take advantage of social division. Facebook is responding to a sizeable cohort of Americans who genuinely believe that Russian Facebook ads are destroying democracy. The second thing to understand is that while Facebook’s admission may sound like noble self-reflection, the truth is that what Facebook says and what it means are two very different things.

There is a temptation among some to believe that the social media giant is a neutral actor that cares about fairness and democracy, and that it is doing its very best to ensure it has a positive effect on both. This could not be further from the truth.

If Facebook’s recent history is anything to go by, the California-based company is not actually a big fan of democracy at all. Even before Facebook decided to become selectively outraged about the ubiquity of propaganda and ‘fake news’ on its platform, it was already engaging in political censorship. Take, for example, this 2016 story in which Facebook employees admitted to suppressing conservative news on the platform. Not only that, but employees were told to artificially “inject” Facebook-approved stories into the trending news module when they weren’t popular enough to get there organically. The employees were also told not to include news about Facebook itself in the trending category.

“Facebook’s news section operates like a traditional newsroom, reflecting the biases of its workers and the institutional imperatives of the corporation,” Michael Nunez wrote for Gizmodo. With that kind of ability and willingness to manipulate, Facebook itself possesses huge potential to affect political outcomes, far more than some Russian ads.

Facebook has said it believes that adding the ability to click an “I voted” sticker can significantly increase actual voter turnout, through a combination of users simply seeing the sticker and feeling peer pressure to vote once their friends have done so. This is supposed to be one of the good things Facebook has done for democracy, but there are many ways Facebook could use this kind of feature to surreptitiously promote its own political agenda.

What if Facebook were to artificially push certain news stories in specific locations – say, where an election was taking place – and then add the “I voted” button for users in that area? Or, alternatively, chose not to add that button in races where a lower turnout might be deemed a good thing?

What Facebook means when it says it is worried about how its platform is being used is that it’s not entirely comfortable with the fact that it can’t fully control the political narrative. Even Facebook believes it has created a monster. It would like to control what our impressionable minds might see and read – lest we fall victim to unapproved opinions or ideologies. But Facebook also knows that such control is not entirely possible – and therein lies its true crisis.

Even the steps Facebook has taken to address alleged Russian interference in the 2016 election are questionable. In his blog post, Chakrabarti writes that the platform has “made it easier to report false news” and has “taken steps in partnership with third-party fact checkers to rank these stories lower” in the news feed. Once the fact-checkers identify a story as fake, Facebook can reduce impressions of that story by 80%, he says. But who are these third-party fact checkers? Facebook doesn’t tell us.
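
To make that 80% figure concrete, here is a minimal, purely illustrative sketch of how a down-ranking penalty of that size might be applied inside a feed ranker. The field names, the scoring model and the penalty mechanism are assumptions for illustration; Facebook has not published how the reduction is actually implemented.

```python
# Illustrative sketch only – not Facebook's code or API.
FLAGGED_PENALTY = 0.2  # keep roughly 20% of a story's normal reach (an ~80% impression cut)

def rank_score(story: dict) -> float:
    score = story["engagement_score"]            # hypothetical relevance/engagement score
    if story.get("flagged_by_fact_checkers"):    # hypothetical flag set by third-party reviewers
        score *= FLAGGED_PENALTY                 # demote the story instead of removing it
    return score

stories = [
    {"id": "flagged_story", "engagement_score": 0.9, "flagged_by_fact_checkers": True},
    {"id": "ordinary_story", "engagement_score": 0.5, "flagged_by_fact_checkers": False},
]
feed = sorted(stories, key=rank_score, reverse=True)
print([s["id"] for s in feed])  # the flagged story drops below the ordinary one
```

The point of the sketch is simply that such a penalty is invisible to the reader: the story is not removed, it just quietly stops being shown.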

“We’re also working to make it harder for bad actors to profit from false news,” he writes. But again, we don’t get a definition of “bad actor” either. One assumes Russia is the bad actor referred to – but if Facebook were truly concerned about government propaganda and its effect on election outcomes, the crackdown would surely not be limited to one government. Are some governments bad actors and other governments good actors? Is some propaganda good and some bad? Are some sock-puppet accounts acceptable and others not? Can we get a breakdown?

Facebook has also been kind enough to help users figure out whether they were unfortunate enough to have come into contact with any Russian-linked posts. It’s part of its “action plan against foreign interference”. Again, we might benefit from a definition of “foreign interference.” Facebook is an international platform, so the potential exists for elections to be ‘interfered’ with through Facebook all over the world, not just in the United States. Does Facebook’s fight against foreign interference incorporate all those efforts equally? This kind of information would be really helpful, if Facebook would be kind enough to provide it.

Facebook is not alone in its mission to rid the world of nasty Russian propaganda. Twitter is at it, too. Last week, the company sent out emails to users warning them that they may have come into contact with Russian propaganda on the microblogging platform. Curiously, no similar warnings have been sent to users who came into contact with American propaganda online – despite the fact that we’ve known for years that the US government has been using sock-puppet accounts to spread its own propaganda and misinformation online.

Google has also dipped its toes in the water. Eric Schmidt, the executive chairman of Google’s parent company Alphabet Inc., said recently that Google was trying to create special algorithms and “engineer the systems” to make RT’s content less visible on the search engine.

Media coverage of Facebook’s comments was fairly uniform. Most outlets have been treating the blog posts as a ‘see, we told you!’ moment, focusing entirely on the Russia angle but ignoring the many other ways in which Facebook has itself attempted to corrupt the free flow of information and manipulated its users. The reporting is almost sympathetic: Poor innocent Facebook is coming to terms with the fact that sometimes bad things happen online.

The Washington Post called the Facebook blog posts the company’s “most critical self-assessment yet.” Another piece in the Post opines on Facebook’s “year of reckoning.” Reuters reported that the sharing of “misleading headlines” became a “global issue” after accusations that Russia had used Facebook to interfere in the 2016 election. The implication is almost that misleading headlines are some kind of new phenomenon and that Facebook is out there on the front lines of the battle.

Facebook wants you to stay mad about Russian ads. It wants you to believe that its democracy-loving executives are truly sorry and doing all they can to make the platform as good for democracy as possible. What they don’t want is for us to examine their own practices too closely. But that’s exactly what we should be doing – instead of congratulating them on their disingenuous foray into self-reflection.

January 24, 2018 | Full Spectrum Dominance, Russophobia, Timeless or most popular

Facebook, Google, Twitter Announce ‘Counterspeech’ Psyop to Keep Public Docile

By Jake Andersen | ANTIMEDIA | January 18, 2018

If you’re a radical or search for “extremist” content online, the biggest social networks and internet companies on Earth will soon be converting you into a docile moderate, or at least, they will try.

Facebook, Google, and Twitter have been screening and filtering extremist content for years, but on Wednesday, the gatekeepers of the internet confirmed to Congress that they are accelerating their efforts and will target users who may be exposed to extremist/terrorist content, redirecting them instead to “positive and moderate” posts.

Representatives for the three companies testified before the Senate Committee on Commerce, Science and Transportation to outline specific ways they are trying to combat extremism online. Facebook, Google, and Twitter aren’t just tinkering with their algorithms to restrict certain kinds of violent content and messaging. They’re also using machine learning and artificial intelligence (AI) to manufacture what they call “counterspeech,” which has a hauntingly Orwellian ring to it. Essentially, their goal is to catch burgeoning extremists, or people being radicalized online, and re-engineer them via targeted propagandistic advertisements.

Monika Bickert, Facebook’s head of global policy management, stated:

“We believe that a key part of combating extremism is preventing recruitment by disrupting the underlying ideologies that drive people to commit acts of violence. That’s why we support a variety of counterspeech efforts.”

Meanwhile, Google’s YouTube has deployed something called the “Redirect Method,” developed by Google’s Jigsaw research group. With this protocol, YouTube taps search history metrics to identify users who may be interested in extremist content and then uses targeted advertising to counter “hateful” content with “positive” content. YouTube has also invested in a program called “Creators for Change,” a group of users who make videos opposing hate speech and violence. Additionally, the video platform has tweaked its algorithm to reduce the reach of borderline content.
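
The companies have not published the production logic, but the general idea behind the Redirect Method can be sketched in a few lines: match a user’s query (or watch history) against a curated keyword list and, on a hit, surface counter-messaging ahead of the organic results. Everything in the sketch below – the keyword list, the playlist name, the matching rule – is a hypothetical stand-in for illustration, not Jigsaw’s implementation.

```python
# Hypothetical sketch of query-based redirection – not Jigsaw's production system.
REDIRECT_KEYWORDS = {"extremist slogan", "recruitment phrase"}   # assumed curated terms
COUNTER_CONTENT = ["counter_narrative_playlist_01"]              # assumed "positive" content

def maybe_redirect(search_query: str, organic_results: list) -> list:
    query = search_query.lower()
    if any(term in query for term in REDIRECT_KEYWORDS):
        # The user is treated as "at risk": counter-messaging is placed ahead of organic results.
        return COUNTER_CONTENT + organic_results
    return organic_results

print(maybe_redirect("extremist slogan video", ["video_a", "video_b"]))
# ['counter_narrative_playlist_01', 'video_a', 'video_b']
```

In practice the fixed keyword set would be replaced by a learned model, which is presumably what “using machine learning to dynamically update the search query terms” refers to in the steps listed further on.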

In her testimony, Juniper Downs, YouTube’s head of public policy, said, “Our advances in machine learning let us now take down nearly 70% of violent extremism content within 8 hours of upload and nearly half of it in 2 hours.”

On the official YouTube blog, the company discussed how it plans to disrupt the “radicalization funnel” and change minds. The four steps include:

  • “Expanding the new YouTube product functionality to a wider set of search queries in other languages beyond English.
  • Using machine learning to dynamically update the search query terms.
  • Working with expert NGOs on developing new video content designed to counter violent extremist messaging at different parts of the radicalization funnel.
  • Collaborating with Jigsaw to expand the ‘Redirect Method’ in Europe.”

By the end of last year, the company had already begun altering its algorithm so that roughly 30% of videos were demonetized. The company explained that it wanted YouTube to be a safer place for brands to advertise, but the move has angered many content producers who generate income from their video channels.

The effort to use machine learning and AI as part of a social engineering funnel is probably not new, but we’ve never seen it openly wielded on a vast scale by a government-influenced corporate consortium. To say the least, it is unsettling for many. One user commented underneath the post, “So if you have an opinion that’s not there [sic] agenda You are a terrorist. Free speech is dead on YouTube.”

For its part, Twitter’s representative told Congress that since 2015 the company had taken part in over 100 training events focused on how to reduce the impact of extremist content on the platform.

In a post called “Introducing Hard Questions” on its blog, Facebook discussed rethinking the “meaning of free expression.” The post posed a number of hypothetical questions, including:

  • “How aggressively should social media companies monitor and remove controversial posts and images from their platforms? Who gets to decide what’s controversial, especially in a global community with a multitude of cultural norms?
  • Who gets to define what’s false news — and what’s simply controversial political speech?”

The three tech giants have been under intense scrutiny from lawmakers who feel the platforms have been used to sow division online and even recruit homegrown terrorists. While the idea of using an algorithm to fight extremism online is not new, Facebook, Google, and Twitter have never before formed a unified front to collectively produce original online propaganda, and the specifics and scope of this effort remain vague despite the companies’ attempts at transparency.

Only recently, in the 2012 National Defense Authorization Act (NDAA), was the use of propaganda on the American people by the government formally legalized. Then-President Barack Obama continued strengthening government propaganda at the end of his administration with the dystopic Countering Disinformation and Propaganda Act of 2017, which created a kind of Ministry of Truth for the creation of so-called “fact-based narratives.”

It appears that while the government continues to strengthen its potential to conduct psychological operations (psyops), it is also joining forces with internet gatekeepers that can use their algorithms to shape billions of minds online. While one may applaud the ostensible goal of curbing terrorist recruitment, the use of psyops for social engineering and manufacturing consent could extend far beyond the original intent.

January 23, 2018 | Civil Liberties, Full Spectrum Dominance