
350,000,000 photographs are uploaded onto Facebook every day

Hans Block, Moritz Riesewieck

What we don’t see

Published on 26 October 2018


Throwing a net around the world

Almost 30 years ago the promise was made to connect people around the globe through an asynchronous, decentralized, net-like infrastructure. The “inventor” of the World Wide Web, Tim Berners-Lee, sensed its potential to change the world fundamentally: “This is experimental. However, it could start a revolution in information access.” The internet was supposed to become an emancipatory project of the many. Everything was meant to be linkable, commented on, supplemented, and discussed. From now on everyone, not just a few elites, would be both producer and user of content. The “prosumer” was born. The internet was supposed to become a field of experimentation for alternative lifestyles and for social and economic concepts. But what remains of all these millennial promises? We are experiencing the collective disenchantment of a utopia that once existed. Instead of mature co-creators of a globalized digital public sphere, we have become the naive users of a largely centrally organized, monitored infrastructure. We no longer explore; we move instead through an algorithmically adjusted world aligned with our consumer behavior. In short, we were robbed. Or rather, we willingly allowed ourselves to be robbed.


Invisible walls

The idea of making the world more open and connected has degenerated into a business model. We no longer explore the expanse of the Net on asynchronous detours, continually making new links and venturing into regions we have never seen before. Instead, more than three billion people worldwide use a handful of platforms such as Facebook, Instagram, YouTube, and Twitter. They may tell us they are the whole world and are only guiding us through an otherwise impenetrable jungle, but on these platforms we are in fact in walled gardens. The term comes from business administration, where it describes private-sector operators eager to embed a variety of different services in their product range. Facebook has already embedded video telephony, chat functions, news feeds, video streaming, and a marketplace for secondhand articles, and intends before long to extend its offering to a dating service and self-produced series—the expansion is underway. From the outset the architects of this walled garden did their utmost to portray these private-sector spaces as places of freedom and of the naturally peaceful coexistence of a diverse “community.”

Tech giants like Facebook have always been very careful to be perceived in public as neutral technical platforms. In contrast to traditional media, Facebook, for example, repeatedly points out that it doesn’t publish or even select content of its own, and that ultimately everything lies in its users’ hands. But in fact the apparently smooth flow of information and communication can only be maintained because much shared content—despite appearances—is curated. Or to stay with the metaphor of the garden: somebody has to clear the weeds so that the garden can grow, blossom, and thrive. But who determines what is and isn’t a weed? The owners of this digital walled garden—and they have every right to do so. After all, we are on private property, even though its owners have taken great care to make us forget this, promoting themselves as a charitable association. They decide what three billion people see or don’t see, what appears in the digital public realm and what is banished from it into invisibility. Obviously the operators of walled gardens prefer to conceal these processes. Their interventions, their arbitrary decisions about what may and may not appear here, are not meant to attract attention.

The invisible gardeners

The gardeners of these walled gardens—the social networks—are called “content moderators.” Even the euphemistic job title betrays how much the corporations play down this curating. Most users have no idea who these gardeners are, what they declare to be weeds, and which plants they allow to thrive. The operators maintain their silence. Their gardeners are meant to act in secret, preferably without being noticed at all. They sit in unknown places around the world, sometimes thousands of kilometers away from us, in countless cubicles, in front of thousands of monitors, eliminating everything that isn’t supposed to see the light of day. Facebook, Google, and Twitter don’t keep their platforms “clean” themselves; they outsource this dirty work to third-party companies in developing and emerging countries, which attend to it quickly and quietly, discreetly and cheaply.

In Manila, one of the first and largest outsourcing locations worldwide, the “content moderators” who work for Facebook are contractually obliged to speak of their employer only as the “Honey Badger Project.” Confidentiality agreements prevent them from sharing with their families what pops onto their screens by the second in their well-shielded offices: photographs and videos full of violence—decapitations, mutilations, executions, child abuse, torture, and sex in all variations. Where the analogue toxic waste of the Western world was for years transported by container ship, digital waste is now dumped via fiber-optic cable. Just as the so-called scavengers rummage through the gigantic tips on the edge of the city, thousands of “content moderators” in air-conditioned office blocks in Manila’s commercial district click their way through an endless toxic ocean of images and all manner of intellectual garbage. But unlike the wounds of the scavengers, those of the moderators are invisible. Full of shocking and abusive content, the photos and videos burrow into their memories, where they can have unpredictable effects at any time: eating disorders, loss of libido, anxiety disorders, alcoholism, depression, even suicide. This is the price we pay for our “clean” social platforms.

And as if that weren’t enough, along with the grave psychological consequences for the workers, our digital freedom of opinion also threatens to fall ill. For it can’t always be determined with certainty whether a text, image, or video should be deleted. What about all the controversial and ambivalent content posted by civil-rights activists, resistance fighters, investigative journalists from war zones, satirists, caricaturists …? Content moderators often decide about such material as swiftly as they do the clear-cut cases, and—sometimes mistakenly, sometimes following guidelines—make material disappear that could open people’s eyes, provide new perspectives, encourage thought, and prompt people to revise their opinions. The content moderators’ screens show them everything flagged as suspicious by automatic image- or text-recognition software during uploading, or reported as inappropriate by users of the social networks.

Whenever social-network executives bring themselves to talk to the wider public, they refer to the detailed guidelines their “gardeners” are required to adhere to. Who devises these guidelines, what lobbying interests lie behind the “representatives of civil society” who contribute to them, why the procedure takes place with so little transparency, and not least how many grey areas remain despite guidelines that the content moderators have to interpret within seconds: all of this is withheld from the public.

The pain of others

Airwars is a civil-rights organization based in London that specializes in evaluating photographs and videos made by people in war zones and uploaded onto social networks, so as to document such things as civilian victims of air raids and other injustices the military likes to keep quiet. With the aid of social networks, local people with camera phones can bring to light what journalists often don’t get to see because of their dependency on the belligerent forces. Here the spirit of the early years of the World Wide Web can still be felt, when people around the world euphorically dreamt of the emancipatory potential of the internet. But these often relentlessly brutal videos, from the wars in Syria and Iraq, for example, collide with the promise given above all by Facebook, Instagram, and YouTube to their advertisers: to provide a “safe space” where people can feel good and therefore spend a lot of time. Civil-rights organizations like Airwars or the Syrian Archive frequently complain of evidence videos disappearing from social networks before they are able to analyze them. Because of the market dominance of Facebook and co., and the often aggressively enforced political conformity of all conventional media by the respective authorities in autocratic states, civil-rights activists have hardly any other channels through which to make their footage available quickly and easily to a wider public. But as valuable as phone videos may be in publicizing war crimes, are they also appropriate for the users of social networks? Is it reasonable to make them accessible to billions of people via Facebook, YouTube, Twitter, and so on?

The philosopher and essayist Susan Sontag spent much of her life thinking about the effect and the problematics of disturbing images. In her 2003 essay Regarding the Pain of Others she writes: “Harrowing photographs do not inevitably lose their power to shock. But they are not much help if the task is to understand. Narratives can make us understand. Photographs do something else: they haunt us.” In contrast to her world-famous essay On Photography, from 1977, in which she primarily describes the danger that the mass presence of images will make people insensitive to the sufferings of others, in 2003 Sontag recognized the necessity of an unsparing portrayal of war, for example, and considered the possibility of photographic images calling a halt to the atrocities they depict. Sontag didn’t live to see the phenomenon of images going viral on social media.

If people are no longer susceptible to facts and arguments, is being haunted by images perhaps the very thing needed to provoke them to rethink? This is the opinion of the American media scholar Ethan Zuckerman at MIT. Together with colleagues he was able to show how the photograph of the drowned Syrian child on the beach at Bodrum, which was shared massively on social networks in 2015, fundamentally altered the debate about the “refugee crisis”: with scientific surveys of the choice of words in the public debate on refugees, Zuckerman documented how the distribution of the photograph changed the attitude of millions of people, making them think about the humanitarian crisis with more empathy and solidarity. Zuckerman and others therefore call on the operators of social networks to publish uncomfortable, shocking content if it serves clarification.

At the same time, the influence of those calling for greater protection from disturbing content is growing. In the US, where even being scalded by a hot coffee can lead to a successful lawsuit against its vendor, media are increasingly prefixing content that could be perceived by certain people as distressing with a “trigger warning.” Originally used in connection with portrayals of extreme violence, so as to warn sufferers of post-traumatic stress disorders that watching or reading a particular contribution could trigger stimuli with far-reaching consequences, “trigger warnings” are now even being demanded by students at American universities, for example for the mythological classic Metamorphoses by the Roman poet Ovid. Over two thousand years old, this work in verse contains passages in which lecherous gods sexually coerce women. But for many students schooled in representational critique, “trigger warnings” aren’t enough. They call for such works to be banned from the curriculum. For them the very rendition and presence of such content is a continuation and normalization of sexual violence and other crimes.

The idea that the mere presence of such images or descriptions cements the status quo isn’t new and can’t be denied. Indeed, neuroscientists have now shown that the mere repetition of particular visual or verbal patterns—even when they are given a critical commentary—is enough to establish certain patterns in the brain whose effects can hardly be controlled. So caution in the face of the hard-to-calculate effects of images and descriptions is certainly advisable. But how far do we want to take this caution? The right to physical integrity is guaranteed as a human right by many constitutions around the world. In the EU Charter of Fundamental Rights this right also expressly applies to “mental integrity.” But even if the possible traumatic effect of images and texts is difficult to predict and can have severe consequences: do we want, as a society, to be so cautious as to accept a loss of enlightenment and of social consciousness of injustice?

Trauma arises when people are incapable of processing what they have experienced—or even merely seen. The decisive factor here is above all whether it can be verbalized: can the affected person discuss the experience with other people? So instead of preventively shielding ourselves from possibly disturbing content, we should, as a society, ensure that fewer people are left alone with their experiences. Digital social networks present us with a great challenge here: when people see disturbing content on the Net, there is often no one else around with whom to share their responses. Social networks simulate community, but they tend to encourage social atomization and throw people back on themselves. People rarely put down roots in walled gardens.

Be smart and fix things

Confronted with the problem of people’s differing sensitivities to violence, sex, hatred, and other content, Mark Zuckerberg declared his wish to leave it entirely up to each user in future to decide what he or she gets to see, through personal filter settings that regulate what is held back. If we bear in mind that more than three billion people now use the social networks, such individual filtering of reality would have far-reaching consequences for the public realm. Anyone could decide to remain undisturbed by images of war and other violent conflicts. Instead of discussing solutions to social problems on the basis of at least some degree of shared social awareness, ever more parallel worlds would arise: like Pippi Longstocking, people would make social reality “widdle widdle how they like it.” Contemporary phenomena like filter bubbles and echo chambers would be nothing in comparison.

So on the one hand young entrepreneurs with lots of ideas are shaping the digital world at breakneck speed along the lines of “Move fast and break things,” while on the other a breathless politics tries to chase after something it can no longer catch up with: the frightening power of Silicon Valley.

The uniformly imposed guidelines of the social platforms frequently come up against very different legal frameworks, and this is increasingly leading to problems. For a while now the European Union has been fighting to make companies responsible for what lands on their platforms, calling on them to delete everything that breaks the law. The EU parliament, for example, has been urging blanket protection against terror, hate propaganda, and content rated as glorifying violence. Because of the increasingly massive subversion of social networks by terrorist organizations, propagandists, and political extremists, a majority of EU politicians now argue for the completely automated deletion of suspicious content. To comply with EU regulations, companies see themselves compelled to develop fully automated filters, so-called upload filters, which make sure that no law is broken. On the basis of certain visual or linguistic patterns, these AI-based applications are intended to identify terror, violence, hatred, and rabble-rousing at the moment of uploading, and to delete the tweets and posts that contain them before they appear on the platform at all. Because companies try to minimize their liability risk, which is associated with high fines, they prefer to filter too much rather than too little. A logical consequence is that completely unsuspicious, legal content falls victim to the filters. And the legal wording doesn’t define what happens in the case of falsely deleted content.

What at first sounded like the perfect solution swiftly ended in catastrophe. As early as August 2017 YouTube tried out algorithms to examine and delete disturbing videos. The apparent advantage was obvious: no one had to be confronted with brutal violence any more; machines relieved human controllers of the burdensome work. But only a few weeks later there was an outcry from hundreds of activists: YouTube had deleted thousands of important videos, thus hampering and even preventing the clarification of international war crimes. How such mistakes are made is very difficult to determine in retrospect, for the decisions are made by artificial neural networks. Developers have long been working on an artificial intelligence similar to that of humans. In principle every arithmetical or logical function can be calculated by neural networks, which also have the ability to change themselves, so that independent learning processes can be simulated. Human beings sometimes lack insight into these processes. Who should be held responsible for mistakes, and can they be precisely rectified, if it isn’t known how they are caused?
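The over-blocking incentive described above can be made concrete with a minimal sketch. Everything in it is hypothetical: the classifier scores, the threshold values, and the example uploads are invented for illustration and are not drawn from any actual platform’s system. The point is only to show how lowering the decision threshold trades missed violations for wrongly suppressed legal content.

```python
# Hypothetical sketch of an upload filter's decision step (illustration only).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Upload:
    content_id: str
    risk_score: float  # 0.0 = clearly harmless, 1.0 = clearly violating (invented scores)

def filter_uploads(uploads: List[Upload], threshold: float) -> Tuple[List[Upload], List[Upload]]:
    """Block every upload whose risk score reaches the threshold; publish the rest."""
    blocked = [u for u in uploads if u.risk_score >= threshold]
    published = [u for u in uploads if u.risk_score < threshold]
    return blocked, published

uploads = [
    Upload("war-crime-evidence-video", 0.62),  # legal documentation, visually violent
    Upload("satirical-cartoon", 0.45),         # legal, but ambiguous to a classifier
    Upload("terror-propaganda-clip", 0.91),    # clearly illegal
    Upload("holiday-photo", 0.03),             # clearly harmless
]

# A platform optimizing for accuracy might block only near-certain violations ...
blocked, _ = filter_uploads(uploads, threshold=0.90)
print([u.content_id for u in blocked])  # ['terror-propaganda-clip']

# ... while one minimizing its liability risk lowers the threshold and thereby
# also suppresses ambiguous but legal content, such as evidence footage or satire.
blocked, _ = filter_uploads(uploads, threshold=0.40)
print([u.content_id for u in blocked])
# ['war-crime-evidence-video', 'satirical-cartoon', 'terror-propaganda-clip']
```

In this toy setup, the strict liability regime corresponds to the lower threshold, which also blocks the legal evidence footage and the satire: exactly the kind of collateral deletion the activists complain about.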

The global constitution

The state is divesting itself of a central responsibility, namely deciding what may be said and shown in a digital world, and what is clearly illegal and must therefore be banned from the digital realm. More and more state tasks are being outsourced to private companies. And we shouldn’t be surprised when these companies don’t primarily pursue noble social aims but, as business enterprises, strive to maximize their profits.

For this reason it is all the more important for us to take back the power of decision over digital space. One of the central questions will be how to balance freedom of opinion against people’s need for protection. It is a fundamental issue: do we want an open or closed society in digital space? This is essentially about freedom versus security.

As the Net interweaves more and more with the material world around us, in only a few years’ time we will hardly be able to distinguish between analogue and digital space. What applies to the Net will also become characteristic of life on the streets, in the squares and parks, and in the rest of the public sphere, because of the increasing superimposition of virtual and concrete space.

The worldwide upsurge of nationalism shouldn’t hide the fact that, according to surveys, hundreds of millions of people feel themselves to be world citizens. The “community standards” with which the social networks more or less succeed in reconciling the digital behavior of three billion people from all over the world will increasingly be seen by these world citizens as a kind of global constitution. If—as is becoming apparent around the world—we leave law enforcement to the transnational companies that operate the digital public sphere, then this sense of a world constitution will even take on tangible legal effect.

The opportunity to take back the walled gardens still exists, if we pull down their walls and reverse the “land grab” carried out by Facebook, Google, Twitter, and co. The chance is still there to call for and co-develop diverse, fluid, democratic digital institutions, and to give shape—democratically and with the maximum participation of people from different ethnic and cultural backgrounds—to the constitution under which billions of people will live in future. It is a utopian project, and it will probably soon become clear that communication between people of such different backgrounds must take a more arduous road than we cosmopolitans once euphorically imagined. But it is a project that would take up the original idea of the World Wide Web.

“The goal of the Web is to serve humanity. The Web as I envisaged it, we have not seen it yet. The future is still so much bigger than the past.” (Tim Berners-Lee)


All images from the film The Cleaners
© Max Preiss/Axel Schneppat, gebrueder beetz filmproduktion


Hans Block


is a German theatre, radio, and film director and musician. He studied percussion at the University of the Arts and directing at the Ernst Busch University of Performing Arts in Berlin. A number of his productions (at the Maxim Gorki Theater Berlin, Schauspiel Frankfurt, Münchner Volkstheater, and elsewhere) were invited to the Radikal Jung festival of directing or the directors’ studio at the Schauspiel Frankfurt. His radio play Don Don Don Quijote – Attackéee was a winner at the international PRIX MARULIC festival. He develops cross-media narrative forms together with Moritz Riesewieck under the name of Laokoon.
Moritz Riesewieck


is an author and theatre and film director. He first studied economics, then directing at the Ernst Busch University of Performing Arts in Berlin. His theatre works have been shown at Schauspiel Dortmund, at the International Forum of the Berliner Theatertreffen, and in Mexico City, among other venues. In 2015 Moritz Riesewieck was invited to the Heidelberger Stückemarkt and awarded the Elsa Neumann Scholarship of the State of Berlin. His essay Digitale Drecksarbeit – Wie uns Facebook & Co. von dem Bösen erlösen (dtv) was recently published as a book. He develops cross-media narrative forms together with Hans Block under the name of Laokoon.