The Monster from Menlo Park

And, while this moral has its origin in ancient myth, the novel’s alternate title also suggests it’s one that will forever persist—so long as humanity continues to modernize and progress.

Modern variations upon the myth include Stanley Kubrick’s Dr. Strangelove, a film which ultimately questions the definitions of “progress” and “modernization,” as it satirizes East-West relations and the hubristic act of deploying nuclear weapons during the Cold War. Other variations, such as The Terminator and The Matrix, depict future worlds colonized by human-invented AI and machine algorithms. Throughout these narratives, humans are punished for building something they cannot, in the end, harness. Perhaps what distinguishes Frankenstein, though, is its emphasis upon a single character—Victor Frankenstein—who remains solely responsible for the monster he creates.

The year 2018, it seems, has seen another iteration of the myth. For many reporters, including those from The New York Times, Sky News, The American Conservative and Business Today, as well as various interviewees featured on platforms like CNBC, Mark Zuckerberg is experiencing his “Frankenstein moment.” What’s remarkable is how the news from Menlo Park contains all of the narrative elements mentioned in the films above: deteriorating relations between the U.S. and Russia, the dangers surrounding AI development, and, behind all this, the Frankenstein-like figure of Zuckerberg himself. Exactly 200 years after the publication of Frankenstein, it looks as though Zuckerberg is the most modern Prometheus, a Frankenstein for our time.

No matter how many times he repeated the word “control” during his testimony on Capitol Hill, many onlookers felt Zuckerberg had already lost it.

And yet perhaps it’s worth considering in more detail how useful this widely embraced analogy is, and whether it distracts our attention from the most significant matter at hand: the future regulation of social media.

Much attention focuses on Zuckerberg the man, which has given rise to several popular characterizations. These include depictions of him as a super-intelligent yet emotionally lacking robot, one whose libertarian idealism and laissez-faire attitude towards online privacy embody much of what’s wrong with his generation. Outlets like the BBC reported that “acting normal” on Capitol Hill was going to be the 33-year-old’s biggest obstacle: his “challenge will be to appear human and come across as genuinely remorseful.” One Saturday Night Live skit portrayed a jumpy Mark Zuckerberg who told badly rehearsed jokes and counted the seconds he needed to maintain eye contact in order to appear “normal.” Like Frankenstein, Zuckerberg is shown to struggle with his emotions (when he is shown to have any), and this feeds into the image of him, again like the obsessive Genevese doctor, “engaged, heart and soul, in one pursuit,” only ever “deeply engrossed in [his] occupation” of perfecting and streamlining his monster. This is all very well, but it tends to obscure the fact that Facebook is an enormous company, one which has the resources to enact change.

Of course, this is not to suggest that Zuckerberg should not be scrutinized—that scrutiny surely comes with the territory of his occupation. And it certainly does not suggest that he should not be held accountable for the ways in which Facebook has mishandled a number of serious cases involving data privacy.

For at least a couple of years, it’s been clear that something has needed to change.

Facebook has been desperately navigating a number of free-speech and moderation issues with governments all over the world. It’s common knowledge that Russian operatives used targeted Facebook ads to influence the 2016 presidential election, while the recent Cambridge Analytica leaks shed further light on how Facebook could be used as a major political instrument (it’s now estimated that the U.K. consultancy firm harvested data from up to 87 million users). In Myanmar, activists have accused Facebook of censoring Rohingya Muslims, who are under attack from the country’s military. And in Africa, the social network continues to face accusations that it helped human traffickers extort victims’ families by leaving up abusive videos. It’s a lose-lose situation: regulate too much and the platform will be accused of censorship; regulate too little and bad actors will exploit other users.

Zuckerberg himself—for so long resistant to the idea of any social media regulation—recently conceded, “I’m not sure we shouldn’t be regulated.” Following this remark, Facebook’s Chief Operating Officer Sheryl Sandberg reiterated to CNBC that “Mark has said it’s not a question of if regulation; it’s a question of what type.” And yet the question of “if” should remain on the agenda, since data regulation varies around the world and can mean any number of things—from outright app bans, through highly prescriptive laws, to virtually no regulation at all (only a set of platform guidelines).

So, what should be done? ProPublica’s Julia Angwin has put forward four ideas:

  1. Impose fines for data breaches
  2. Police political advertising
  3. Make tech companies liable for objectionable content
  4. Install ethics review boards

In theory, this seems like a reasonable modus operandi: the tech companies continue to run their platforms, while governments hold the power to impose penalties. Yet even Angwin’s first point presents a whole host of difficulties. Take the Cambridge Analytica leaks, for instance. Zuckerberg stressed that Facebook was not responsible for the breach that saw 87 million users’ data harvested: while the platform permitted access to its millions of users’ data, it was Aleksandr Kogan’s misuse of that data which constituted the breach.

Facebook is a social network, and a popular, public one at that, which is why it will always attract bad actors. It has grown organically over time, through a constant back-and-forth with the millions of users who sustain it. Facebook was not, as the Frankenstein analogy intimates, a sudden creation, animated out of nothing. It relied—and still relies—on its users, who ought to know what using Facebook and similar social media entails. It should not be surprising to hear that Facebook is a source of data for marketers and advertisers. Facebook and other tech companies monetize the data users share while tracking their movements across the web. It’s not pleasant to acknowledge, but it is the truth.

This is how the internet operates now; it is its very logic, and it would be naïve not to admit it.

And yet, of course, users need transparency. Many feel that, right now, Facebook does not provide appropriate levels of it. And there’s a difference between control and transparency. “You have control over your information,” Zuckerberg repeated throughout his testimony. But as the Washington Post put it, “that’s like saying anyone can control a 747 because it has buttons and dials.” And it should be remembered that, once a plane is up and running, pilots tend to switch on the autopilot.

“If you told me in 2004,” Zuckerberg mentioned in a CNN interview in March 2018, “that a big part of my responsibility today would be to help protect the integrity of elections against interference by other governments, um…I wouldn’t have really believed that that was going to be something that I would have to work on 14 years later…but we’re here now and we’re going to make sure we do a good job.”

It is well documented: Zuckerberg has a tendency to promise a lot, only to deliver very little. It is in everyone’s interests—not least his own—that he does do a good job. But we shouldn’t wait to find out. We ought to see these recent events as an opportunity to take responsibility, admit our agency, and educate ourselves about the increasingly complex issues the internet continues to throw up. What The New Yorker identified as the ambient dislike for Zuckerberg should not distract us from the central fact of the debate: the security of data begins with the user. And it is imperative that users are able to understand what being a Facebook user really involves. If no measures are eventually put in place to explain this, then we risk losing all control—and another monster will be reanimated.

→ Check out the EU General Data Protection Regulation, coming into effect on 25 May 2018, which could drastically change data privacy and regulation globally
