The primary issue with Facebook is not their platforms or algorithms - those can be fixed. It is their leadership.
The company was born out of deception, after Mr. Zuckerberg reneged on a contractual agreement with his Harvard classmates. They had approached him with an idea for a social networking platform and sought his help to build it. In 2009, Mr. Zuckerberg paid his ex-classmates around $65 million to settle the resulting lawsuit.
I am not suggesting Mr. Zuckerberg set out to build a hate-filled platform, but I am saying that his lack of ethics and his megalomania, combined with a desire to blindly maximize profits, make Meta’s platforms uniquely dangerous.
The reality is even worse.
It is not just that Mr. Zuckerberg and Ms. Sandberg have chosen to ignore real-world harms. In a bid to make their service more addictive to users, they have actively designed and conducted experiments to find new ways to manipulate emotions.
Independent studies have shown that a large majority of health news shared on Facebook is fake or misleading, yet for a long time the platform embraced conspiracy theorists, anti-vaxxers and climate deniers, because fake news drives more engagement than boring facts, which in turn translates into more advertising revenue for the company.
In my mind, the pivotal point for Facebook came in 2012 when General Motors, one of the largest advertisers in the U.S., decided to stop advertising on Facebook, saying that “paid ads on the site were having little impact on consumers' car purchases.”
GM’s announcement came one week before Facebook’s IPO and raised some uncomfortable questions, not only about the company’s ability to maintain its 88% revenue growth from the prior year, but also about its astronomical valuation, one based entirely on an ad-driven revenue model.
At the time, Anant Sundaram of the Tuck School of Business at Dartmouth noted that the average price-to-earnings (P/E) ratio for the majority of US companies over the last one hundred years had been around 15, but Facebook’s P/E ratio was 100.
He added that “it would take Facebook 100 years to generate enough profits to pay for itself” and it seemed investors were betting the company’s profits would “double, and then double again, and then double again — within the next few years”. He summed up the challenge by saying that to succeed, Facebook would “need to attract 10 percent of all advertising dollars spent on the planet…”
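To see just how aggressive that bet was, here is a quick back-of-the-envelope check of Sundaram’s arithmetic, using only the two P/E figures he cited (a minimal illustrative sketch, not a valuation model):

```python
# Back-of-the-envelope check of Sundaram's arithmetic (illustrative only).
historical_pe = 15   # long-run average P/E he cites for US companies
facebook_pe = 100    # Facebook's P/E around the time of its IPO

# At a constant share price, profits would have to grow by this factor
# just to bring Facebook's P/E down to the historical average:
required_growth = facebook_pe / historical_pe
print(f"required profit growth: {required_growth:.1f}x")  # ~6.7x

# "Double, and then double again, and then double again" is 2**3 = 8x,
# roughly the growth investors were implicitly betting on.
print(f"three doublings: {2 ** 3}x")
```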
When you combine this unrealistic growth expectation with an unscrupulous founder, the result is what Frances Haugen described as a company that has, “over and over again, shown it chooses profit over safety”.
We know that Cambridge Analytica, the data analytics firm that briefly worked with Trump’s election team in 2016, legally bought and harvested the personal data of 50 million Facebook users (and their friends). It then used this data to try to influence and manipulate voting behaviour.
While this was the first time most people became aware of real-world dangers and the cost of giving away personal information for “free”, the red flags around Mr. Zuckerberg and Ms. Sandberg’s business decisions had been apparent for many years prior.
In 2013 a tech consultant revealed that Facebook collected content that people typed on the site but erased and never actually posted. The company justified this intrusive data collection by arguing that it could better understand its users if it knew their “self-censored” thoughts.
In 2014 the New York Times reported that Facebook had manipulated people’s newsfeeds, showing some users overwhelmingly negative posts and others overwhelmingly positive ones. In effect, the company was using people as lab rats in a “psychological study to examine how emotions can be spread on social media”. At the time the lead researcher at Facebook, Adam Kramer, posted a public apology, which has since disappeared.
Facebook never felt the need to inform users or seek their consent before making them part of the experiment, a precondition for any ethical research. After being outed, Facebook argued that users had given “blanket consent to the company’s research as a condition of using the service”.
When Mr. Zuckerberg bought WhatsApp in 2014 he promised to protect user privacy. In fact, WhatsApp’s co-founder penned a blog post assuring users that “Respect for privacy is coded into our DNA…” and that they would continue to “know as little about you as possible…”. Less than two years later Mr. Zuckerberg went back on his word, mandating that WhatsApp share personal information with Facebook.
In 2015, Mr. Zuckerberg launched a seemingly altruistic initiative, called Internet.org, to provide free internet access to the poorest people in the world. This too turned out to be smoke and mirrors. Arguably, Mr. Zuckerberg’s real goal was to create a global monopoly for Facebook by building a walled-off internet.
The condition for the “free internet” was that Facebook would decide the basket of websites people could access. No other social networks were included, and Google Search was also excluded. Mr. Zuckerberg likely wagered that if people’s primary experience of the internet was on Facebook, they would come to think of Facebook as the internet. You can read my piece on “How Facebook Can Fix Internet.org”.
Based on internal documents reviewed by the Wall Street Journal, we now know that many of these poor users ended up being charged, collectively, millions of dollars a month for their “free” internet via carrier data charges, due to “software problems” at Facebook.
In 2016, the Wall Street Journal discovered that Facebook was attempting to spread its tentacles into the personal lives of non-Facebook users by tracking them across the internet. Under the guise of showing people more targeted ads, their plan was “to collect information about all Internet users through 'like' buttons and pieces of code embedded on websites.”
The Wall Street Journal reported that Facebook had been inflating the average viewing time for video ads on its platform by as much as 900 percent, for over a year. A newly unredacted filing from a 2018 lawsuit in California, first reported by the Financial Times, goes further: it claims that Facebook knowingly overestimated its “potential reach” metric for advertisers, largely by failing to correct for fake and duplicate accounts. According to the filing, Facebook COO Sheryl Sandberg acknowledged problems with the metric in 2017, and a product manager, Yaron Fidler, proposed a fix that would correct the numbers. But the company allegedly refused to make the changes, arguing that it would have a “significant” impact on revenue.
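It is worth seeing how an average watch-time figure can be inflated so dramatically. The error was widely reported at the time to stem from excluding very short views from the denominator; here is a toy illustration with invented numbers (an assumption-laden sketch, not Facebook’s actual computation):

```python
# Toy illustration: how excluding short views from the denominator
# inflates an "average watch time" metric. The 3-second cutoff and all
# view durations here are invented for illustration.

watch_times = [0.5, 1.0, 1.5, 2.0, 40.0]  # seconds, one entry per view

# Honest metric: total watch time divided by ALL views.
true_average = sum(watch_times) / len(watch_times)       # 9.0s

# Inflated metric: total watch time divided by long views only.
long_views = [t for t in watch_times if t >= 3.0]
inflated_average = sum(watch_times) / len(long_views)    # 45.0s

print(f"true average:     {true_average:.1f}s")
print(f"inflated average: {inflated_average:.1f}s "
      f"({inflated_average / true_average:.0f}x)")
```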
In 2018, a U.N. fact-finding mission pointed to the role of social media networks, and Facebook in particular, in fueling hate speech against the Rohingya minority in Myanmar. The report said that the “incitement to violence” was “rampant” and “unchecked.” The chair of the committee added that in Myanmar “social media is Facebook”, and “for most users [in Myanmar], Facebook is the internet.”
Independent research going back to 2004 has shown that social media detracts from healthy face-to-face relationships and reduces time spent on meaningful activities while increasing sedentary behavior.
This can lead to internet addiction, which in turn erodes self-esteem through negative comparisons people make on sites like Instagram. But skeptics claimed it was not clear whether “people with lower self-esteem are more likely to use social media, rather than social media causing lower self-esteem…”
In 2017, two academic researchers conducted a rigorous longitudinal study and published the results in the American Journal of Epidemiology, definitively answering this question.
Their findings concluded that using Facebook was “consistently detrimental to mental health” and that both “liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.”
In 2018, another comprehensive study, by the University of Pennsylvania, confirmed a direct link between social media usage and depression and loneliness, connecting Facebook, Snapchat, and Instagram use to decreased well-being.
You might ask: if all social media is harmful, why single out Meta (formerly Facebook)? It’s a valid question, and I offer a few reasons why we need to start with Meta.
First, no other social platform comes close to matching Meta’s global reach and scale.
As of Q3 2021, Facebook had more than 2.89 billion monthly active users, and Instagram and WhatsApp had crossed 2 billion users each. TikTok is the only other social platform with more than one billion users. Compare this with fewer than 400 million people on Twitter, 478 million on Pinterest, and 514 million on Snapchat.
Meta owns three of the four largest social networks on earth, which means Mr. Zuckerberg alone has the power to control and manipulate vital news, daily information and communication flow for more than half the planet’s population.
Second, consider that in many countries, Facebook’s platforms are not just dominant; they are the primary mode of communication. In India, around 340 million people use Facebook, and 400 million use WhatsApp to communicate daily.
The world’s largest democracy has become a case study in the real-world dangers of one company having unchecked power over people’s daily lives, through uncontrolled content fueled by opaque algorithms.
In 2019, documents leaked to the Associated Press revealed that a Facebook employee had created a dummy account to test how the platform’s algorithms affect new users in India. The results shocked the company’s own staff.
In less than three weeks, the test account’s newsfeed turned into a cesspool of fake news, vitriol and incendiary images and videos. Bloomberg reported that there were “graphic photos of beheadings, doctored images of Indian air strikes against Pakistan and jingoistic scenes of violence”. In documents released by Frances Haugen, a staffer wrote in a 46-page internal report: “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life.”
This test was significant because it was designed to focus exclusively on content recommended by Facebook’s algorithms, rather than posts from friends, family or others on the platform.
Additional documents reviewed by the Associated Press show that Facebook had been aware of this problem for years, and had even flagged India as one of its most “at risk” countries in the world, yet it struggled to limit the spread of vitriol in its largest and fastest-growing market.
The test highlighted another problem: because the majority of posts were in Hindi, Facebook’s content moderation algorithms were unable to detect them. Compounding this challenge, Indians also use different blends of Hindi, including Hinglish, a mix of Hindi and English words that is extremely difficult for any algorithm to decipher because it is often improvised phonetically as people type.
Consider that in India alone there are 22 official languages and dozens more dialects, and globally there are over 7,000 languages spoken, not counting dialects. As of 2019, Facebook supported 111 languages, but translations of its community guidelines and content moderation existed in only 41 of them.
In essence, Meta’s public pledges to improve its content moderation algorithms and hire thousands more human moderators will not solve this problem. According to internal documents reviewed by CNN, Facebook’s own researchers concluded that the company is not in a position to effectively address hate speech and misinformation in languages other than English.
Another internal study, reviewed by the Washington Post, found that between 2017 and 2019 Facebook’s ranking algorithm gave five times more weight to posts that users responded to with an “angry” reaction than to those that evoked other reactions, such as a “like”. The newspaper concluded that such posts, while more engaging, were far more likely to include “misinformation, toxicity and low quality news”.
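To make the mechanics concrete, here is a minimal sketch of how such reaction weighting can push divisive content up the feed. The weights follow the ratio in the Post’s reporting (an “angry” counts five times a “like”); the scoring function itself is a simplified assumption, not Facebook’s actual ranking code:

```python
# Minimal sketch of reaction-weighted ranking (a simplified assumption,
# not Facebook's actual algorithm). Weights follow the reported ratio:
# an "angry" reaction counts five times as much as a "like".
REACTION_WEIGHTS = {"like": 1.0, "angry": 5.0}

def engagement_score(reactions: dict) -> float:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(kind, 1.0) * count
               for kind, count in reactions.items())

# A divisive post with far fewer total reactions outranks a popular one:
calm_post = {"like": 1000}                   # score: 1000.0
divisive_post = {"angry": 300, "like": 100}  # score: 1600.0
print(engagement_score(calm_post), engagement_score(divisive_post))
```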
The bottom line is that Mr. Zuckerberg and Ms. Sandberg have shown time and again that they have no real intention of reducing the vitriol and misinformation on their platforms. The closest thing we have to a smoking gun showing that they prioritise engagement over well-being is a 2011 internal email exchange involving Ms. Sandberg, written as Facebook was preparing to take on Google’s new social network, Google+.
The emails are included as evidence in an antitrust case filed by 46 US state attorneys general, the District of Columbia and Guam. In the exchange, Ms. Sandberg writes, “For the first time, we have real competition and consumers have real choice…”
At the time, the company was planning to remove users’ ability to untag themselves in photos, but given the competitive situation, it decided internally to hold off “…until the direct competitive comparisons begin to die down.” The suit argues this is proof that Facebook preserves user privacy when it faces external threats, but degrades it when those threats dissipate.
In late 2021, after seeing alarming signs of deteriorating mental health among youth, the U.S. Surgeon General issued a national advisory. It cited, as one of the factors contributing to the crisis, the fact that “social media companies were maximizing time spent, not time well spent.”
The advisory was prompted by an alarming rise in teen emergency room visits for suicide attempts: among adolescent girls, such visits surged 51% in early 2021 compared with the same period in 2019.
The Surgeon General’s findings are supported by another 2021 study, which found that non-educational screen time for teenagers doubled during the pandemic, from an average of 3.8 hours a day to 7.7 hours. The researchers directly associated the increased screen time with adverse health outcomes, including weight gain and increased stress.
We also now have hard evidence, from a Washington Post and ProPublica investigation, that groups on Facebook played a key role in spreading misinformation and false narratives between Election Day and the January 6th siege of the US Capitol. The investigation found at least 650,000 posts questioning the legitimacy of Mr. Biden’s victory, with many “calling for executions and other political violence”.
An exasperated Facebook employee wrote on an internal forum on January 6th: “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time, and we shouldn’t be surprised it’s now out of control.”
The fact is that any other company faced with so much internal and external evidence of its harm to society, and particularly to young children, might seriously take stock and reconsider its business model. However, Meta, under Mr. Zuckerberg and Ms. Sandberg, has demonstrated that it has no real intention of doing so.
Sure, they continue to offer cosmetic changes, but these do nothing to solve the underlying problems. Take, for example, Facebook’s creation of an independent oversight board. With just 20 members, the board can review only a tiny subset of cases, and only after the damage has been done. Not surprisingly, reporters have found that Meta has been less than honest with its own oversight board.
Even now, Meta’s leadership refuses to take any responsibility. Andrew Bosworth, soon to be their new CTO, recently told Axios that “society” was responsible for misinformation. He said, “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing.”
This is not surprising, since Mr. Zuckerberg has told employees not to apologise. On the company’s earnings call after Ms. Haugen’s revelations, he said this was a “coordinated effort to selectively use leaked documents to paint a false picture of our company”.
If Mr. Zuckerberg has nothing to hide, one wonders why, in the weeks following Ms. Haugen’s disclosures, Meta imposed new rules limiting internal access to “research discussions on topics, including mental health and radicalization”, and why researchers were told “to submit work on sensitive topics for review by company lawyers.”
Over the years, Mr. Zuckerberg has publicly called on lawmakers to regulate social media platforms. In 2019, he penned an op-ed in the Washington Post, saying “I believe we need a more active role for governments and regulators” and adding “Lawmakers often tell me we have too much power over speech, and frankly I agree.” He asked Congress to regulate important online issues like free speech, harmful content, election integrity, privacy and data portability.
From an honest broker this might seem like a reasonable request, but this is Mr. Zuckerberg we are talking about.
Aside from the deep partisan divisions that forestall any meaningful legislation being enacted by Congress, Mr. Zuckerberg is aware that half the US Senate is 65 or older. The current 117th Congress is the oldest in two decades: the average age of senators is 63.9, the average age of House members is 58.3, and twenty-one senators are between the ages of 70 and 80.
There is also a skills gap within Congress. Of the current 535 voting members and six non-voting delegates, only 11 (10 in the House and one in the Senate) have an engineering degree or technical background.
Mr. Zuckerberg is still not taking any chances and has been quietly spending millions to build a powerful D.C. lobbying arm. Over the last decade Big Tech firms have become the dominant lobbying group in Washington, overtaking Big Oil and Big Tobacco.
Meta, which was not among the top eight spenders in 2017, has since become, along with Amazon, the largest individual lobbyist. Between 2018 and 2020, Facebook increased its lobbying spend by a whopping 56%.
In 2020, after lawmakers began to increase their scrutiny of tech companies, Meta spent more on lobbying than all the other Big Tech firms. More recently, in the quarter ending September 2021, after the whistleblower Ms. Haugen came forward, Meta nearly outspent the entire D.C. lobbying industry.
Their goal, it would seem, is to overwhelm the small handful of lawmakers who understand the complexities of social media and technology, ensuring that they are outgunned and outvoted. To achieve this, Meta’s army of lobbyists routinely wines, dines, woos and whispers in the ears of the majority of lawmakers.
The Wall Street Journal reported that the day after Ms. Haugen went public, Meta’s lobbying arm went to work.
First, they called lawmakers and advocacy groups on the right, telling them that Ms. Haugen was trying to help Democrats. Next, they reached out to Democratic lawmakers to say that Republicans were focused on “the company’s decision to ban expressions of support for Kyle Rittenhouse”, the teenager who killed two people during unrest in Kenosha, Wisconsin.
Both Republicans and Democrats familiar with the company’s outreach told the WSJ that Meta's goal was clearly to sow discord along partisan lines and muddy the waters so the two parties would not reach consensus on tough new rules governing social media companies, and Meta in particular.
We know that social media has adverse effects because the algorithms are designed with monetisation in mind: the more time you spend on these platforms, the more opportunities there are to show you ads. As a result, harassment, manipulation and misinformation are rife in an environment where gaining followers and increasing likes depends on getting noticed.
With the volume of noise and clutter on these platforms today, the more controversial, vitriolic and outrageous a post, the more likely it is to get noticed and promoted by the algorithms. Anyone remember the viral video of granny crossing the street safely?
Other CEOs have acknowledged these dangers and are making efforts to mitigate the harms. Even TikTok, a Chinese company, says it is working on changing its algorithms. Pinterest recently took the extreme step of blocking all vaccine-related searches until it can find a long-term solution.
I have nothing against Mr. Zuckerberg personally, and I believe that when we get this right, social media can be a net positive force in the world. However, I don’t believe this can or will happen under Mr. Zuckerberg’s stewardship. There must be a reason why, of all the Big Tech companies, Meta has by far the longest list of “insiders-turned-critics”.
Peter Drucker, the management guru, famously said that “culture eats strategy for breakfast”, and this is fundamentally the issue at Meta.
It took Microsoft over a decade, two CEO changes and a federal antitrust investigation to change its toxic “rank and yank” culture. Similarly, it was not until Mr. Kalanick was forced out of Uber by powerful venture investors that the company was able to expunge its cut-throat, chauvinistic, frat-boy culture.
Mr. Zuckerberg holds an absolute majority of Meta’s voting shares: thanks to the company’s dual-class share structure, he retains majority control in any shareholder vote. What that means is, as John Webster observed in The Duchess of Malfi, “Usually goodness flows, but if it is poisoned near the head, death spreads throughout the entire fountain.”
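For readers unfamiliar with dual-class structures, a small sketch shows how they convert a minority economic stake into majority voting control. Meta’s Class B shares carry 10 votes each against one for Class A; the ownership fractions below are hypothetical, chosen only to show the break-even point:

```python
# Toy illustration of dual-class voting arithmetic (hypothetical
# ownership fractions; Meta's Class B shares carry 10 votes each).
def voting_power(founder_fraction: float, votes_per_b: int = 10) -> float:
    """Founder holds `founder_fraction` of all shares as Class B;
    everyone else holds Class A (one vote per share)."""
    founder_votes = founder_fraction * votes_per_b
    other_votes = 1.0 - founder_fraction
    return founder_votes / (founder_votes + other_votes)

for pct in (0.05, 0.10, 0.15):
    print(f"{pct:.0%} of shares -> {voting_power(pct):.0%} of votes")
# 5% of shares  -> 34% of votes
# 10% of shares -> 53% of votes
# 15% of shares -> 64% of votes
```

In other words, with 10-to-1 voting shares, a founder needs barely a tenth of the equity to control every shareholder vote.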
I am wholeheartedly a capitalist and make no bones about the fact that capitalism, even with its many flaws, is the only system that has proven successful in lifting millions out of poverty. However, a few private companies should never have this much power to disseminate the world’s news and information through black boxes.
Meta has the power to manipulate the minds of people on a hitherto unimaginable scale. Between Facebook, Instagram, WhatsApp and Messenger, one company and one man control the flow of critical information for more than half the earth’s population.
With great power comes great responsibility, and as long as a reckless, irresponsible and dishonest leader like Mr. Zuckerberg is at the helm, that power will continue to be used irresponsibly.