

Monday, July 11, 2022

The Case Against Facebook (now Meta)

(Image: OMG News Today)

The primary issue with Facebook is not their platforms or algorithms - those can be fixed. It is their leadership.

The company was born of deception after Mr. Zuckerberg reneged on a contractual agreement he had made with his Harvard classmates, who had approached him with an idea for a social networking platform and sought his help to build it. In 2009, Mr. Zuckerberg paid his ex-classmates around $65 million to settle the resulting lawsuit.

I am not suggesting Mr. Zuckerberg set out to build a hate-filled platform, but I am saying that his lack of ethics and his megalomania combined with a desire to blindly maximize profits make Meta’s platforms uniquely dangerous.

The reality is even worse.

It is not just that Mr. Zuckerberg and Ms. Sandberg have chosen to ignore real-world harms. In a bid to make their service more addictive to users, they have actively designed and conducted experiments to find new ways to manipulate emotions.

Independent studies have shown that a large majority of health news shared on Facebook is fake or misleading, yet for a long time the platform embraced conspiracy theorists, anti-vaxxers and climate deniers because fake news drives more engagement than boring facts, which in turn translates to more advertising revenue for the company.

In my mind, the pivotal point for Facebook came in 2012 when General Motors, one of the largest advertisers in the U.S., decided to stop advertising on Facebook, saying that “paid ads on the site were having little impact on consumers' car purchases.”

GM’s announcement came one week before Facebook’s IPO and raised some uncomfortable questions, not only about the company’s ability to maintain its 88% revenue growth from the prior year, but also about its astronomical valuation, one based entirely on an ad-driven revenue model.

At the time, Anant Sundaram of the Tuck School of Business at Dartmouth noted that the average price-to-earnings (P/E) ratio for the majority of US companies over the last one hundred years had been around 15, but Facebook’s P/E ratio was 100.

He added that “it would take Facebook 100 years to generate enough profits to pay for itself” and that investors seemed to be betting the company’s profits would “double, and then double again, and then double again — within the next few years”. He summed up the challenge by saying that to succeed Facebook would “need to attract 10 percent of all advertising dollars spent on the planet…”
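To make Mr. Sundaram’s arithmetic concrete, here is a quick back-of-the-envelope sketch (illustrative round numbers only, taken from the figures quoted above):

```python
# Back-of-the-envelope: what a P/E ratio of 100 implies.
# A P/E of 100 means the share price equals 100 years of current
# annual earnings; the historical average multiple is about 15.

pe_facebook = 100.0
pe_historical = 15.0

# How many times must profits double (price held fixed) before the
# effective multiple falls to the historical norm? Each doubling of
# profits halves the P/E.
doublings = 0
implied_pe = pe_facebook
while implied_pe > pe_historical:
    implied_pe /= 2
    doublings += 1

print(doublings)   # 3 doublings: 100 -> 50 -> 25 -> 12.5
print(implied_pe)  # 12.5, finally below the historical ~15
```

Three doublings is exactly the “double, and then double again, and then double again” that investors were implicitly betting on.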

When you combine this unrealistic growth expectation with an unscrupulous founder, the result is what Frances Haugen described as a company that has, “over and over again, shown it chooses profit over safety”.

We know that the data analytics firm which briefly worked with Trump’s election team in 2016, Cambridge Analytica, legally bought and harvested the personal data of 50 million Facebook users (and their friends). They then used this data to try to influence and manipulate voting behaviour.

While this was the first time most people became aware of real-world dangers and the cost of giving away personal information for “free”, the red flags around Mr. Zuckerberg and Ms. Sandberg’s business decisions had been apparent for many years prior.

In 2013 a tech consultant revealed that Facebook collected content that people typed on the site but erased and never actually posted. The company’s argument justifying this intrusive data collection was that Facebook could better understand their users if they knew their “self-censored” thoughts.

In 2014 the NYTimes reported that Facebook was manipulating people’s newsfeeds, showing overwhelmingly negative or positive posts. In effect, they were using people as lab rats in a “psychological study to examine how emotions can be spread on social media”. At the time the lead researcher at Facebook, Adam Kramer, posted a public apology which has since disappeared.

Facebook never informed users or sought their consent before making them part of this experiment, a precondition for any ethical research. After they were outed, Facebook argued that users had given “blanket consent to the company’s research as a condition of using the service”.

When Mr. Zuckerberg bought WhatsApp in 2014 he promised to protect user privacy. In fact, WhatsApp’s co-founder penned a blog post assuring users that “Respect for privacy is coded into our DNA…” and that they would continue to “know as little about you as possible…”. Less than two years later Mr. Zuckerberg went back on his word, mandating that WhatsApp share personal information with Facebook.

In 2015, Mr. Zuckerberg launched a seemingly altruistic initiative to provide free internet access to the poorest people in the world. This too turned out to be smoke and mirrors. Arguably, Mr. Zuckerberg’s real goal was to create a global monopoly for Facebook by building a walled-off internet.

The condition for the “free internet” was that Facebook would decide the basket of websites people could access. No other social networks were included and Google Search was also excluded. Mr. Zuckerberg likely wagered that if people’s primary experience on the internet was on Facebook, they would come to think of Facebook as the internet. You can read my piece on "How Facebook Can Fix".

Based on internal documents reviewed by the Wall Street Journal, we now know that many of these poor users ended up being collectively charged millions of dollars a month for their “free” internet via carrier data charges, due to “software problems” at Facebook.

In 2016, the Wall Street Journal discovered that Facebook was attempting to spread its tentacles into the personal lives of non-Facebook users by tracking them across the internet. Under the guise of showing people more targeted ads, their plan was “to collect information about all Internet users through 'like' buttons and pieces of code embedded on websites.”

The Wall Street Journal reported in 2018 that Facebook had been overinflating the average viewing time for video ads on its platform, by as much as 900 percent, for over a year.

The Financial Times separately reported, based on a newly unredacted filing from a 2018 lawsuit in California, that Facebook knowingly overestimated its “potential reach” metric for advertisers, largely by failing to correct for fake and duplicate accounts. The filing states that Facebook COO Sheryl Sandberg acknowledged problems with the metric in 2017, and that product manager Yaron Fidler proposed a fix that would correct the numbers, but the company allegedly refused to make the changes, arguing it would have a “significant” impact on revenue.

In 2018, a U.N. fact-finding mission pointed to the role of social media networks, and Facebook in particular, in fueling hate speech against the Rohingya minority in Myanmar. The report said that the “incitement to violence” was “rampant” and “unchecked.” The chair of the committee added that in Myanmar “social media is Facebook”, and “for most users [in Myanmar], Facebook is the internet.”

Independent research going back to 2004 has shown that social media detracts from healthy face-to-face relationships and reduces time spent on meaningful activities while increasing sedentary behavior. 

This can lead to internet addiction, which in turn erodes self-esteem through negative comparisons people make on sites like Instagram. But skeptics claimed it was not clear whether “people with lower self-esteem are more likely to use social media, rather than social media causing lower self-esteem…”

In 2017, two academic researchers conducted a rigorous longitudinal study and published the results in the American Journal of Epidemiology, definitively answering this question. 

Their findings concluded that using Facebook was “consistently detrimental to mental health” and that both “liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.”

In 2018, another comprehensive study, by the University of Pennsylvania, confirmed a direct link between social-media usage and depression and loneliness, and connected Facebook, Snapchat, and Instagram use to decreased well-being.

You might ask: if all social media is harmful, why single out Meta (formerly Facebook)? It’s a valid question. I offer a few reasons why we need to start with Meta.

First, no other social platform comes close to matching Meta’s global reach and scale.

As of Q3 2021, Facebook had more than 2.89 billion monthly active users, and Instagram and WhatsApp had crossed 2 billion each. TikTok is the only other social platform with more than one billion users. Compare this with fewer than 400 million people on Twitter, 478 million on Pinterest, and 514 million on Snapchat.

Meta owns three of the four largest social networks on earth, which means Mr. Zuckerberg alone has the power to control and manipulate vital news, daily information and communication flow for more than half the planet’s population.

Second, consider that in many countries, Facebook’s platforms are not just dominant but they are the primary mode of communication for people. In India, around 340 million people use Facebook, and 400 million use WhatsApp's messaging service to communicate daily.

The world’s largest democracy has become a case-study in the real-world dangers of one company having unchecked power to impact people’s daily lives with uncontrolled content that is fueled by opaque algorithms.

In 2019, documents leaked to the Associated Press revealed that a Facebook employee created a dummy account to test how its algorithms affect new Indian users on their platform. The results shocked the company’s own staff.

In less than three weeks the test account’s newsfeed turned into a cesspool of fake news, vitriol and incendiary images and videos. Bloomberg reports that there were “graphic photos of beheadings, doctored images of Indian air strikes against Pakistan and jingoistic scenes of violence”. In documents released by Frances Haugen, a staffer wrote in a 46-page internal report, “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life.”

The reason this test was significant is because it was designed to focus exclusively on Facebook’s algorithms recommending content for the test user and not friends, family or others on the platform. 

Additional documents reviewed by the Associated Press show that Facebook had been aware of this problem for years, and even flagged India as one of its most “at risk countries” in the world, but struggled to do anything to limit the spread of vitriol in their largest and fastest growing market.

The other problem highlighted by Facebook’s test was that because the majority of posts were in Hindi, their content moderation algorithms were not able to detect them. Compounding this challenge is the fact that Indians also use different blends of Hindi, including something called Hinglish, a mix of Hindi and English words that no algorithm can be trained to decipher because it is often made up phonetically as people type.

Consider that in India alone there are 22 official languages and dozens more dialects, and that globally there are over 7,000 languages spoken, not including dialects. As of 2019, Facebook supported 111 languages, but translations for community guidelines and content moderation existed in only 41 languages.

In essence, Meta’s public pledges to improve content moderation algorithms and hire thousands more human moderators will not solve this problem. According to internal documents reviewed by CNN, Facebook's own researchers stated that the company is not in a position to effectively address hate speech and misinformation content in languages other than English.

Another internal study, reviewed by the Washington Post, found that between 2017 and 2019 Facebook’s ranking algorithm gave “five times more weight” to posts that users responded to with an “angry” reaction than to those that evoked other reactions, such as “like”. The newspaper concluded that such posts, while more engaging, were far more likely to include “misinformation, toxicity and low quality news”.

The bottom line is that Mr. Zuckerberg and Ms. Sandberg have shown time and again that they have no real intention of reducing the vitriol and misinformation on their platform. The closest thing we have to a smoking gun pointing to the fact that they prioritise engagement over well-being is a 2011 internal email from Ms. Sandberg, when Facebook was preparing to take on Google’s new social network, Google+.

The email exchange is included as part of the evidence in an antitrust case filed by 46 US state attorneys general, the District of Columbia and Guam. In it, Ms. Sandberg writes “For the first time, we have real competition and consumers have real choice…”

At the time the company was planning to remove users’ ability to untag themselves in photos. But given the competitive situation, it was decided internally to hold off on making changes “…until the direct competitive comparisons begin to die down.” The suit argues this is proof that Facebook preserves user privacy when it faces external threats, but degrades it when those threats dissipate.

In late 2021, after seeing alarming signs of deteriorating mental health among youth, the U.S. Surgeon General conducted a national study. His report noted that one of the factors contributing to the mental health crisis is that “social media companies were maximizing time spent, not time well spent.”

The report was prompted by an alarming rise in teen emergency room visits for suicide attempts. Among adolescent girls, suicide attempts surged 51% in early 2021 compared with the same period in 2019.

The Surgeon General’s findings are supported by another 2021 study, which found that non-educational screen time for teenagers doubled during the pandemic, increasing from an average of 3.8 hours to 7.7 hours a day. The researchers directly associated increased screen time with adverse health outcomes, including weight gain and increased stress.

We also now have hard evidence, based on a Washington Post and ProPublica investigation, that groups on Facebook played a key role in spreading misinformation and false narratives between Election Day and the January 6th siege on the US Capitol.  The investigation found at least 650,000 posts questioning the legitimacy of Mr. Biden’s victory, with many posts “calling for executions and other political violence”.

An exasperated Facebook employee wrote on Jan 6th, on an internal forum, “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time, and we shouldn’t be surprised it’s now out of control.”

The fact is that any other company faced with so much internal and external evidence of its harm to society, and particularly to young children, might seriously take stock and reconsider its business model. However, Meta under Mr. Zuckerberg and Ms. Sandberg has demonstrated that it has no real intention of doing so.

Sure, they continue to offer cosmetic changes, but these do nothing to solve the underlying problems. Take, for example, Facebook’s creation of an independent oversight board. With 20 members, this committee can only review a tiny subset of issues, and only after the damage has been done. Not surprisingly, reporters have found that Meta has been less than honest with its own oversight board.

Even now, Meta’s leadership refuses to take any responsibility. Andrew Bosworth, soon to be their new CTO, recently told Axios that “society” was responsible for misinformation. He said, “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing.”

This is not surprising, since Mr. Zuckerberg told employees not to apologise. On the company’s earnings call after Ms. Haugen’s revelations, he said that this was a “coordinated effort to selectively use leaked documents to paint a false picture of our company”.

If Mr. Zuckerberg has nothing to hide, one wonders why, in the weeks following Ms. Haugen’s disclosures, Meta imposed new rules to limit internal access to “research discussions on topics, including mental health and radicalization” and researchers were told “to submit work on sensitive topics for review by company lawyers.”

Over the years, Mr. Zuckerberg has publicly called on lawmakers to regulate social media platforms. In 2019, he penned an op-ed in the Washington Post, saying “I believe we need a more active role for governments and regulators” and adding “Lawmakers often tell me we have too much power over speech, and frankly I agree.” He asked Congress to regulate important online issues like free speech, harmful content, election integrity, privacy and data portability.

From an honest broker this might seem like a reasonable request, but this is Mr. Zuckerberg we are talking about.

Aside from the deep partisan divisions that forestall any meaningful legislation being enacted by Congress, Mr. Zuckerberg is aware that half the US Senate is 65 years or older. The current 117th Congress is the oldest in two decades. The average age of senators is 63.9 and the average age of house members is 58.3. We have twenty-one senators who are between the ages of 70 and 80.

In addition, there is a skills gap within Congress. Of the current 535 voting members and 6 non-voting delegates, only 11 (10 in the House and 1 in the Senate) have an engineering degree or technical background.

Mr. Zuckerberg is still not taking any chances and has been quietly spending millions to build a powerful D.C. lobbying arm. Over the last decade Big Tech firms have become the dominant lobbying group in Washington, overtaking Big Oil and Big Tobacco. 

Meta, which was not among the top eight spenders in 2017, has become the largest individual lobbyist, along with Amazon. Between 2018 and 2020, Facebook increased its lobbying spend by a whopping 56%.

In 2020, after lawmakers began to increase scrutiny of tech companies, Meta spent more on lobbying than all the other Big Tech firms. More recently in the quarter ending September 2021, after the whistleblower Ms. Haugen came forward, they nearly outspent the entire D.C. industry on lobbying.

Their goal, it would seem, is to overwhelm the small handful of lawmakers who understand the complexities of social media and technology, by ensuring that they are outgunned and outvoted. To achieve this, Meta’s army of lobbyists routinely wine, dine, woo and whisper in the ears of the majority of lawmakers.

The Wall Street Journal reported that the day after Ms. Haugen went public, Meta’s lobbying arm went to work.

First they called lawmakers and advocacy groups on the right, telling them that Ms. Haugen was trying to help Democrats. Next they reached out to Democratic lawmakers to say that Republicans were focussed on “the company’s decision to ban expressions of support for Kyle Rittenhouse”, the teenager who killed two people during unrest in Kenosha, Wisconsin.

Both Republicans and Democrats familiar with the company’s outreach told the WSJ that Meta's goal was clearly to sow discord along partisan lines and muddy the waters so the two parties would not reach consensus on tough new rules governing social media companies, and Meta in particular.

We know that social media has adverse effects because the algorithms are designed with monetisation in mind. The more time you spend on these platforms, the more opportunities to advertise. As a result, harassment, manipulation and misinformation are rife in an environment where gaining followers and increasing likes is dependent on getting noticed. 

With the volume of noise and clutter on these platforms today, the more controversial, vitriolic and outrageous a post, the more likely it is to get noticed and promoted by the algorithms. Anyone remember the viral video of granny crossing the street safely?

Other CEOs have acknowledged these dangers and are making efforts to mitigate adverse impacts. Even TikTok, a Chinese company, says it is working on changing its algorithms. Pinterest recently took the extreme step of blocking all vaccine-related searches until it can find a long-term solution.

I have nothing against Mr. Zuckerberg personally, and believe that when we get this right, social media can be a net positive force in the world. However, I don’t believe this can or will happen under Mr. Zuckerberg’s stewardship. There must be a reason why, of all the Big Tech companies, Meta has by far the longest list of “insiders-turned-critics.”

Peter Drucker, the management guru, famously said “Culture eats strategy for breakfast”, and this is fundamentally the issue at Meta.

It took Microsoft over a decade, two CEO changes and a federal antitrust investigation to change its toxic ‘rank and yank’ culture. Similarly, it was not until Mr. Kalanick was forced out of Uber by powerful venture investors that the company was able to expunge its cutthroat, chauvinistic frat-boy culture.

Thanks to Meta’s dual-class share structure, Mr. Zuckerberg holds an absolute majority of the company’s voting shares and retains control of any shareholder vote. What that means is, as John Webster noted in The Duchess of Malfi, “Usually goodness flows, but if it is poisoned near the head, death spreads throughout the entire fountain.”

I am wholeheartedly a capitalist and make no bones about the fact that it is the only system, even with its many flaws, that has proven successful in lifting millions out of poverty. However, a few private companies should never have this much power to disseminate the world’s news and information through black boxes.

Meta has the power to manipulate the minds of people on a hitherto unimaginable scale. Between Facebook, Instagram, WhatsApp and Messenger, one company and one man control the flow of critical information for more than half the earth’s population.

With great power comes great responsibility, and as long as a reckless, irresponsible and dishonest leader like Mr. Zuckerberg is at the helm, that power will continue to be used irresponsibly.

Wednesday, March 21, 2018

Facebook and Division by Data in the Digital Age


“The world is now awash in data and we can see consumers in a lot clearer ways.”
Max Levchin (PayPal co-founder)

There was a time not too long ago when people from all walks of life gathered around the proverbial water cooler in offices, places of worship, community centers, schools, local sporting events or watering holes. This ritual was underpinned by a shared experience based on a national or local conversation or a cultural artifact like a popular new book, advertisement or TV show that everyone had recently experienced.

It was not that people gathered around and sang Kumbaya, but that we brought a variety of viewpoints to the same event. I remember such gatherings being a melting pot of diverse perspectives and passionate opinions: some we vehemently agreed with and others we disagreed with, equally vehemently. But irrespective of where we stood on an issue, we all walked away without animosity and with a perspective we would not otherwise have had.

I am not suggesting that we left with changed minds, or that we were competing to bring others around to our point of view. But by listening, discussing and accepting that there are different reactions to exactly the same content, we built empathy and, I believe, opened our minds in the long run; and because these exchanges were face-to-face, they were also civil and respectful.

The internet, with its ability to turn the planet into a virtual global square, was meant to be the ultimate water cooler and bring us even closer together through diverse and shared experiences on a scale unimaginable before, but the opposite has transpired.

In country after country, social media feeds and discussion forums are filled with disagreement and hate. Once respected members of society like journalists, academics and scholars are engaging in shouting matches on TV screens, while family members are unfriending each other on social media. Research shows that this generation is more lonely and unhappy than any before it.

Nobody seems willing to entertain or discuss a point of view slightly different from their own. We have lost the ability for nuanced conversation and seem only to find comfort in absolutism. And we have eroded our ability to empathise with those who do not share our finite and inflexible worldviews.

It’s as if we have all stopped talking to each other, and now only talk at each other. What happened?

To begin with, it is true that we no longer reside in neighborhoods populated with a broad mix of people from different walks of life. Increasingly we live, work and socialize only with people of similar income and educational backgrounds. The majority of educated urbanites have long stopped attending places of worship or congregating in local centers where they might still fraternize with a wider cross-section of society and viewpoints.

Even online we have retreated into echo chambers and digital fortresses filled with similarly-minded people, and our social rituals have been replaced with impersonal digital ones. We chat with friends on WhatsApp, visit grandma on Skype and share all significant milestones with extended family through email and social media.

While it is true that income and educational segregation have been partly responsible for our growing divide, I believe that digital targeting technology, invented by the advertising and social media industry, along with the growing volume and sophistication of the data being used, has contributed to our loss of empathy, inability to compromise and increasing vitriol. Not only are massive amounts of personal data being accumulated, but it is being used to divide people into groups and to manipulate behaviour.

Every advertiser and marketer has always wanted to connect with customers on a more personal level, but until recently it was never possible to reach them on a one-to-one basis. The sophistication of digital technology allows companies to monitor every keystroke, eye movement, voice command, even physical movement, and, more worryingly, they are now able to put it all together to create a startlingly granular and deeply accurate view of our daily lives, habits and motivations at an individual level.

Like most innovations, this type of data accumulation was done to target products and deliver personalised content, so that people would no longer waste time looking at diaper ads when they wanted to buy shoes. The idea was to accumulate so much data about each individual that marketers could get so precise they would always show the right ad, with the right product message, or the right piece of content, at the very moment we were looking for it.

Sounds great in theory, but nobody considered the dangerous and unintended consequences of such sophisticated tracking and predictive algorithms that now power every website, internet service and mobile app. Or the ability to use it for things other than selling us shoes and diapers.

What started as an advertising tool has now grown into an information arms race, with numerous companies accumulating more and more personal data on each of us without any transparency or independent, third-party oversight. People do not have the ability to opt out, and nobody has a clear idea of how this data is being used or with whom it is being shared.

Granted, most advertisers still use personal data to sell more shoes or diapers, but because the use of this technology has proliferated far beyond marketing and media and is used by virtually every industry and by governments, it has greatly increased the potential for information to fall into the wrong hands, and to be used to manipulate and influence behaviour of individuals and groups.

We need look no further than the 2016 US election. We know the effectiveness with which state-sponsored Russian actors used ad-targeting technology on platforms like Facebook, Google, Twitter and other sites to target, test and fine-tune messages that spread various bits of misinformation. Cambridge Analytica, the data analytics firm that briefly worked with Trump’s election team, legally bought and harvested personal data of 50 million Facebook users (and their friends) from an academic who had built a Facebook app, to influence and manipulate voting behaviour.

It is important to understand just how sophisticated targeting technology is today. Anyone can accurately target the 38-year-old, baseball-loving, Democrat-voting, Budweiser-drinking Nike shoe collector on the Upper East Side of Manhattan, as well as their grandma in Bhopal, India. The targeting is both granular and precise.

In addition, you can exclude people by age, ethnicity, religious belief or political affiliation, thereby ensuring the efficacy of your message among only like-minded people. I could also ensure that the message I show grandma is not even seen by her neighbours, even when they are all on the same page of the same website or watching the same TV show (known as addressable TV).

This is what I refer to as division by data, when data is used to segment and sub-segment every section of the population, with each segment further refined with more granular data until it gets down to an individual level based on which algorithms decide “what” to show people.

What this means is that what I see on my Facebook newsfeed is not what my wife, my neighbour or colleague sees. With addressable TV, companies can show different ads to different people in the same area code and building while they are watching the same programs. The same is true of our Twitter feed, news, iTunes and Netflix recommendations and even Google search.

Ask a liberal and a conservative friend to type in the exact same search query, e.g. global warming, on their respective computers and see how different the results and ‘facts’ they get are. I urge every skeptic to read this article about an experiment conducted by Dr. Epstein, a senior research psychologist at the American Institute for Behavioural Research and Technology: “Epstein conducted five experiments in two countries to find that biased rankings in search results can shift the opinions of undecided voters. If Google tweaks its algorithm to show more positive search results for a candidate, the searcher may form a more positive opinion of that candidate.”

Consider that Facebook has become the primary “source of news for 44% of Americans” and now boasts over two billion active users worldwide and Google is what the world relies on to search for news, information and facts, and both are driven by this underlying ‘personalisation and targeting’ philosophy that I call division by data. Think about the fact that the greatest source of influence on human minds is still the power of persuasion - one that is driven by repeated exposure to the same message.

This is where the notion of using data obsessively to personalise everything down to the individual level has gone horribly wrong. By treating human beings like objects and dividing them into ever smaller groups that only see content, information, news and even ‘facts’ uniquely tailored and created based on their preferences and biases, we might manage to increase ad sales, but we also increase societal divisions by reducing the ability to find common ground on issues.

In the digital age, we have effectively replaced our real and proverbial water coolers with bottles of water that can be dynamically flavoured to meet individual tastes. With this hyper-precise targeting, we have ensured that we no longer have the shared experiences human beings have relied on for centuries to build the bonds that lead to diversity of thought and open-mindedness.

This is a solvable problem, but until we find ways to restore our water coolers in the digital age and craft sensible new regulations on data privacy, sharing and targeting, we will continue to weaken every democracy and hamper our shared progress. 

Thursday, November 30, 2017

Why You Should #DeleteFacebook from Your Phone


“Happiness is not something ready made. It comes from your own actions.”
Dalai Lama

Larry Page, the CEO of Google’s parent company Alphabet, famously told the New York Times that when he looks to purchase a company, he asks whether it passes the toothbrush test: is it something you will use once or twice a day, and does it make your life better?

At first glance the statement seems perfectly innocuous, almost noble even, when you think about technology making your life better, but the reality is far more pernicious. We are taught to brush our teeth from early childhood in order to preserve our gums and keep our teeth healthy; for internet companies, the equivalent is finding ways to ensure we become fixated on and completely addicted to their products.

This type of addiction to Facebook, Google, Amazon, LinkedIn or Netflix has nothing to do with making us healthier or better human beings; in fact it is having exactly the opposite effect on our brains, mental well-being and state of happiness.

Merriam-Webster defines addiction as:
1: the quality or state of being addicted
2: compulsive need for and use of a habit-forming substance (such as heroin, nicotine, or alcohol) characterized by tolerance and by well-defined physiological symptoms upon withdrawal; broadly: persistent compulsive use of a substance known by the user to be harmful

There is a reason Silicon Valley does not use traditional business metrics like earnings, sales or revenue to measure an acquisition target; instead, it looks at ‘stickiness’, i.e. addiction, measured by how often users interact with the app each day.

Until now we have thought about harmful addictions primarily in terms of substance abuse, because it is easy to see the visible, physical effects on someone addicted to drugs, alcohol or sex; with the internet and social media, the addiction is more disarming and harder to see. We can all agree that most addictions are bad for human beings, and scientists and researchers are only now starting to see the detrimental effects smartphones are having on our intelligence, our social skills and our happiness.

I understand that this is a hard thing to get your head around, because few people can imagine navigating daily life without a smartphone. It is how we stay in touch with friends, share kids’ milestones with family, communicate with co-workers, stay on top of breaking news, search for answers and even solve complex work problems, as well as what we turn to for entertainment during commutes and down-time. Nobody is suggesting we power down our phones and move back into caves, but it is important to understand the harm of constant use without conscious boundaries.

A recent Wall Street Journal article cites a number of independent research studies reaching the same dangerous conclusion that the “integration of smartphones into daily life” appears to cause a “brain drain” that can diminish such vital mental skills as “learning, logical reasoning, abstract thought, problem solving, and creativity.”

To keep us addicted, each service needs to constantly invent new ways to get us to spend time within their apps, and to do it many times a day. This is how Facebook, BuzzFeed, Instagram, Reddit and every other similar service make money - the more often we use it, the more likely we are to see an ad, and thus the more valuable the service becomes to an advertiser.

There are only so many baby pictures and cat videos one can watch. After a while, the content vying for our attention needs to become ever more outrageous and sensational to keep commanding it. It is this vicious cycle, a race to become the most addictive, that is driving all their content into the gutter, as we saw with the mass proliferation of fake news across all news and social media platforms in the last US election.

People will argue that we have dealt with many captive and unhealthy mediums over the centuries and mankind has not only survived, but thrived, and this is true; but unlike cinema, radio, television or computers, we have never before been able to immerse ourselves in these things twenty-four hours a day, seven days a week and have them within our reach from the moment we wake up to when we sleep.

The same WSJ article explains this fundamental difference with a mobile phone in this way: “Imagine combining a mailbox, a newspaper, a TV, a radio, a photo album, a public library and a boisterous party attended by everyone you know, and then compressing them all into a single, small, radiant object. That is what a smartphone represents to us. No wonder we can’t take our minds off it.”

Another study, published in the American Journal of Epidemiology, found a direct connection between increased Facebook usage and decreased well-being, with the research team noting that “well-being declines are also a matter of quantity of use rather than only quality of use.” Even if we were to argue that adults are generally more capable of dealing with this type of addiction, which the data says is not true, we must consider the devastating effect it is having on younger minds.

Jean M. Twenge, a professor of psychology at San Diego State University who has been studying generational differences for 25 years, recently wrote an article in The Atlantic on this issue. She found that “there is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.” She concludes that “there’s not a single exception. All screen activities are linked to less happiness, and all non-screen activities are linked to more happiness.”

I am not suggesting that Facebook, LinkedIn or Google are evil; in fact, in the grand scheme of life they have done much more good than bad. The issue is the frequency with which we engage with their apps, with our mobile phones tethered to us 24x7, and the incessant need to consume information via built-in alerts and notifications, which are designed to distract us from life and encroach on our minds in unhealthy ways.

I understand that it is not possible to live without Facebook, Google or a mobile phone today, but there is no reason why we need to be reachable by, and distracted by, these services twenty-four hours a day. My suggestion (and this is what I have done) is to delete Facebook from your phone, because it is the MOST distracting and harmful social platform of the lot, and then turn OFF notifications on all your other apps, barring maybe two or three news sites.

This way you will still have access to everything but will be in total command of when and where you do, and no longer be a slave to their alerts and notifications.

I promise you that you will be much happier and science says your mind will be much healthier.