

Monday, July 11, 2022

The Case Against Facebook (now Meta)

(Image: OMG News Today)


The primary issue with Facebook is not their platforms or algorithms - those can be fixed. It is their leadership.


The company was born of deception, after Mr. Zuckerberg reneged on a contractual agreement he made with his Harvard classmates. They had approached him with an idea for a social networking platform and sought his help to build it. In 2009, Mr. Zuckerberg paid his ex-classmates around $65 million to settle the resulting lawsuit.


I am not suggesting Mr. Zuckerberg set out to build a hate-filled platform, but I am saying that his lack of ethics and his megalomania combined with a desire to blindly maximize profits make Meta’s platforms uniquely dangerous.


The reality is even worse.


It is not just that Mr. Zuckerberg and Ms. Sandberg have chosen to ignore real-world harms. In a bid to make their service more addictive to users, they have actively designed and conducted experiments to find new ways to manipulate emotions.


Independent studies have shown that a large majority of health news shared on Facebook is fake or misleading, yet for a long time the platform embraced conspiracy theorists, anti-vaxxers and climate deniers, because fake news drives more engagement than boring facts, which in turn translates to more advertising revenue for the company.


In my mind, the pivotal point for Facebook came in 2012 when General Motors, one of the largest advertisers in the U.S., decided to stop advertising on Facebook, saying that “paid ads on the site were having little impact on consumers' car purchases.”


GM’s announcement came one week before Facebook’s IPO and raised some uncomfortable questions, not only about the company’s ability to maintain its 88% revenue growth from the prior year, but also about its astronomical valuation, one based entirely on an ad-driven revenue model.


At the time, Anant Sundaram, of the Tuck School of Business at Dartmouth, noted that the average price-to-earnings (P/E) ratio for the majority of US companies over the last one hundred years had been around 15, while Facebook’s P/E ratio was 100.


He added that “it would take Facebook 100 years to generate enough profits to pay for itself” and it seemed like investors were betting the company's profits would “double, and then double again, and then double again — within the next few years”. He summed up the challenge saying that to succeed Facebook would “need to attract 10 percent of all advertising dollars spent on the planet…”
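Mr. Sundaram’s arithmetic can be sketched in a few lines. The P/E of 100 is from his analysis quoted above; the growth rates below are hypothetical, chosen only to illustrate how aggressively profits would have to compound to shorten that hundred-year payback.

```python
# Illustrative only: how a price-to-earnings (P/E) ratio maps to a
# "years to pay for itself" figure under different profit-growth rates.
# The P/E of 100 is from the article; the growth rates are hypothetical.

def years_to_recoup(pe: float, annual_growth: float) -> int:
    """Years of cumulative earnings needed to equal the share price,
    starting from earnings of 1 and a price of `pe`."""
    earnings, cumulative, years = 1.0, 0.0, 0
    while cumulative < pe:
        cumulative += earnings
        earnings *= 1 + annual_growth
        years += 1
    return years

print(years_to_recoup(100, 0.0))   # flat earnings: 100 years
print(years_to_recoup(100, 0.26))  # earnings doubling roughly every 3 years: 15 years
```

Even with profits doubling every three years, a sustained feat almost no company manages, the payback period only falls to about fifteen years, which is the scale of the bet investors were making.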

When you combine this unrealistic growth expectation with an unscrupulous founder, the result is what Frances Haugen described as a company that has, “over and over again, shown it chooses profit over safety”.


We know that the data analytics firm which briefly worked with Trump’s election team in 2016, Cambridge Analytica, legally bought and harvested the personal data of 50 million Facebook users (and their friends). They then used this data to try to influence and manipulate voting behaviour.


While this was the first time most people became aware of real-world dangers and the cost of giving away personal information for “free”, the red flags around Mr. Zuckerberg and Ms. Sandberg’s business decisions had been apparent for many years prior.


In 2013 a tech consultant revealed that Facebook collected content that people typed on the site but erased and never actually posted. The company’s argument justifying this intrusive data collection was that Facebook could better understand their users if they knew their “self-censored” thoughts.


In 2014 the New York Times reported that Facebook was manipulating people’s newsfeeds, showing overwhelmingly negative or positive posts. In effect, they were using people as lab rats in a “psychological study to examine how emotions can be spread on social media”. At the time the lead researcher at Facebook, Adam Kramer, posted a public apology, which has since disappeared.


In conducting this experiment, Facebook did not inform users or seek their consent before making them part of it, a precondition for any ethical research. After the study came to light, Facebook argued that users had given “blanket consent to the company’s research as a condition of using the service”.


When Mr. Zuckerberg bought WhatsApp in 2014 he promised to protect user privacy. In fact, WhatsApp’s co-founder penned a blog post assuring users that “Respect for privacy is coded into our DNA…” and that they would continue to “know as little about you as possible…”. Less than two years later Mr. Zuckerberg went back on his word, mandating that WhatsApp share personal information with Facebook.


In 2015, Mr. Zuckerberg launched a seemingly altruistic initiative to provide free internet access to the poorest people in the world, called internet.org. This too turned out to be smoke and mirrors. Arguably, Mr. Zuckerberg’s real goal was to create a global monopoly for Facebook by building a walled-off internet. 


The condition for the “free internet” was that Facebook would decide the basket of websites people could access. No other social networks were included and Google Search was also excluded. Mr. Zuckerberg likely wagered that if people’s primary experience on the internet was on Facebook, they would come to think of Facebook as the internet. You can read my piece on "How Facebook Can Fix Internet.org".


Based on internal documents reviewed by the Wall Street Journal, we now know that these users collectively ended up being charged millions of dollars a month in carrier data fees for their “free” internet, due to “software problems” at Facebook.


In 2016, the Wall Street Journal discovered that Facebook was attempting to spread its tentacles into the personal lives of non-Facebook users by tracking them across the internet. Under the guise of showing people more targeted ads, their plan was “to collect information about all Internet users through 'like' buttons and pieces of code embedded on websites.”


The Wall Street Journal reported in 2018 that Facebook had been inflating the average viewing time for video ads on its platform, by as much as 900 percent, for over a year. An unredacted filing from a 2018 lawsuit in California claims that Sheryl Sandberg was informed of the issue in 2017, along with a proposed fix, but the company refused to make the changes, saying it would have a “significant” impact on revenue.


The Financial Times, reporting on the same newly unredacted filing from the 2018 California lawsuit, noted that the suit claims Facebook knowingly overestimated its “potential reach” metric for advertisers, largely by failing to correct for fake and duplicate accounts. According to the filing, Ms. Sandberg acknowledged problems with the metric in 2017 and product manager Yaron Fidler proposed a fix that would have corrected the numbers, but the company allegedly refused to make the changes, arguing they would have a “significant” impact on revenue.


In 2018, a U.N. fact-finding mission pointed to the role of social media networks, and Facebook in particular, in fueling hate speech against the Rohingya minority in Myanmar. The report said that the “incitement to violence” was “rampant” and “unchecked.” The chair of the committee added that in Myanmar “social media is Facebook”, and “for most users [in Myanmar], Facebook is the internet.”

Independent research going back to 2004 has shown that social media detracts from healthy face-to-face relationships and reduces time spent on meaningful activities while increasing sedentary behavior. 


This can lead to internet addiction, which in turn erodes self-esteem through negative comparisons people make on sites like Instagram. But skeptics claimed it was not clear whether “people with lower self-esteem are more likely to use social media, rather than social media causing lower self-esteem…”

In 2017, two academic researchers conducted a rigorous longitudinal study and published the results in the American Journal of Epidemiology, definitively answering this question. 


Their findings concluded that using Facebook was “consistently detrimental to mental health” and that both “liking others’ content and clicking links significantly predicted a subsequent reduction in self-reported physical health, mental health, and life satisfaction.”


In 2018, another comprehensive study by the University of Pennsylvania confirmed that there was a direct link between social-media usage and depression and loneliness, and connected Facebook, Snapchat, and Instagram use to decreased well-being.


You might ask: if all social media is harmful, why single out Meta (formerly Facebook)? It’s a valid question. I offer a few reasons why we need to start with Meta.


First, no other social platform comes close to matching Meta’s global reach and scale.


As of Q3 2021, Facebook had more than 2.89 billion monthly active users, and Instagram and WhatsApp had crossed 2 billion users each. TikTok is the only other social platform with more than one billion users. Compare this with fewer than 400 million people on Twitter, 478 million on Pinterest, and 514 million on Snapchat.


Meta owns three of the four largest social networks on earth, which means Mr. Zuckerberg alone has the power to control and manipulate vital news, daily information and communication flow for more than half the planet’s population.


Second, consider that in many countries Facebook’s platforms are not just dominant but the primary mode of communication. In India, around 340 million people use Facebook, and 400 million use WhatsApp to communicate daily.


The world’s largest democracy has become a case-study in the real-world dangers of one company having unchecked power to impact people’s daily lives with uncontrolled content that is fueled by opaque algorithms.


In 2019, documents leaked to the Associated Press revealed that a Facebook employee created a dummy account to test how its algorithms affect new Indian users on their platform. The results shocked the company’s own staff.


In less than three weeks the test account’s newsfeed turned into a cesspool of fake news, vitriol and incendiary images and videos. Bloomberg reported that there were “graphic photos of beheadings, doctored images of Indian air strikes against Pakistan and jingoistic scenes of violence”. In documents released by Frances Haugen, a staffer wrote in a 46-page internal report, “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life.”


This test was significant because it was designed to isolate content recommended by Facebook’s algorithms for the test user, excluding posts from friends, family or others on the platform.


Additional documents reviewed by the Associated Press show that Facebook had been aware of this problem for years, and even flagged India as one of its most “at risk countries” in the world, but struggled to do anything to limit the spread of vitriol in their largest and fastest growing market.


The other problem highlighted by Facebook’s test was that because the majority of posts were in Hindi, its content-moderation algorithms could not parse them. Compounding the challenge, Indians also use different blends of Hindi and English, including something called Hinglish, a mix that is hard for any algorithm to decipher because it is often made up phonetically as people type.


Consider that in India alone there are 22 official languages and dozens more dialects, and globally there are over 7,000 languages spoken, not including dialects. As of 2019, Facebook supported 111 languages, but translations of its community guidelines and content-moderation rules existed in only 41.

In essence, Meta’s public pledges to improve content moderation algorithms and hire thousands more human moderators will not solve this problem. According to internal documents reviewed by CNN, Facebook's own researchers stated that the company is not in a position to effectively address hate speech and misinformation content in languages other than English.


Another internal study, reviewed by the Washington Post, found that between 2017 and 2019 Facebook’s ranking algorithm gave five times more weight to posts that users responded to with an “angry” reaction than to those that evoked other reactions, such as “like”. The newspaper concluded that such posts, while more engaging, were far more likely to include “misinformation, toxicity and low quality news”.


The bottom line is that Mr. Zuckerberg and Ms. Sandberg have shown time and again that they have no real intention of reducing the vitriol and misinformation on their platform. The closest thing we have to a smoking gun pointing to the fact that they prioritise engagement over well-being is a 2011 internal email from Ms. Sandberg, when Facebook was preparing to take on Google’s new social network, Google+.


The email exchange is included as evidence in an antitrust case filed by 46 US state attorneys general, the District of Columbia and Guam. In it, Ms. Sandberg writes, “For the first time, we have real competition and consumers have real choice…”


At the time, the company was planning to remove users’ ability to untag themselves in photos, but given the competitive situation it decided internally to hold off on making changes “…until the direct competitive comparisons begin to die down.” The suit argues this is proof that Facebook preserves user privacy when it faces external threats, and degrades it when those threats dissipate.


In late 2021, after seeing alarming signs of deteriorating mental health among youth, the U.S. Surgeon General issued a national advisory. His report cited, as one factor contributing to the mental health crisis, that “social media companies were maximizing time spent, not time well spent.”


The report was prompted by an alarming rise in teen emergency room visits for suicide attempts. Among adolescent girls suicide attempts surged 51% in early 2021, compared with the same period in 2019.


The Surgeon General’s findings are supported by another 2021 study, which found that non-educational screen time for teenagers doubled during the pandemic, rising from an average of 3.8 hours a day to 7.7. The researchers directly associated increased screen time with adverse health outcomes, including weight gain and increased stress.


We also now have hard evidence, based on a Washington Post and ProPublica investigation, that groups on Facebook played a key role in spreading misinformation and false narratives between Election Day and the January 6th siege on the US Capitol.  The investigation found at least 650,000 posts questioning the legitimacy of Mr. Biden’s victory, with many posts “calling for executions and other political violence”.


An exasperated Facebook employee wrote on an internal forum on January 6th: “All due respect, but haven’t we had enough time to figure out how to manage discourse without enabling violence? We’ve been fueling this fire for a long time, and we shouldn’t be surprised it’s now out of control.”


The fact is that any other company faced with so much internal and external evidence of the harm it causes society, and particularly young children, might seriously take stock and reconsider its business model. However, Meta under Mr. Zuckerberg and Ms. Sandberg has demonstrated that it has no real intention of doing so.


Sure, they continue to offer cosmetic changes, but these do nothing to solve the underlying problems. Take, for example, Facebook’s creation of an independent oversight board. With 20 members, this committee can review only a tiny subset of issues, and only after the damage has been done. Not surprisingly, reporters have found that Meta has been less than honest with its own oversight board.


Even now, Meta’s leadership refuses to take any responsibility. Andrew Bosworth, soon to be their new CTO, recently told Axios that “society” was responsible for misinformation. He said, “Individual humans are the ones who choose to believe or not believe a thing. They are the ones who choose to share or not share a thing.”


This is not surprising, since Mr. Zuckerberg told employees not to apologise. On the company’s earnings call after Ms. Haugen’s revelations, he said this was a “coordinated effort to selectively use leaked documents to paint a false picture of our company”.


If Mr. Zuckerberg has nothing to hide, one wonders why, in the weeks following Ms. Haugen’s disclosures, Meta imposed new rules to limit internal access to “research discussions on topics, including mental health and radicalization” and researchers were told “to submit work on sensitive topics for review by company lawyers.”


Over the years, Mr. Zuckerberg has publicly called on lawmakers to regulate social media platforms. In 2019, he penned an op-ed in the Washington Post, saying “I believe we need a more active role for governments and regulators” and adding “Lawmakers often tell me we have too much power over speech, and frankly I agree.” He asked Congress to regulate important online issues like free speech, harmful content, election integrity, privacy and data portability.


From an honest broker this might seem like a reasonable request, but this is Mr. Zuckerberg we are talking about.


Aside from the deep partisan divisions that forestall any meaningful legislation being enacted by Congress, Mr. Zuckerberg is aware that half the US Senate is 65 years or older. The current 117th Congress is the oldest in two decades. The average age of senators is 63.9 and the average age of house members is 58.3. We have twenty-one senators who are between the ages of 70 and 80.


In addition, there exists a skill gap within Congress. Only 11 members (10 in the House and 1 in the Senate) of the current 535 voting members and 6 non-voting delegates have an engineering degree or technical background.


Mr. Zuckerberg is still not taking any chances and has been quietly spending millions to build a powerful D.C. lobbying arm. Over the last decade Big Tech firms have become the dominant lobbying group in Washington, overtaking Big Oil and Big Tobacco. 


Meta, which was not even among the top eight spenders in 2017, has become the largest individual lobbyist, along with Amazon. Between 2018 and 2020, Facebook increased its lobbying spend by a whopping 56%.


In 2020, after lawmakers began to increase scrutiny of tech companies, Meta spent more on lobbying than all the other Big Tech firms. More recently in the quarter ending September 2021, after the whistleblower Ms. Haugen came forward, they nearly outspent the entire D.C. industry on lobbying.


Their goal, it would seem, is to overwhelm the small handful of lawmakers who understand the complexities of social media and technology, by ensuring that they are outgunned and outvoted. To achieve this, Meta’s army of lobbyists routinely wines, dines, woos and whispers in the ears of the majority of lawmakers.


The Wall Street Journal reported that the day after Ms. Haugen went public, Meta’s lobbying arm went to work.


First, they called lawmakers and advocacy groups on the right, telling them that Ms. Haugen was trying to help Democrats. Next, they reached out to Democratic lawmakers to say that Republicans were focussed on “the company’s decision to ban expressions of support for Kyle Rittenhouse”, the teenager who killed two people during unrest in Kenosha, Wisconsin.


Both Republicans and Democrats familiar with the company’s outreach told the WSJ that Meta's goal was clearly to sow discord along partisan lines and muddy the waters so the two parties would not reach consensus on tough new rules governing social media companies, and Meta in particular.


We know that social media has adverse effects because the algorithms are designed with monetisation in mind. The more time you spend on these platforms, the more opportunities to advertise. As a result, harassment, manipulation and misinformation are rife in an environment where gaining followers and increasing likes is dependent on getting noticed. 


With the volume of noise and clutter on these platforms today, the more controversial, vitriolic and outrageous a post, the more likely it is to get noticed and promoted by the algorithms. Anyone remember the viral video of granny crossing the street safely?


Other CEOs have acknowledged these dangers and are making efforts to mitigate adverse impacts. Even TikTok, a Chinese-owned company, says it is working on changing its algorithms. Pinterest recently took the extreme step of blocking all vaccine-related searches until it can find a long-term solution.


I have nothing against Mr. Zuckerberg personally, and believe that when we get this right, social media can be a net positive force in the world. However, I don’t believe this can or will happen under Mr. Zuckerberg’s stewardship. There must be a reason why, of all the Big Tech companies, Meta has by far the longest list of “insiders-turned-critics”.


Peter Drucker, the management guru, famously said “Culture eats strategy for breakfast”, and this is fundamentally the issue at Meta.


It took Microsoft over a decade, two CEO changes and a Federal antitrust investigation to change its toxic ‘rank and yank’ culture. Similarly, it was not until Mr. Kalanick was forced out of Uber by powerful venture investors that the company was able to expunge its cutthroat, chauvinistic, frat-boy culture.


Thanks to Meta’s dual-class share structure, Mr. Zuckerberg holds an absolute majority of the company’s voting shares and retains control in any shareholder vote. What that means is, as John Webster wrote in The Duchess of Malfi, “Usually goodness flows, but if it is poisoned near the head, death spreads throughout the entire fountain.”


I am wholeheartedly a capitalist and make no bones about the fact that it is the only system, even with its many flaws, that has proven successful in lifting millions out of poverty. However, a few private companies should never have this much power to disseminate the world’s news and information through black boxes.


Meta has the power to manipulate the minds of people on a hitherto unimaginable scale. Between Facebook, Instagram, WhatsApp and Messenger, one company and one man control the flow of critical information for more than half the earth’s population.


With great power comes great responsibility, and as long as a reckless, irresponsible and dishonest leader like Mr. Zuckerberg is at the helm, that power will continue to be used irresponsibly.

Monday, March 16, 2020

COVID SIDE OF LIFE. Day 1: Job Today. Gone Tomorrow.

Pandemic Log: Monday, 16th March 2020


On Friday the 13th I was a contract employee at a global agency, finishing up a new business pitch. We had just been informed that everyone was being asked to work from home, starting that day.


That evening before I left the office I was told that I was being put on a new project. It was to start the following week. I ventured into the weekend grateful that my gig was being extended and that I would have a paycheque a while longer, during this uncertain and turbulent period.


Cut to Monday morning, I emailed my boss to discuss the new project and asked about my new contract. He suggested I speak with the HR head as they were responsible for sending my contract.


I contacted HR and they told me they would need to get final sign-off from the Chief Financial Officer (CFO) and would then get the renewed contract back to me.


All good.


About twenty minutes later I got an email from the head of HR saying the CFO said that because numerous clients had cancelled or postponed ongoing projects, the company was suddenly stuck with excess staff capacity and would be unable to take on an external resource.


Not good.


The world was still pretty calm when I left the office on Friday evening.


Yes, people were preparing to work from home and getting used to a strange new normal, but as the weekend progressed things got dire.


The number of cases in New York State continued to rise. Panic started to set in among state and city officials, as the Federal government woke up to the fact that it needed to deal with this crisis on a war footing. It could not be business as usual.


The stock market crashed; again.


Oil prices plummeted; again.


States started mandating that all restaurants, cafes and bars close.


Gatherings of 500 people, allowed on Friday, were capped at no more than 10 by Monday.


Primary elections in a number of states were postponed.


Lines at grocery stores continued to grow; even as their shelves continued to empty.


I had a gig on Friday. Everything changed the following Monday.

 

Wednesday, March 21, 2018

Facebook and Division by Data in the Digital Age

(Image: theodysseyonline.com)

“The world is now awash in data and we can see consumers in a lot clearer ways.”
Max Levchin (PayPal co-founder)

There was a time not too long ago when people from all walks of life gathered around the proverbial water cooler in offices, places of worship, community centers, schools, local sporting events or watering holes. This ritual was underpinned by a shared experience based on a national or local conversation or a cultural artifact like a popular new book, advertisement or TV show that everyone had recently experienced.

It was not that people gathered around and sang Kumbaya, but that we brought a variety of viewpoints relating to the same event. I remember such gatherings being a melting pot of diverse perspectives, and passionate opinions; some that we vehemently agreed with and others we disagreed with, equally vehemently. But irrespective of where we stood on an issue, we all walked away without animosity and with a perspective we would not have otherwise had.

I am not suggesting that we left with changed minds or that we were competing to bring others around to our point of view, but that by listening, discussing and accepting that people react differently to exactly the same content, we built empathy and, I believe, opened minds in the long run. Being face-to-face, these exchanges were also civil and respectful.

The internet, with its ability to turn the planet into a virtual global square, was meant to be the ultimate water cooler and bring us even closer together through diverse and shared experiences on a scale unimaginable before, but the opposite has transpired.

In country after country, social media feeds and discussion forums are filled with disagreement and hate. Once respected members of society like journalists, academics and scholars are engaging in shouting matches on TV screens, while family members are unfriending each other on social media. Research shows that this generation is more lonely and unhappy than any before it.

Nobody seems willing to entertain or discuss a point of view slightly different from their own. We have lost the ability for nuanced conversation and seem only to find comfort in absolutism. And we have eroded our ability to empathise with those who do not share our finite and inflexible worldviews.

It’s as if we have all stopped talking to each other, and now only talk at each other. What happened?

To begin with, it is true that we no longer reside in neighborhoods populated with a broad mix of people from different walks of life. Increasingly we live, work and socialize only with people of similar income and educational backgrounds. The majority of educated urbanites have long stopped attending places of worship or congregating in local centers where they might still fraternize with a wider cross-section of society and viewpoints.

Even online we have retreated into echo chambers and digital fortresses filled with similarly-minded people, and our social rituals have been replaced with impersonal digital ones. We chat with friends on WhatsApp, visit grandma on Skype and share all significant milestones with extended family through email and social media.

While it is true that income and educational segregation are partly responsible for our growing divide, I believe that digital targeting technology, invented by the advertising and social media industries, along with the growing sophistication with which personal data is used, has contributed to our loss of empathy, inability to compromise and increasing vitriol. Not only are massive amounts of personal data being accumulated, but they are being used to divide people into groups and to manipulate behaviour.

Every advertiser and marketer has always wanted to connect with customers on a more personal level, but it was never possible to talk to us on a one-to-one basis until recently. The sophistication of digital technology allows companies to monitor every keystroke, eye movement, voice command, even physical movement, and, more worryingly, they are now able to put it all together to create a startlingly granular and deeply accurate view of our daily lives, habits and motivations on an individual level.

Like most innovations, this type of data accumulation began with benign aims: targeting products and delivering personalised content, so people would no longer waste time looking at diaper ads when they wanted to buy shoes. The idea was to accumulate so much data about each individual that marketers could always show the right ad, with the right product message or piece of content, at the very moment we were looking for it.

Sounds great in theory, but nobody considered the dangerous and unintended consequences of such sophisticated tracking and predictive algorithms that now power every website, internet service and mobile app. Or the ability to use it for things other than selling us shoes and diapers.

What started as an advertising tool has now grown into an information arms race, with numerous companies accumulating more and more personal data on each of us without any transparency or independent third-party oversight. People do not have the ability to opt out, and nobody has a clear idea of how this data is being used or with whom it is being shared.

Granted, most advertisers still use personal data to sell more shoes or diapers, but because the use of this technology has proliferated far beyond marketing and media and is used by virtually every industry and by governments, it has greatly increased the potential for information to fall into the wrong hands, and to be used to manipulate and influence behaviour of individuals and groups.

We need look no further than the 2016 US election. We know the effectiveness with which state-sponsored Russian actors used ad-targeting technology on platforms like Facebook, Google, Twitter and other sites to target, test and fine-tune messages that spread various bits of misinformation. Cambridge Analytica, the data analytics firm that briefly worked with Trump’s election team, legally bought and harvested personal data of 50 million Facebook users (and their friends) from an academic who had built a Facebook app, to influence and manipulate voting behaviour.

It is important to understand just how sophisticated targeting technology is today. Anyone can accurately target the 38-year-old, baseball-loving, Democrat-voting, Budweiser-drinking Nike-shoe collector on the Upper East Side of Manhattan, as well as his grandma in Bhopal, India. The targeting is both granular and precise.

In addition, you can exclude people by age, ethnicity, religious belief or political affiliation, thereby ensuring your message circulates only among like-minded people. I could also ensure that the message I show grandma is never seen by her neighbours, even when they are all on the same page of the same website or watching the same TV show (known as addressable TV).

This is what I refer to as division by data: using data to segment and sub-segment every section of the population, refining each segment with ever more granular data until it reaches the individual level, at which point algorithms decide “what” to show each person.
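To make concrete how this kind of segmentation narrows a population down to individuals, here is a minimal sketch of the logic. The profile fields, rule format and function names are hypothetical illustrations for this essay, not any real ad platform's API.

```python
# A hypothetical sketch of "division by data": include rules select an
# audience, exclude rules carve away everyone else. All fields invented.

def matches(profile, include, exclude):
    """True if a profile satisfies every include rule and no exclude rule."""
    if any(profile.get(k) != v for k, v in include.items()):
        return False
    if any(profile.get(k) == v for k, v in exclude.items()):
        return False
    return True

def build_audience(profiles, include, exclude=None):
    """Filter a whole population down to the individuals who see an ad."""
    exclude = exclude or {}
    return [p for p in profiles if matches(p, include, exclude)]

population = [
    {"name": "collector", "age_band": "35-44", "interest": "baseball",
     "party": "Democrat", "area": "Upper East Side"},
    {"name": "neighbour", "age_band": "35-44", "interest": "baseball",
     "party": "Republican", "area": "Upper East Side"},
    {"name": "grandma", "age_band": "65+", "interest": "knitting",
     "party": "Democrat", "area": "Bhopal"},
]

# Target only like-minded 35-44 year olds; the neighbour in the same
# building, looking at the same page, never sees the message.
audience = build_audience(
    population,
    include={"age_band": "35-44"},
    exclude={"party": "Republican"},
)
```

Real targeting systems run rules like these over thousands of attributes per person, but the principle is the same: each added attribute splits the audience into ever smaller, more homogeneous segments.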

What this means is that what I see in my Facebook newsfeed is not what my wife, my neighbour or my colleague sees. With addressable TV, companies can show different ads to different people in the same area code, even the same building, while they watch the same programs. The same is true of our Twitter feeds, our news, our iTunes and Netflix recommendations, and even Google search.

Ask a liberal friend and a conservative friend to type the exact same search query, e.g. global warming, on their respective computers, and compare how different the results, and the ‘facts’, they get are. I urge every skeptic to read this article about an experiment conducted by Dr. Epstein, a senior research psychologist at the American Institute for Behavioral Research and Technology: “Epstein conducted five experiments in two countries to find that biased rankings in search results can shift the opinions of undecided voters. If Google tweaks its algorithm to show more positive search results for a candidate, the searcher may form a more positive opinion of that candidate.”

Consider that Facebook has become the primary “source of news for 44% of Americans” and now boasts over two billion active users worldwide, that Google is what the world relies on to search for news, information and facts, and that both are driven by the underlying ‘personalisation and targeting’ philosophy I call division by data. Consider, too, that the greatest source of influence on human minds is still the power of persuasion, one driven by repeated exposure to the same message.

This is where the notion of obsessively using data to personalise everything down to the individual level has gone horribly wrong. By treating human beings like objects and dividing them into ever smaller groups that see only content, information, news and even ‘facts’ uniquely tailored to their preferences and biases, we might manage to increase ad sales, but we also deepen societal divisions by reducing our ability to find common ground on issues.

In the digital age, we have effectively replaced our real and proverbial water coolers with bottles of water dynamically flavoured to individual tastes. With this hyper-precise targeting, we have ensured that we no longer have the shared experiences human beings have relied on for centuries to build bonds, and with them the diversity of thought and open-mindedness those bonds foster.

This is a solvable problem, but until we find ways to restore our water coolers in the digital age and craft sensible new regulations on data privacy, sharing and targeting, we will continue to weaken every democracy and hamper our shared progress. 

Saturday, September 10, 2016

Facebook, Fiefdoms, Privacy and the Potential for Abuse

(Image credit: churchm.ag)
 
“All human beings have three lives: public, private, and secret.” 
Gabriel García Márquez

Let’s start by asking ourselves a simple question: what value does Facebook provide to society?

I can already hear people saying 'wait a minute' and starting to argue that Facebook informs, entertains, connects, and allows us to stay in touch with family and friends. Yes, Facebook is a social sharing platform that connects people. However, unlike a Warby Parker or a Unilever, it does not make or sell any tangible products that improve our health or well-being.

It is true that the same can be argued about eBay, Alibaba and Airbnb; they don’t manufacture goods, but merely facilitate transactions between buyers and sellers. However, eBay and Alibaba are online malls where third parties sell real products, and Airbnb’s service fills a real-world need for accommodation.

With Facebook there is one fundamental difference - you and I are the product.

Without user-generated content and our friends and family engaging with it, Facebook makes and offers nothing. It is entirely powered by our routines, my stories, your creativity, and our combined curation of the third-party news and articles we post. Facebook is powered by you and me.

Their entire revenue model is based on mining, effectively stealing (through an opaque privacy policy) and selling our personal information to advertisers. Arguably, they provide no meaningful benefit to society. As for connecting us, we already did all of this, through letters, movies, television, travel, newspapers and phone calls, long before Facebook existed.

Technology has certainly made it easier to connect, and as a result we have all become lazier about making the effort to stay in touch. But let’s be clear: Facebook has invented nothing new in terms of how we share, build relationships or create emotional bonds.

Consider that the non-technological version of the online platform existed for millennia in the form of Roman marketplaces, and even modern-day malls, where people broke bread, socialised and had the ability to shop from multiple vendors, all under one roof.

Facebook says they offer a forum where we can express ourselves freely, and in saying so they pretend to empower us. They claim to be a democratic and open platform designed “to give people the power to share and make the world more open and connected” (source: Facebook Mission), when in reality, behind the scenes, they are doing exactly the opposite.

They have been caught manipulating our newsfeeds by showing overwhelmingly negative or positive posts, using us as lab rats in “a psychological study to examine how emotions can be spread on social media.” (Source: New York Times article).

More recently, an employee claimed that they “routinely censor right-wing content…” (Source: PC Mag article). Another tech consultant who worked there disclosed that “Facebook collects all content that is typed into its website, even if it is not posted…” (Source: Information Age article).

More worryingly, earlier this year the Wall Street Journal reported that Facebook was starting to spread its tentacles into the personal lives of non-Facebook users, going well beyond the four walls of its own platform by tracking people all over the web under the guise of showing more targeted ads. “Now Facebook plans to collect information about all Internet users, through ‘like’ buttons and other pieces of code present on Web pages across the Internet.” (Source: Wall Street Journal).

On the heels of this announcement, we found out that WhatsApp, which Facebook bought in 2014, is going to start sharing personal user information, including your phone number, contact list and status messages, with Facebook (Source: Scroll India article). This came after WhatsApp had unequivocally promised to protect users' privacy when it agreed to be purchased by Facebook. You can read WhatsApp co-founder Jan Koum’s blog post and 2014 promise that “Respect for your privacy is coded into our DNA…”

Facebook has also announced that they are going to crack down on ad blockers and clickbait headlines to make room for more advertising. They intend to do this by “making its advertisements indistinguishable from the status updates, photo uploads, and other content that appears in your news feed” (Source: PC Mag article). They justified this change with the now all-too-familiar refrain that because Facebook is a free service, they rely on advertising to keep going.

A free service that claims unlimited ownership of, and rights to use, every status update, family picture and personal video. A free service that believes it has the right to mine personal data, track people around the web, and then sell all that information to third parties (in non-transparent ways). A free service that stores personal data “…for as long as it is necessary to provide products and services to you and others…” and defines the information it collects in the broadest terms possible: “Things you do and information you provide. Things others do and information they provide. Your networks and connections. Information about payments. Device information. Information from websites and apps that use our Services. Information from third-party partners. Facebook companies.” (Source: Facebook Privacy Policy). Free indeed!

I understand that we need to give up some privacy in a digitally connected world, particularly where we expect things for free. But there also need to be rules around what is permissible and what crosses the line. Beyond privacy, the greater issue is that so much information concentrated in the hands of one or two companies makes conditions ripe for abuse.

The point is not whether Mark Zuckerberg is trustworthy or whether he truly has noble intentions, nor am I suggesting that Facebook is an evil corporation run by a hobbit in a hoodie. But Facebook has already been caught abusing its power numerous times, from manipulating the newsfeed to using sophisticated algorithms to pick, choose and limit the news, articles, politics, entertainment and information we are able to see and share.

Like every other global corporation in history, they are not immune to the temptation to abuse power in the pursuit of growth, expansion and profits. Their misleading, altruistically packaged attempt to create a walled-off internet in the developing world, with Facebook as a monopoly, is yet another example of business intentions gone totally awry. You can read my piece about it here: “How Facebook Can Fix Internet.org”.

Think about the fact that, with 1.7 billion active users (a number that continues to grow), they have greater influence over our worldview than any government or news organisation has ever had. They hold more personal information and wield greater power over people than the Soviet Union did at the height of communism. This should concern all of us.

The point is that no single company should hold this kind of power and influence over so many people. It will not end well; human beings are corrupted by absolute power. We cannot change the nature of the beast.