
The META MESS
Niranjan Bhombe
29 November 2021

Recently Facebook rebranded itself as Meta in a bid to spotlight Mark Zuckerberg’s work on virtual reality and the creation of a metaverse. But before Facebook leaps into a virtual reality social media platform, it has to address a number of flaws in its current platforms, such as Instagram, Facebook and WhatsApp. Facebook appears to have detailed knowledge that its platforms are riddled with problems that cause harm, and often only the company can fully fathom the extent of that harm, as documented in a number of research reports written by Facebook employees. The Wall Street Journal unearthed internal documents exposing how Facebook turned a blind eye to human trafficking, and how it watched haplessly as the Patriot Party, which instigated the violence at the US Capitol, gathered members on its platform. There have been many more occasions around the world where Facebook’s platforms were used to spread hatred and instigate violence.
In developing countries Facebook’s user base is huge and growing. Human traffickers in the Middle East used the site to lure women into abusive employment situations in which they were treated like slaves or were forced to perform sex work. Mexican drug cartels used the platform to recruit, train and pay hit men. Hate speech proliferated across Facebook’s platforms in Myanmar, and the company didn’t do enough to stop incitements to violence against the minority Rohingya population, who were victims of ethnic cleansing. An armed group in Ethiopia incited violence against ethnic minorities via Facebook. In developing countries Facebook is also used for selling organs, spreading pornography and suppressing political dissent. There are many examples of how Facebook’s negligence has cost people their lives, and more are detailed in the Wall Street Journal’s reporting on the Facebook Papers.
In many countries where the company operates, it has few or no employees who understand the local dialects needed to identify dangerous or criminal intent. For example, Arabic is spoken by millions of Facebook users, yet the company’s content reviewers in the Middle East fail to catch abusive and violent content. Facebook has also failed to provide the resources to develop automated systems, called classifiers, that weed out harmful content in developing countries; the artificial intelligence systems that form the backbone of Facebook’s enforcement don’t cover most of the languages spoken there. When problems have surfaced publicly, Facebook has said it addressed them by taking down offending posts, but it hasn’t fixed the systems that allowed offenders to repeat the bad behavior. Instead, priority is given to retaining users, helping business partners and at times placating authoritarian governments, whose support Facebook sometimes needs to operate within their borders, the documents show.
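To make the point concrete, here is a minimal, hypothetical sketch of what such a classifier involves: a statistical model trained on labeled example posts in a given language. The tiny dataset, model choice and library (scikit-learn) below are illustrative assumptions, not Facebook’s actual system; the takeaway is that without labeled training data in a language, posts in that language effectively go unscored.

# A minimal, hypothetical sketch of a harmful-content classifier (not
# Facebook's actual system): a model learns from labeled posts in one language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = harmful, 0 = benign.
posts = [
    "join us tonight and attack their neighborhood",
    "kidney for sale, quick cash, contact me",
    "happy birthday to my best friend",
    "what a beautiful sunset over the lake today",
]
labels = [1, 1, 0, 0]

# TF-IDF word features feeding a simple logistic-regression model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# The model only understands the language it was trained on. A post in an
# unsupported language or dialect produces features the model has never seen,
# so harmful content in that language passes through effectively unscored.
print(classifier.predict(["meet at the square and attack at dawn"]))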
Facebook treats harm in developing countries as “simply the cost of doing business” in those places, said Brian Boland, a former Facebook vice president who oversaw partnerships with internet providers in Africa and Asia before resigning at the end of last year. Facebook has focused its safety efforts on wealthier markets with powerful governments and media institutions, even though it has turned to poorer countries for user growth. The developing world has hundreds of millions more users than the US and Canada, and 90% of new users come from developing countries.
Facebook has tried to improve the scrutiny of its platforms by hiring contractors and asking employees to comb through posted content and flag or remove anything that doesn’t abide by its policies. In 2020, Facebook employees and contractors spent more than 3.2 million hours searching out and labeling or, in some cases, taking down information the company concluded was false or misleading, the documents show. Only 13% of those hours were spent on content from outside the U.S.
Bharat is also a developing country, and Facebook has a lot at stake there, with more than 300 million Facebook users and more than 400 million people on WhatsApp. The company said last year that it is investing $5.7 billion to expand its operations in Bharat and help boost the country’s nascent digital economy. Bharat’s 1.3 billion people have deep social and religious divisions that periodically erupt into fatal confrontations. People in Bharat speak more than 22 major languages, which makes content moderation challenging: the AI-powered systems aren’t adept across that variety of languages, and many users have limited digital literacy.
Facebook’s trillion-dollar business is built largely on its unique ability to keep users coming back, in part by maximizing the viral spread of posts that people will share and re-share. The issue with promoting viral content is that it sometimes fuels the spread of false information. In 2019, Facebook employees set up a test account posing as a female Indian user and merely followed pages and joined groups recommended by Facebook’s algorithm. They found that the account’s news feed became a constant barrage of polarizing nationalist content, misinformation, violence and gore, and the Facebook Watch service recommended a stream of soft porn.
Inflammatory content on Facebook spiked 300% above previous levels at times during the months following December 2019, a period in which religious protests swept Bharat, researchers wrote in a July 2020 report. Rumors and calls to violence spread particularly on Facebook’s WhatsApp messaging service in late February 2020, when communal violence in Delhi left 53 people dead, according to the report. Users from both Hindu and Muslim communities reported seeing a great deal of content that encourages violence, hatred and conflict.
These reports show that Facebook is privately aware that people in its largest market are targeted with inflammatory content, and that Indians say the company isn’t doing enough to protect them. The documents are part of an extensive array of internal Facebook communications that offer an unparalleled look at how its rules favor elites, how its algorithms breed discord, and how its services are used to incite violence and target vulnerable people in Bharat.
People in Bharat use WhatsApp extensively to spread falsified, inflammatory content. Some users have reported that if this continues without any intervention for another 10 years, WhatsApp will only be used for spreading fake news. Many Facebook employees have recommended that the company invest more resources in systems built specifically to detect and act on inflammatory content in Bharat. There is also a recommendation to create a bank of inflammatory content from Bharat so the company can better understand the posts users there are sharing and flag similar material.
The Government of Bharat has responded by threatening to jail employees of Twitter and Facebook if they don’t comply with takedown requests for posts that may incite violent protests and hatred. The Ministry of Electronics and Information Technology is also considering separate legislation for social media companies to curtail user harm, amid allegations that tech giant Meta (earlier known as Facebook) was promoting its subsidiary Instagram to children despite potential harm.
Mark Zuckerberg praised Bharat in December as a special and important country for Facebook Inc., saying that millions of people there use its platforms every day to stay in touch with family and friends. Yet Facebook’s, now Meta’s, response to protecting such a vast and inexperienced user base from the barrage of harmful content on its platforms has been meek. All over the world, Facebook is busier promoting its business and growing its user base than improving the quality of content on its platforms. It has sat through many hearings in the US Congress and hundreds of documented instances of its platforms being abused, and it still hasn’t fixed the fundamental issue of unrestricted sharing of information across all of them.