Top social-media companies, including Google, Facebook, Instagram, WhatsApp, Twitter and Koo, have removed over 110.88 million pieces of content and accounts in the three months since they began releasing monthly compliance reports under the new IT rules.

The majority of these posts and accounts were proactively flagged by the platforms using artificial intelligence and machine learning, while a smaller number were reported by users.

Facebook alone removed 95.13 million posts between May and August 2021, followed by its subsidiary Instagram with 7.09 million. Messaging app WhatsApp blocked 7.07 million accounts in the same period.

Google’s core content platform is YouTube, and the largest number of complaints the tech company received related to copyright issues. Between May and August, Google actioned 1.47 million pieces of content.

Twitter’s total stood at 117,021, including both posts removed and accounts suspended. The microblogging site’s Indian counterpart Koo removed 11,911 posts after reviewing 199,343 koos flagged by users and by its own engine.

Key categories

Across the platforms, key categories for reported posts included graphic sexual and violent content, spam, hate speech, and terrorism- and suicide-related content, among others, varying with the type of social-media platform.

Prasanth Sugathan, Legal Director, SFLC.in, told BusinessLine: “I think this data is definitely useful. We are getting to know more details about what kind of posts are being removed. This information is not available so granularly in the global transparency reports.

“If you track these reports on a periodic basis, you will be able to find trends and numbers on the types of posts that are increasing. Both the Indian and global transparency reports published by the social-media sites reveal a lot of information, even about government requests: what kind of data they are asking to be actioned, the number of accounts they have asked for information about, and so on.”

Independent internet security researcher Rajshekhar Rajaharia told BusinessLine that law enforcement bodies can use the data to keep track of crime rates on social media. “Based on which category has more crime, laws can be created or changed. This also makes the companies cautious about what data category they need to focus on to make their platform healthier,” said Rajaharia.

Facebook’s problem

While Facebook and its subsidiaries have reported the highest numbers, the social-media giant is currently under global scrutiny for allegedly unethical business practices around content moderation. According to whistleblower Frances Haugen’s research notes, in February 2019 Facebook set up a test account in India to check how its algorithms affected what people got to see in the country. Within three weeks, the test user’s home page was filled with fake and gory images of beheadings and violence. There were even doctored images of India’s air strikes against Pakistan and other jingoistic content.

Outside the US, Facebook’s largest markets currently include India, Brazil and Indonesia. In fact, India is one of the few developing countries where it has staff on the ground. The Wall Street Journal, in its ongoing investigative series on Facebook, reported on Saturday that during the religious protests in India in the months following December 2019, inflammatory content on Facebook spiked by 300 per cent compared with previous levels.

Rumours and calls to violence spread particularly widely through Facebook’s messaging app WhatsApp in late February 2020.

Srinivas Kodali, independent researcher and privacy rights activist, believes that the monthly data gathered under the new IT rules ultimately remains under the control of the social-media companies.

“This data can be used to define people’s choices and influence their political opinions. It could be used to manipulate people’s choices. Also, you are giving the social-media companies the choice to show what they want. It is not being done by an actual regulator. The decision-making lies with these private companies, and actual data can be censored,” he told BusinessLine.