Twitter is experimenting with a new feature that will warn users before they post abusive or harmful content that may get reported.
“When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful,” Twitter’s official Twitter Support account posted on Tuesday.
The feature will prompt users to revise their replies if the text they have typed uses language that could be deemed “harmful.”
Other social media platforms have similar features that prompt users to change abusive replies. Facebook-owned photo and video sharing platform Instagram, for instance, shows users a warning that their content “looks similar to others that have been reported” when they type out abusive content.
Twitter has also recently been working on curbing abusive content on its platform. The micro-blogging platform had earlier introduced a feature that lets users limit replies to their tweets to prevent abuse.
In another update, the platform said that it was also experimenting with new layouts for replies to make its user interface easier to use for Twitterati.
“We’re also experimenting with placing like, Retweet, and reply icons behind a tap for replies. We’re trying this with a small group on iOS and web to see how it affects following and engaging with a convo,” it said.
The social media platform will roll out the feature to select Twitter users on iOS and the web.