Instagram on Thursday introduced a new set of safety features to fight bullying and other forms of online harassment.
The new features arrive as the social media platform faces criticism globally for becoming a hub of bullying, especially among teens and young adults.
The Facebook-owned application said it is making it easier to block multiple people at once and offering tools to restrict who can tag users, reports Yonhap news agency.
The new tools will allow Instagram users to restrict comments on their posts from a particular user without that user knowing, according to Instagram.
Restricted users will still be able to see their own comments on a post, Instagram said, but those comments will not be visible to anyone else.
The company also rolled out a new tool that lets users control which comments can appear on their posts, helping them guard against unwanted followers.
Instagram also said it uses both artificial intelligence (AI) technology and human reviewers to help control abusive content and spam on the platform.
Here's how the AI works:
Machine learning-based technology first detects content that violates the company's guidelines and flags it to a team of human reviewers, who decide whether it breaches Instagram's policies.
Another AI tool, set to roll out later this month, works by identifying words and phrases that have previously been reported as offensive, according to Instagram.
When the tool detects such language in a comment, it sends a prompt to the writer, giving them a chance to rework the comment before posting it.
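The flow described above can be sketched in a few lines of Python. This is a hypothetical, simplified illustration, not Instagram's actual system: the phrase list, function names, and prompt text are all placeholder assumptions standing in for the company's reported-phrase matching and pre-posting nudge.

```python
# Hypothetical sketch of the comment-screening flow described above:
# phrases previously reported as offensive are matched against a draft
# comment, and the writer is prompted to rework it before posting.
# The phrase list and messages are illustrative placeholders.

REPORTED_PHRASES = {"you are a loser", "nobody likes you"}

def flag_offensive(comment: str) -> bool:
    """Return True if the comment contains any previously reported phrase."""
    text = comment.lower()
    return any(phrase in text for phrase in REPORTED_PHRASES)

def review_before_posting(comment: str) -> str:
    """If a comment is flagged, nudge the writer to reword it first."""
    if flag_offensive(comment):
        return "Are you sure you want to post this? Consider rewording it."
    return "OK to post."
```

In practice a production system would use a trained classifier rather than a fixed phrase list, but the prompt-before-posting step works the same way.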
In an effort to prevent suicide and self-harm, Instagram said it does not show any hashtags related to such content.
Instagram said it has worked with various agencies around the world on suicide prevention, including signing a partnership with the Korea Suicide Prevention Center.