Twitter has launched a new behaviour monitoring system that will hide content from accounts identified as trying to distort or disrupt public conversation on the site.
The site said the new system was designed to deal with ‘troll-like’ behaviour from a minority of users who are frequently reported as abusive but are, in some cases, not in breach of Twitter’s rules; rather than removing their posts, it relegates them.
As part of plans to improve the health of discussion on the platform, Twitter said it will use signals such as whether an account has a confirmed email address, or whether it heavily tweets at accounts that do not follow it.
The company says this will help it to spot and demote disruptive content that appears in conversations and search results.
The firm’s vice president for trust and safety, Del Harvey, told the Press Association: ‘The reason that we’re doing this is that when we actually looked at where abuse reports were coming from, the majority of reports come in from those two places – from conversations and search.
‘But what we’ve learned as we’ve dug into that a little bit more is that there were actually this tiny number of accounts – less than one per cent of accounts – that were making up the majority of reports that we were receiving for abuse.
‘Within that, some of those were breaking the rules and we take action on those, but there’s a lot that aren’t and within that grouping there is a number of accounts that are engaged in behaviours that really distort and detract from the experience that people have in those areas.’
Twitter said the majority of the new signals it will use are not visible externally.
They include behaviours such as one person signing up for multiple accounts simultaneously, or activity that Twitter says might indicate a planned co-ordinated attack on other users.
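The article does not disclose how Twitter combines these signals, but the idea of weighting several behavioural indicators and demoting accounts that cross a threshold can be sketched in a few lines. The sketch below is purely illustrative: the signal names, weights, and threshold are all hypothetical assumptions, not Twitter’s actual system.

```python
# Purely illustrative sketch of weighted behavioural-signal scoring.
# Signal names, weights, and the threshold are hypothetical; Twitter's
# real system is not public.

SIGNAL_WEIGHTS = {
    "unconfirmed_email": 2,         # no confirmed email address
    "mass_unrequited_mentions": 3,  # heavily tweets accounts that don't follow back
    "simultaneous_signups": 3,      # one person creating several accounts at once
    "coordinated_activity": 4,      # possible planned attack on other users
}

def troll_score(account):
    """Sum the weights of the signals this account exhibits."""
    return sum(w for signal, w in SIGNAL_WEIGHTS.items() if account.get(signal))

def demoted(accounts, threshold=4):
    """Return names of accounts whose content would be relegated.

    Their tweets would stay on the site but appear behind
    'Show more replies' rather than being removed.
    """
    return [a["name"] for a in accounts if troll_score(a) >= threshold]
```

For example, an account whose only flag is an unconfirmed email address scores below the (assumed) threshold and stays visible, while one that also mass-mentions non-followers would be demoted.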
‘We started thinking how could we use a lot of the systems that we’ve been improving and building up over the past couple of years to improve people’s experience really across the board, without putting the burden on the individual to do all the work to clean things up’, said Ms Harvey.
The company said early tests of the system had shown a reduction in the number of abuse reports being filed, indicating users were having a ‘better experience’ because of the reorganisation.
‘These signals will now be considered in how we organise and present content in communal areas like conversation and search’, said a blog post by Ms Harvey and Twitter product manager for health David Gasca on the new system.
‘Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on ‘Show more replies’ or choose to see everything in your search setting.
‘The result is that people contributing to the healthy conversation will be more visible in conversations and search.’
WHAT ARE TWITTER’S POLICIES?
Graphic violence and adult content
The company does not allow people to post graphic violence.
This could be any form of gory media related to death, serious injury, violence, or surgical procedures.
Adult content – that includes media that is pornographic and/or may be intended to cause sexual arousal – is also banned.
Some forms of graphic violence and adult content are allowed in Tweets marked as containing sensitive media.
However, these images are not allowed in profile or header images.
Twitter may sometimes require users to remove excessively graphic violence out of respect for the deceased and their families.
The platform is not allowed to be used to further illegal activities.
Users are not allowed to use badges, including but not limited to the ‘promoted’ or ‘verified’ Twitter badges, unless provided by Twitter.
Accounts using unauthorised badges as part of their profile photos, header photos, display names, or in any way that falsely implies affiliation with Twitter or authorisation from Twitter to display these badges, may be suspended.
Users may not buy or sell Twitter usernames.
Username squatting – when people take the name of a trademarked company or a celebrity – is not allowed.
Twitter also has the right to remove accounts that are inactive for more than six months.
Context matters when evaluating abusive behaviour and determining appropriate enforcement actions.
Factors Twitter may take into consideration include whether the behaviour is targeted at an individual; whether the report has been filed by the target of the abuse or by a bystander; and whether the behaviour is newsworthy and in the legitimate public interest.
Users may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people.
This includes, but is not limited to, threatening or promoting terrorism.
Users may not promote or encourage suicide or self-harm. Users may not promote child sexual exploitation.
Users may not direct abuse at someone by sending unwanted sexual content, objectifying them in a sexually explicit manner, or otherwise engaging in sexual misconduct.
Users may not use hateful images or symbols in their profile image or profile header.
Users may not publish or post other people’s private information without their express authorisation and permission.
Users may not post or share intimate photos or videos of someone that were produced or distributed without their consent.
Users may not threaten to expose someone’s private information or intimate media.
Earlier this year Twitter announced plans to focus on the health of conversation taking place on the platform.
Chief executive Jack Dorsey admitted the site had been misused in the past but said it was committed to ‘help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress’.
Twitter has been regularly criticised for its approach to handling abuse and offensive content, something Mr Dorsey has said it must continue to improve.
Ms Harvey said Twitter was discussing the technology now as a way of being transparent with users.
‘We’re talking about this system much earlier than we normally would talk about something like this,’ she said.
‘Part of the reason for that is because we want to be open and transparent about what we’re doing and where we’re focusing our efforts, so that we do get more feedback from people as they have this experience, so we do get more insight from a broader group of people who we get research from, or the more limited targeted feedback that we often get.
‘Part of our whole reason for talking about this, even though it’s day one of it, is so that we can continue to get that feedback and make sure we are delivering the experience that people want.’