
Facebook shut down 1.3 billion fake accounts in the last six months


Facebook is still struggling to curb the spread of spam, hate speech, violence and terrorism on its site. 

In its first quarterly Community Standards Enforcement Report, Facebook disclosed that it disabled 1.3 billion ‘fake accounts’ over the past two quarters, many of which had ‘the intent of spreading spam or conducting illicit activities such as scams’. 

The tech giant also revealed millions of standards violations that occurred in the six months leading up to March. 

These include hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts. 


Facebook released its first quarterly Community Standards Enforcement Report on Tuesday, saying its AI detection technology continues to improve, but has struggled with hate speech


Facebook acknowledged that its artificial intelligence detection technology ‘still doesn’t work that well,’ particularly when it comes to hate speech, and that it needs to be checked by human moderators. 

‘It’s important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what’s important and what works,’ said Guy Rosen, vice president of Product Management at Facebook, in a statement. 

‘…We have a lot of work still to do to prevent abuse,’ he added. 

WHAT KINDS OF POSTS AND PHOTOS HAS FACEBOOK TAKEN ACTION AGAINST?

Facebook published its first quarterly Community Standards Enforcement Report on Tuesday. 

In it, the firm detailed how its artificial intelligence tools have succeeded – and at times failed – at effectively removing violating content from the platform.  

Here are the kinds of content it took action on in the six months leading up to March: 

Violence 

Q4 2017: 1.2m pieces of content — 72% flagged automatically

Q1 2018: 3.4m pieces — 86% flagged automatically

Hate Speech 

Q4 2017: 1.6m pieces — 24% flagged automatically

Q1 2018: 2.5m pieces — 38% flagged automatically

Porn  

Q4 2017: 21m pieces — 94% flagged automatically

Q1 2018: 21m pieces — 96% flagged automatically

Terrorist Propaganda  

Q4 2017: 1.1m pieces — 97% flagged automatically

Q1 2018: 1.9m pieces — 99.5% flagged automatically

Spam 

Q4 2017: 727m pieces — 99.8% flagged automatically

Q1 2018: 836m pieces — 99.7% flagged automatically

Fake accounts  

Q4 2017: 694m accounts — 99% flagged automatically

Q1 2018: 583m accounts — 99% flagged automatically 

The firm has said previously that it plans to hire thousands more human moderators to ‘make Facebook safer for everyone’. 

Facebook moderated 2.5 million posts for violating hate speech rules, but only 38% of these were flagged by automation, which fails to interpret nuances like counter speech, self-referential comments or sarcasm.  

It took down or applied warning labels to 3.4 million pieces of violent content in the first quarter of 2018, a 183% increase on the 1.2 million pieces removed in the final quarter of 2017.  

Facebook said in a blog post that most of the action it takes against bad actors on its site is tied to spam and fake accounts. The firm disabled 1.3 billion fake accounts during the period


Almost 86% of violent content was found by the firm’s technology before it was reported by users. 

The increase in violent content may be tied to a flare-up in global conflict, such as the recent violence in Syria, Alex Schultz, Facebook’s vice president of data analytics, told reporters at a press briefing.

‘Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook,’ Schultz explained.  

Facebook moderated 2.5 million posts for violating hate speech rules, but only 38% of these were flagged by AI, which fails to interpret nuances like counter speech or sarcasm


Facebook has faced fierce criticism from governments and rights groups for failing to do enough to stem hate speech and prevent the service from being used to promote terrorism, stir sectarian conflict and broadcast acts including murder and suicide.

It uses both software and an army of moderators to take down text, pictures and videos that violate its rules.

Rosen said technology like artificial intelligence is still years from effectively detecting most bad content because context is so important.

‘Technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages,’ Rosen said.

Facebook CEO Mark Zuckerberg (pictured) has pledged to hire more human moderators to take down content that spreads hate speech, propaganda or terrorism


However, Facebook’s AI technology has significantly improved in its ability to spot and remove fake accounts before the public interacts with them. 

In the last six months, the firm’s AI tools found 98.5% of the fake accounts shut down by Facebook, and nearly 100% of spam.  

More than a quarter of the human race accesses Facebook, which now counts two billion monthly users.

Under pressure from several governments, Facebook has been beefing up its moderator ranks and hopes to reach 20,000 by the end of 2018.

‘Whether it’s spam, porn or fake accounts, we’re up against sophisticated adversaries who continually change tactics to circumvent our controls,’ Rosen said.

The update marks the first Community Standards report since Facebook was hit with a massive data privacy scandal earlier this year. 

Three million Facebook users had intimate details exposed in a new data protection scandal to hit the social media platform. The quiz app, called myPersonality, collected highly sensitive data, including psychometric test results


It was revealed in March that Cambridge Analytica, a research firm with ties to the Trump campaign, harvested as many as 87 million users’ data without their knowledge.   

On Monday, it was revealed that three million Facebook users had their most intimate details exposed.

A popular personality app failed to adequately protect the ‘anonymous’ data of participants, the latest in a string of security breaches.

The quiz, called myPersonality, collected highly sensitive data – including psychometric test results that reveal how neurotic or extrovert an individual may be.

Investigators found the information was poorly protected for four years and gaining access to it was relatively easy.

The myPersonality app has now been suspended, one of around 200 apps Facebook has removed from its social network.  


