Facebook has faced increased scrutiny during the past year over how it handles controversial issues such as user privacy, violent content and terrorist propaganda.
With the scrutiny have come questions about the role Facebook plays in society, the responsibilities it should uphold, and what changes it can make to be a safer space for users.
To provide more transparency about why Facebook makes the choices it does around some of the toughest challenges of our time, the social network, which is closing in on two billion users worldwide, has launched a new initiative called Hard Questions.
Vice President for Public Policy and Communications Elliot Schrage writes in a blog post that with the initiative, Facebook will open up about a variety of “complex subjects”. This means explaining its decision-making around these topics as well as exploring hard questions themselves.
Examples of the kinds of questions Facebook will address include how online platforms should prevent terrorists from spreading propaganda online, what should happen to a person’s online identity after they die, who gets to decide what’s controversial content or what’s deemed fake news, and how users of all ages can participate in social media in a safe way.
These are, indeed, hard questions, and Facebook says “sometimes we get it wrong” in the choices it makes.
The aim with the Hard Questions initiative is to at least give users a better understanding of how Facebook came to its decisions, and show it’s taking these issues seriously.
Facebook is inviting users to submit ideas on topics to discuss, as well as suggestions on what it can improve upon. If you have ideas or suggestions, send them to email@example.com.
Countering terrorism with tech and AI
Facebook’s first Hard Questions post went up shortly after its announcement and discusses how the company counters terrorism online.
Recent terrorist attacks have led to questions about what more tech companies, including Facebook, Google and Twitter, can do to prevent terrorists from spreading propaganda through their channels.
“Our stance is simple: There’s no place on Facebook for terrorism,” state Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, in today’s post.
Facebook uses a number of strategies to identify, remove and report terrorists and terrorist posts, they explain, including artificial intelligence.
Some of the “cutting edge techniques” Facebook uses include image matching, language understanding, identifying terrorist clusters, detecting new fake accounts created by repeat offenders, and more closely sharing data between Facebook, WhatsApp and Instagram to quickly and effectively act against terrorists and terrorist posts.
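Facebook hasn't published the internals of these systems, but the core idea behind image matching is straightforward: fingerprint content that has already been removed, then check new uploads against that list. A minimal sketch of the concept, using an exact cryptographic hash (production systems would use perceptual hashes that survive re-encoding and cropping; the blocklist contents here are purely hypothetical):

```python
import hashlib

# Hypothetical blocklist of fingerprints from previously removed images.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}

def image_fingerprint(image_bytes: bytes) -> str:
    """Return a deterministic fingerprint for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_bad(image_bytes: bytes) -> bool:
    """Check an upload against the blocklist of removed content."""
    return image_fingerprint(image_bytes) in KNOWN_BAD_HASHES

print(is_known_bad(b"previously-removed-image-bytes"))  # True
print(is_known_bad(b"an-unrelated-photo"))              # False
```

The exact-match approach shown here only catches byte-identical re-uploads, which is why real deployments favor perceptual hashing; the matching pipeline, however, looks much the same.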
Facebook is also working with other companies, including Microsoft, Twitter and YouTube, to identify content that’s produced by or supports terrorist organizations.
Humans play an important role in Facebook’s counterterrorism efforts, Bickert and Fishman write. A staff that will grow by 3,000 over the next year works ’round the clock to review user-reported content, and Facebook employs more than 150 people from a wide range of relevant professional backgrounds who work exclusively or primarily on combating terrorism.
These are some of the efforts Facebook has undertaken to fight terrorism, and the tools and techniques the company uses are only likely to improve over time.
“We are absolutely committed to keeping terrorism off our platform,” write Bickert and Fishman, “and we’ll continue to share more about this work as it develops in the future.”