Common threats facing Facebook app users in the MENA region

9 February 2021

Some threats are more prevalent among users in the MENA region, so it’s important to spotlight them individually, define their symptoms, and outline possible ways to mitigate their severity. FB employs policies, tools, and technology to combat all threats, and we highlight the specific ones below.

JOSA

Child exploitation

FB does not tolerate any behavior or content that exploits children online, and it develops safety programs and educational resources to help make the internet a safer place for children. Users play an essential role in stopping child exploitation on FB. Child exploitation online can be categorized into three areas:

  • Inappropriate content: which means the dissemination and distribution of child exploitation material. Examples of inappropriate content would be creating a post that contains a picture of a child being physically abused, publishing a video of children being sexually exploited, sharing content containing any sort of child abuse, etc.
  • Inappropriate conduct: which means the act of enticing or forcing a child to take part in an inappropriate activity. Examples of inappropriate conduct would be to reply to a child’s post with an inappropriate comment, stalking a child and making them feel unsafe, harassing a child, etc.
  • Inappropriate contact: which means soliciting a child online, often for the purpose of sexual gain. An example of inappropriate contact would be to seek contact with a child online with the purpose of abusing the child online or offline, and/or forcing them to participate in a sexual activity, including sharing or viewing intimate images and/or videos.

 

How is FB fighting child exploitation?

One of FB’s most important responsibilities is keeping children safe across its family of applications. FB takes a multi-pronged approach to combating child exploitation on its platforms.

  • Policies | Policies that provide unique protections for minors: FB platforms require everyone to be at least 13 years old before they can create an account (in some jurisdictions, this age limit may be higher). Providing a false age when creating an account violates FB’s terms of service. People can report an account belonging to someone under 13. FB does not allow content that sexually exploits or endangers children. When FB becomes aware of apparent child exploitation, the company reports it to the National Center for Missing and Exploited Children (NCMEC) in compliance with applicable law. FB also limits how 13-to-18-year-olds can interact with other users and age-gates the content they see.
  • Tools and Technology | Automatic removals: FB has designed its platforms to give people control over their own experiences: what they share, who they share it with, the content they see, and who can contact them. These tools empower individuals to protect themselves against unwanted content, unwanted contact, and online bullying and harassment. When it comes to users aged 13-18, FB takes extra precautions: many features are designed to remind them who they’re sharing with and to limit interactions with strangers. FB uses machine learning to automatically identify child exploitative content and proactively remove it from its platforms. FB also adds confirmed child exploitative material to a shared image bank, so that material matching images already in the bank can be removed automatically.
  • Partnerships | Partnership with experts: FB develops safety programs and educational resources with more than 400 organizations around the world to help make the internet a safer place for children. This work has included using photo-matching technology to stop people from sharing known child exploitation images, reporting violations to the National Center for Missing and Exploited Children (NCMEC), which works with law enforcement agencies around the world to help victims, and helping NCMEC develop new software to prioritize the reports it shares with law enforcement so that the most serious cases are addressed first.
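The shared image bank described above works by comparing fingerprints (hashes) of uploaded images against a database of known material. Below is a minimal sketch of that idea; it uses a plain cryptographic hash in place of the perceptual-hashing systems platforms actually deploy (which also match resized or re-encoded copies), and all function names and data here are illustrative assumptions, not FB’s actual API.

```python
import hashlib

# Illustrative bank of fingerprints of previously confirmed material.
known_hashes = set()

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint for an image.

    Real systems use perceptual hashes that survive resizing and
    re-encoding; SHA-256 here only matches byte-for-byte copies.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def register_confirmed(image_bytes: bytes) -> None:
    """Add confirmed violating material to the shared bank."""
    known_hashes.add(fingerprint(image_bytes))

def should_remove(image_bytes: bytes) -> bool:
    """Check a new upload against the bank of known material."""
    return fingerprint(image_bytes) in known_hashes

# Usage sketch: once an image is registered, exact copies are flagged.
register_confirmed(b"confirmed-image-bytes")
print(should_remove(b"confirmed-image-bytes"))  # True
print(should_remove(b"some-new-image-bytes"))   # False
```

Hash sharing is why this approach scales across organizations: partners can exchange fingerprints of known material without ever exchanging the images themselves.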

Bullying and harassment

As a user of one of FB’s family of applications, it’s important to understand what constitutes bullying. If you’re unsure whether something is meant as a joke or is plain bullying, ask yourself whether the behaviour hurts you. If the answer is yes, it constitutes bullying.

Bullying is the act of repeatedly tormenting, bothering, and annoying a person; it can also involve humiliating and demeaning someone. Harassment is hurtful behaviour that is discriminatory in nature, based on someone’s race, religion, gender, sexual orientation, etc.

 

Examples of cyberbullying can include:

  • Spreading rumours and secrets about someone, often with the aim of humiliating them, e.g. “you sleep with all the men in your workplace”
  • Sending hurtful messages or threats, e.g. “you’re stupid”, “you deserve to die”
  • Pranking someone via video or voice call
  • Sharing embarrassing photos of a friend with the aim to ridicule them

How is FB stopping bullying and harassment?

  • Tools | Giving users more control: FB has rolled out features to give users control over their online presence. Users on Facebook and Instagram can remove offensive and hurtful comments, report a user with ease, and even report users or comments on behalf of friends or family members being bullied or harassed.
  • Technology | Notifying users when they write offensive captions: On Instagram, when a user attempts to post an image with a potentially offensive caption, Instagram automatically detects the caption text and notifies the user. The notification is a heads-up that the caption resembles other content that has been reported, and that posting it might break the community guidelines.
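The caption warning above can be thought of as a nudge rather than a block: the post is compared against previously reported content, and the user gets a chance to reconsider. A toy sketch of the flow follows; real platforms use trained ML classifiers, while this illustrative version just checks captions against a small list of reported phrases (drawn from the examples earlier in this article), and every name in it is an assumption for illustration only.

```python
# Phrases taken from the cyberbullying examples above; a real system
# would learn from millions of reported posts, not a hand-made list.
reported_phrases = {"you're stupid", "you deserve to die"}

def looks_offensive(caption: str) -> bool:
    """Return True if the caption resembles previously reported content."""
    text = caption.lower()
    return any(phrase in text for phrase in reported_phrases)

def prepare_post(caption: str) -> str:
    """Warn the user before posting, instead of blocking outright."""
    if looks_offensive(caption):
        return "Heads-up: this caption looks similar to others that were reported."
    return "OK to post."
```

The design choice worth noting is that the user keeps the final say: the warning nudges them to self-correct, which is less heavy-handed than automatic removal.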

Non-consensual sharing of intimate images (NCII)

Non-Consensual Sharing of Intimate Images (NCII for short) is when someone’s intimate images are shared without their permission. This is also referred to as revenge porn. When someone threatens to share intimate images unless the victim agrees to do something in return, such as engaging in a sexual activity or sending more intimate images, it’s categorised as sexual extortion, or sextortion.

 

How is FB stopping NCII?

  • Technology | Photo-matching technology: FB uses machine learning to automatically find and detect nude and near-nude images and videos shared without consent on Facebook and Instagram.
  • Tools | Reporting: On Facebook, Instagram or Messenger, you can report someone who shares your intimate images without your consent, or who threatens to do so. You can learn how to report things on Facebook and Instagram (and how to report messages on Instagram). On Facebook, you can also use the “Report” link that appears when you tap the downward arrow or “...” next to the post.
  • Tools | Photo bank: If you are concerned that someone may share your intimate images on Facebook, Instagram or Messenger but they haven’t done so yet, and you have access to the images, you can contact one of FB’s trusted partners to help you prevent anyone from sharing them.

Gender-based violence

Gender-based violence (or GBV for short) is threats and violent behaviour targeted at women online. This can take the form of: cyberstalking, (s)extortion, doxing, cyberbullying, harassment, hate speech, shaming, and more.

 

How is FB stopping GBV?

  • Rules | Pragmatic policies: FB has rewritten its policies, with regard to hate speech for example, in consultation with women’s safety experts. The policies are clearer and more pragmatic, and take into account the disproportionate amount of violence women receive online.