Press Release

Social Media Companies’ Removal of Abortion-Related Content May Hinder Access to Accurate Health Information

June 11, 2024


Removal of abortion-related content on social media platforms with inadequate or unclear justification can contribute to the increasing challenges in accessing abortion care and threaten the right to health and bodily autonomy, according to a new briefing from Amnesty International.

In Obstacles to Autonomy: Post-Roe Removal of Abortion Information Online, Amnesty International reveals that social media companies are failing to respect international human rights standards by removing abortion-related content without providing sufficient information and transparency regarding content removal decisions.

The briefing highlights how, since the 2022 U.S. Supreme Court decision overturning Roe v. Wade, major social media platforms, including Facebook, Instagram, and TikTok, have removed abortion-related content, including information on how to access abortion care.

“When tech companies remove abortion-related information, they can intensify barriers to accessing information and lead to discrimination and human rights violations against people who can become pregnant,” said Jane Eklund, Tech and Reproductive Rights Fellow with Amnesty International USA. “Access to accurate and unbiased information about abortion is an essential part of reproductive healthcare, and tech companies must do better to ensure their users can access that information.”

The briefing highlights that the removal of abortion-related content online especially harms young people because of their reliance on social media for news and information. Furthermore, after the overturning of Roe v. Wade, more than 20 states have imposed restrictions on abortion access, and some states introduced bills specifically to restrict access to abortion information online. At the time of this publication, none of those bills have passed.

Amnesty International’s research shows how, after the Court’s 2022 decision, some content with information about medication abortions (non-surgical abortions), which are safe and account for more than half of all abortions in the U.S., was removed, temporarily hidden, or marked as “sensitive content” that may “contain graphic or violent content” on major social media platforms. Other posts were removed because the platforms said the information shared violated their community guidelines, or because the platforms claimed the post was attempting to buy or sell abortion medications when it was not.

For example, on April 27, 2023, Instagram removed a post by Ipas – an organization that seeks to increase access to safe abortions and contraception – sharing the World Health Organization’s recommended protocol for medication abortion. Instagram cited its policies on the “sale of illegal or regulated goods” as the reason for the removal, even though the post did not reference the sale of medications in any way.

As the briefing shows, in 2022, as many states were rushing to ban abortions, some Planned Parenthood posts with information on where abortion was legal or restricted were blurred and marked as “sensitive content.”

Non-profit organizations such as Plan C and telehealth abortion providers like Hey Jane experienced similar content removals and, in some cases, temporary suspensions of their social media accounts with little or no explanation. More recently, in 2024, Facebook blocked the Lilith Fund – a Texas-based abortion fund that supports Texans traveling out of state to access abortion care – from posting a link to abortion resources. And Mayday Health, a non-profit that educates people about medication abortion and how to access it, had its Instagram account temporarily suspended without any warning.

“Everybody has the right to access unbiased and medically accurate information on abortion, and tech companies have a responsibility to respect human rights and should not limit users’ access to such content posted on their platforms,” said Eklund.  

The publicly available community guidelines and content moderation policies of TikTok and Meta (which owns Facebook and Instagram) fail to adequately inform users of how abortion-related content is moderated. According to these guidelines, TikTok allows “abortion discussed in a medical or scientific context related to procedures, surgeries, or examinations” (with no reference to other types of abortion-related content), and Meta does not explicitly mention abortion in any of its Community Standards.

Amnesty International requested further information from Meta and TikTok.

In response, Meta said it recognizes the right to health and allows organic content educating users about medication abortion. It also allows content offering guidance on legal access to pharmaceuticals on its platforms, but prohibits “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs”.

TikTok said its policies do not prohibit or suppress topics such as reproductive health and abortion content, including access to information, but it “prohibits content including medical misinformation.”

The questions and the companies’ full responses are included in the briefing.

“The responses from the companies do not line up with what appears to be happening on their platforms,” said Eklund. “Vague responses aren’t enough. Companies need to take transparent steps to ensure that their users are able to access abortion-related information on their platforms, and that members of civil society are given adequate explanation for any content that is removed.”

Meta and TikTok should be more transparent about how their community guidelines apply to abortion content. They should also improve transparency on the use of recommendation systems and content moderation algorithms. Additionally, they should proactively identify, prevent, and address any harms arising from their content moderation and the potential suppression of abortion-related content.

Contact: [email protected]