Founded in 2020, the Board is the first-ever experiment by a social media platform in allowing independent oversight of content moderation decisions. Created by Meta, but firmly independent from it, the Board promotes freedom of expression and other human rights online, while also strategically tackling the most harmful content on social media. The 23-person Board, conceptualized in line with the effectiveness criteria for operational grievance mechanisms and access to remedy under the UN Guiding Principles on Business and Human Rights (UNGPs), consists of global experts from fields such as government, law, journalism and activism. They review cases concerning the world's most complex moderation issues and make binding decisions on whether Meta keeps up or takes down the posts in question. The Board also issues policy recommendations to better align Facebook's and Instagram's policies with international human rights principles.
AI Governance: A Collective View from the South
Paris Call: Taming the Cyber Mercenary Market
Paris Call: Protecting Critical Infrastructures Against Systemic Harms
Hostile Influence Operations Ahead of the 2022 French Elections
Harmful Content Working Group: Progress Report
Multistakeholder Workstream on Public-Private Partnerships in Fighting Ransomware Threats