

Center Fights Digital Threats to Democracy, Human Rights

A voter casts her ballot in Tunisia. As online interference increasingly has real-world consequences for elections in many countries, The Carter Center is looking at multiple ways to mitigate threats. (Photo: The Carter Center)

Two and a half years ago, recognizing that what happens online increasingly has real-world consequences, The Carter Center launched the Digital Threats to Democracy Project.

Following a pilot project to monitor social media for hate speech and disinformation during South Africa’s 2019 election, the Center went on to conduct digital monitoring activities in Tunisia, Côte d’Ivoire, Myanmar, Bolivia, Honduras, Ethiopia, and, most recently, the United States.

"In recent years, we’ve seen how online activity can undermine democracy and human rights and increase political polarization and the likelihood of violence," said Michael Baldassaro, who heads the Digital Threats project. "If we want to safeguard democracy, we’ve got to address issues like disinformation, hate speech, online genderbased violence, and government cybersurveillance and internet shutdowns."

We asked Baldassaro to talk about what The Carter Center has done and plans to do to help protect democracy, human rights, and the rule of law:

Q. First, what constitutes a digital threat?

I tend to say that anything that touches the internet and can cause harm is a digital threat, but when the Center talks about digital threats, it’s referring to:

• Systematic efforts to spread disinformation online.

• Unchecked hate speech and online gender-based violence on social media platforms.

• Measures taken by governments to restrict access to online information.

• Unlawful cybersurveillance that undermines the right to privacy.

• Cyberattacks that disrupt services and expose personal data.

Q. Can you tell us a little about some of the Center’s past efforts in this area?

In Ethiopia, we collaborated with the Georgia Institute of Technology to provide training and tools to help a local partner monitor and identify disinformation and hate speech in real time and flag content for Facebook so it could downrank or remove it.

In Myanmar, we collaborated with Privacy International to analyze the laws governing data protection and cybersecurity and identify weaknesses that need to be addressed to safeguard personal data; we then advocated for those reforms in our election observation statements.

In Bolivia, we partnered with Chequea Bolivia, a nonpartisan fact-checking organization, to debunk content that could undermine trust in the election process or suppress voter participation.

We also collaborated with the cybersecurity company Cloudflare to support the development of a tool to monitor online traffic patterns around the world to identify internet shutdowns that restrict freedom of expression and access to information.

Q. What kinds of activities are you planning now?

We are constantly establishing new partnerships and developing new tools. For example, we are collaborating with the International Fact-Checking Network and Lead Stories to help nascent fact-checking initiatives around the world professionalize and establish formal relationships with social media platforms to reduce the visibility and reach of debunked content.

We’ve just entered a partnership with Microsoft to develop an artificial intelligence model that can quickly identify potentially false and misleading narratives that require attention by platforms and fact-checkers.

And we’re planning to develop media literacy and digital security educational curriculums that could be used globally to help individuals critically evaluate information and sources and practice good cyber hygiene to protect their privacy and safety online.

Q. How does this work strengthen democracy, human rights, and the rule of law?

By helping organizations around the world better identify disinformation, hate speech, and online gender-based violence, we can help blunt the reach and visibility of harmful content that could undermine elections or trigger conflict.

By shining a light on government internet shutdowns, we can help deter those who seek to silence human rights defenders and perpetrate violence.

And we see our upcoming media literacy and digital security modules as ways to empower citizens to inoculate themselves against potential harms, rather than simply relying on governments and tech companies to protect them.

It’s worth saying that none of these activities alone will fix these problems, and new and unforeseen threats emerge all the time.

We’re going to have to work together as a society to develop solutions that protect the truth and uphold democratic norms without infringing on freedom of expression or the right to privacy.


Related Resources

Story | The Carter Center Analyzes Misleading Content on Facebook »

Report | The Big Lie and Big Tech (PDF) »

Press Release | The Carter Center Releases 'The Big Lie and Big Tech' »

Learn more about the Center's Democracy Program »

Learn more about the Center's Human Rights Program »
