Cyberwarfare and human rights

Impact of AI and the Dark Web on Democratic Process

Cássius Guimarães Chai


The emergence of cyber warfare has introduced a new dimension to discussions on human rights, particularly in terms of disinformation campaigns and the use of artificial intelligence (AI) on the Dark Web to shape public opinion.

The use of AI in disinformation campaigns poses a significant threat to democratic processes, as it can mislead voters and undermine elections on an unprecedented scale (Swenson, 2023). The Brazilian presidential elections offer a case in point: disinformation marked the 2018 campaign, and the legitimacy of the 2022 electoral proceedings came under direct attack. This is particularly concerning because AI-generated disinformation has become increasingly sophisticated, incorporating synthetic media designed to confuse voters, defame candidates, or even incite violence (Swenson, 2023). This asymmetric narrative warfare is also evident in armed conflicts.

The Dark Web, intentionally hidden and inaccessible through standard browsers, has also been implicated in these disinformation campaigns. Its anonymity makes it an ideal platform for spreading false information and coordinating cyberattacks (Bhattacharya, 2021).

Disinformation campaigns not only distort democratic processes but also erode public trust in them. Studies show that misinformation can damage public confidence in democracy, and false or exaggerated claims are frequently disseminated by foreign interests to delegitimize election outcomes (Brookings Institution, 2022).

The use of AI and the Dark Web in disinformation campaigns raises significant human rights concerns. For instance, the right to privacy can be infringed upon by the collection and use of personal data in these campaigns (RAND Corporation, 2023). Furthermore, the right to freedom of thought can be compromised when individuals or groups are influenced by information manipulation (UNODC, 2023). Countering these threats requires a multi-faceted approach, including efforts to detect and counter deepfakes (RAND Corporation, 2023), the development of strategies to counter disinformation (RAND Corporation, 2022), and regulation of the Dark Web (Bhattacharya, 2021). Usefully, the tactics deployed by contemporary counter-disinformation organizations can be grouped into six high-level strategies: refutation, exposure of inauthenticity, alternative narratives, algorithmic filter manipulation, speech laws, and censorship (Stray, 2019), in the form of gag orders, for example. However, these efforts must be balanced against the need to uphold human rights, such as freedom of expression (MIT News, 2022).

In conclusion, the intricate issue of cyber warfare’s impact on human rights demands a more critical perspective that delves deeper into the subject. While it is crucial to safeguard individuals and societies from the harmful effects of cyberattacks, it is equally important to ensure that measures to combat cyber warfare and cybercrime respect and uphold human rights. A more constructive approach involves engagement and the development of policies that not only strike a balance between security and human rights, but also secure a more coherent and effective commitment from the entire international community to a truly shared, equal, ethical, and accountable understanding of human dignity. Only then can we effectively combat cyber threats while upholding the fundamental values of our societies.

References:

Brookings Institution. (2022). Misinformation is eroding the public’s confidence in democracy. Retrieved from https://www.brookings.edu/articles/misinformation-is-eroding-the-publics-confidence-in-democracy/

Swenson, A. (2023). AI-generated disinformation poses threat of misleading voters in 2024 election. PBS. Retrieved from https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election

RAND Corporation. (2022). Information Warfare: Methods to Counter Disinformation. Retrieved from https://www.rand.org/pubs/external_publications/EP69000.html

Bhattacharya, D. (2021). The Dark Web and Regulatory Challenges. Manohar Parrikar Institute for Defence Studies and Analyses. Retrieved from https://www.idsa.in/issuebrief/the-dark-web-and-regulatory-challenges-dbhattacharya-230721

Brookings Institution. (2023). Despair underlies our misinformation crisis: Introducing an interactive tool. Retrieved from https://www.brookings.edu/articles/despair-underlies-our-misinformation-crisis-introducing-an-interactive-tool/

PBS. (2023). Misleading AI-generated content a top concern among state election officials for 2024. Retrieved from https://www.pbs.org/newshour/politics/misleading-ai-generated-content-a-top-concern-among-state-election-officials-for-2024

RAND Corporation. (2023). Combating Foreign Disinformation on Social Media. Retrieved from https://www.rand.org/pubs/research_reports/RR4373z1.html

Stray, J. (2019). Institutional Counter-disinformation Strategies in a Networked Democracy. Companion Proceedings of The 2019 World Wide Web Conference. Retrieved from http://jonathanstray.com/papers/Counter-disinformation%20Final.pdf

PBS. (2023). U.S. lawmakers question Meta and X over AI-generated political deepfakes ahead of 2024 election. Retrieved from https://www.pbs.org/newshour/politics/u-s-lawmakers-question-meta-and-x-over-ai-generated-political-deepfakes-ahead-of-2024-election

UNODC. (2023). Cybercrime Module 14 Key Issues: Information Warfare, Disinformation and Electoral Fraud. United Nations Office on Drugs and Crime.

Photo: Image by kjpargeter on Freepik