December 8, 2024

New Research Warns About the Need to Understand and Manage AI

Artificial Intelligence (AI) and algorithms can have detrimental effects on society, including the spread of racism, political instability, and radicalization, warns a Lancaster University academic. Professor Joe Burton, an expert in international security, argues that while AI is often viewed as a tool to counter violent extremism, it can also contribute to polarization, radicalization, and political violence, posing a threat to national security. His research, published in the Elsevier journal Technology in Society, explores how AI has been securitized over time and examines modern examples of AI's polarizing and radicalizing effects.

The securitization of AI, according to Burton's paper, has led to the perception of the technology as an existential threat. This perception has shaped how AI is designed and used, and the harmful outcomes it has produced. The classic film series The Terminator is cited as a prime example of how popular culture has shaped public awareness of AI. The series depicts a sophisticated and malignant AI that starts a nuclear war and attempts to exterminate humanity. Such depictions have fostered distrust in machines, associated them with biological, nuclear, and genetic threats, and prompted governments and security agencies to influence the development of AI.

Professor Burton highlights the increasing autonomy of sophisticated drones, which are now capable of functions like target identification and recognition. Similarly, in the realm of cyber security, AI is extensively used in areas such as disinformation and online psychological warfare. The Russian government's interference in the 2016 US presidential election, together with the Cambridge Analytica scandal, demonstrated the potential of AI combined with big data to manipulate identity groups and encourage radical beliefs, thereby dividing societies.

While AI has shown promise in positive applications, such as tracking and tracing the virus during the COVID-19 pandemic, concerns have been raised about privacy and human rights. The design of AI, the data it relies on, its use, and its outcomes and impacts all present challenges, according to the paper.

Professor Burton concludes with a message to researchers in cyber security and international relations, emphasizing the need to better understand and manage the risks associated with AI. He calls for a deeper understanding of the divisive effects of AI at all stages of its development and use. It is crucial for scholars in these fields to incorporate these factors into the AI research agenda and not treat AI as a politically neutral technology.

Lancaster University, recognized by the UK’s National Cyber Security Centre, is at the forefront of cybersecurity education and research. To address the growing demand for cybersecurity professionals, the university offers certified Master’s and undergraduate degrees in cyber security. Additionally, it has launched a Cyber Executive Master’s in Business Education program to train future leaders in this field.

In conclusion, the research highlights the importance of understanding and managing the impact of AI on society. While AI has the potential for positive transformation, its risks should not be overlooked. It is crucial to navigate the ethical and societal implications of AI to ensure its responsible and beneficial deployment.

Money Singh

Money Singh is a seasoned content writer with over four years of experience in the market research sector. Her expertise spans various industries, including food and beverages, biotechnology, chemicals and materials, defense and aerospace, and consumer goods.
