Analyses

The New World Order of Artificial Intelligence - Armies with “Killer Robots”

Ethical, Political, and Legal Dilemmas on the Horizon

By Charalambos Charalambides

Master’s Student in International Law

The rapid technological advancements of the 21st century have significantly transformed various aspects of human life, with Artificial Intelligence (AI) emerging as one of the most groundbreaking innovations. Today, AI permeates everyday life, from virtual assistants on smartphones to self-driving vehicles. Unsurprisingly, the military sector has begun integrating this technology to develop new generations of weapons. These AI-driven weapons, known as autonomous weapons systems, have sparked intense debate. Humanitarian organizations often refer to them as “killer robots” due to their unpredictable behavior and the potentially irreversible consequences they pose for humanity.

This raises pressing questions:
1. What exactly are autonomous weapons?
2. How have they been used in conflict zones?
3. What ethical, legal, and political dilemmas do they bring?

What Are Autonomous Weapons?

Autonomous weapons are machines capable of operating without human intervention. They utilize AI algorithms and sensory input to identify and neutralize targets. Unlike traditional unmanned aerial vehicles (UAVs) or drones, which require manual control, autonomous weapons are pre-programmed to act independently, often based on defined “target profiles.” These systems employ facial recognition and other advanced technologies to select and eliminate targets.
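To make the notion of a pre-programmed “target profile” more concrete, the short Python sketch below shows, in deliberately simplified form, how a detection from an onboard classifier could be checked against fixed criteria entirely in code. Every name, field, and threshold in it is hypothetical; it illustrates the concept only and does not describe any real weapons system.

```python
from dataclasses import dataclass

# Purely illustrative: a toy "target profile" check showing, in the abstract,
# how pre-programmed criteria could be applied to a sensor detection without
# any human judgment. All names and thresholds are invented for this sketch.

@dataclass
class Detection:
    object_class: str            # label produced by an onboard classifier
    confidence: float            # classifier confidence between 0.0 and 1.0
    inside_engagement_zone: bool

@dataclass
class TargetProfile:
    object_class: str            # class of object the system is told to engage
    min_confidence: float        # minimum confidence required before engaging

def matches_profile(detection: Detection, profile: TargetProfile) -> bool:
    """Return True only if every pre-programmed criterion is satisfied."""
    return (
        detection.object_class == profile.object_class
        and detection.confidence >= profile.min_confidence
        and detection.inside_engagement_zone
    )

# A confident detection matching the profile is "selected" by code alone.
profile = TargetProfile(object_class="armored_vehicle", min_confidence=0.9)
detection = Detection(object_class="armored_vehicle", confidence=0.93,
                      inside_engagement_zone=True)
print(matches_profile(detection, profile))  # True
```

The point of the sketch is that the decision reduces to a boolean condition: once the criteria are met, nothing in the logic itself requires a human to weigh the context.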

Some are designed to detect potential threats and strike preemptively, especially in reconnaissance missions. Their functionality, however, isn’t limited to offensive operations. They can also serve defensive purposes. Despite remarkable technological progress, fully autonomous lethal weapons have not yet been deployed on a wide scale. Their use remains restricted and typically requires oversight or approval from a human commander.
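The difference between supervised use and full autonomy can be illustrated the same way. In the hypothetical gate below, an engagement is authorized only when an explicit human approval is recorded; a fully autonomous system is, in effect, the same logic with that check removed. As before, all names are invented for illustration.

```python
from enum import Enum
from typing import Optional

class Decision(Enum):
    HOLD = "hold"
    ENGAGE = "engage"

def engagement_decision(profile_match: bool,
                        human_approved: Optional[bool]) -> Decision:
    """Toy decision gate, illustrative only.

    With human oversight, engagement requires explicit approval; a fully
    autonomous system would simply drop the human_approved check.
    """
    if not profile_match:
        return Decision.HOLD
    if human_approved is None:      # no response from the commander yet
        return Decision.HOLD
    return Decision.ENGAGE if human_approved else Decision.HOLD

# Even a confirmed profile match is held until a human decides.
print(engagement_decision(profile_match=True, human_approved=None))  # Decision.HOLD
print(engagement_decision(profile_match=True, human_approved=True))  # Decision.ENGAGE
```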

Real-World Deployments and Conflicts

Although still emerging, autonomous weapons have already been tested in real conflicts. Nations such as China, Israel, South Korea, Russia, the United Kingdom, and the United States are actively developing and deploying these systems. Their capabilities have been put to the test in ongoing conflicts, notably in Ukraine and Gaza.

The War in Ukraine

Since the outbreak of the war, Ukraine has employed autonomous systems and AI-driven technologies both to document the conflict and to defend against Russian cyberattacks. For instance, Saker Scout drones can autonomously identify and strike 64 different types of Russian military equipment, even in areas with radio interference that disables conventional drones. In 2023, Ukraine launched mass drone strikes on Russian naval assets in Sevastopol, Crimea.

Russia, too, has invested heavily in AI and autonomous weapons, developing platforms such as the Orion UAV for surveillance and combat missions. In 2023, it conducted a large-scale drone attack on Kyiv, relying in part on Iranian-made drones. Russia has also deployed “loitering munitions,” or “kamikaze drones,” which hover over a target zone and crash into identified threats.

The Conflict in Gaza

In Gaza, Israel has employed one of the world’s most sophisticated autonomous defense systems, the Iron Dome. This system has proven highly effective in intercepting rockets and missiles, protecting cities and strategic assets. Additionally, Israel has used drones for reconnaissance missions in Hamas’ underground tunnel networks. These drones are equipped with collision-avoidance sensors and navigation software that minimize risks and casualties.

Emerging Challenges from Autonomous Weapons

The rise of autonomous weapons presents a host of pressing issues that are now under global scrutiny. These challenges fall broadly into two categories: ethical and political concerns on the one hand, and legal dilemmas on the other.

Ethical and Political Concerns

The core ethical dilemma lies in the absence of human judgment in the use of lethal force. Machines possess neither emotions nor moral reasoning, making them fundamentally different from, and potentially more dangerous than, human soldiers. They cannot distinguish between situations where mercy is warranted and those where force is justified, thereby disregarding human dignity and ethics. Moreover, autonomous systems may inherit or amplify biases embedded in their algorithms, leading to the targeting of innocent civilians on the basis of race or other discriminatory factors. This could result in grave violations of international law, including crimes against humanity and war crimes.
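The bias concern can also be made concrete with a toy simulation. The sketch below invents a “threat score” whose threshold works acceptably for one population but, because the hypothetical model saw little data from another, flags far more innocent members of that second group. All numbers are fabricated for illustration; the point is only that a skewed model combined with an automatic threshold can produce systematically unequal errors.

```python
import random

random.seed(0)

def simulated_score(is_threat: bool, group: str) -> float:
    """Toy 'threat score'. Non-threats from group B score higher on average,
    standing in for a model trained on unrepresentative data. Numbers invented."""
    base = 0.8 if is_threat else 0.2
    bias = 0.25 if (group == "B" and not is_threat) else 0.0
    return min(1.0, base + bias + random.gauss(0.0, 0.1))

THRESHOLD = 0.5  # calibrated, in this toy, against group A only

for group in ("A", "B"):
    trials = 10_000
    false_positives = sum(
        simulated_score(is_threat=False, group=group) >= THRESHOLD
        for _ in range(trials)
    )
    print(f"group {group}: {false_positives / trials:.1%} of non-threats flagged")

# Typical result: close to 0% of group A but roughly a third of group B is
# wrongly flagged, the kind of disparity the text warns could translate into
# unlawful targeting.
```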

Legal Dilemmas

Accountability is the most pressing legal question: who is responsible when an autonomous weapon commits a violation of international criminal law? Should the blame fall on the commanding officer who authorized its use, or on the programmer who designed the flawed algorithm? At present, there is a significant legal vacuum concerning the regulation of autonomous weapons and responsibility for their use. Closing this gap is urgent in order to prevent future abuses and ensure compliance with international law.