AI as the future of warfare: is it worth the risk?

The technological revolution is clearly unstoppable: today’s robots and computers are catching up with humans and could soon surpass them in most business tasks. In the coming decades, the development of artificial intelligence will decisively shape not only the labor market but also the future of warfare. For now, however, computers still depend on humans as conscious beings. Consciousness coupled with high intelligence remains an exclusively human attribute, the furthest reach of humanity. Organic evolution took millions of years to arrive at consciousness; yet thanks to accelerating technological progress, the evolution of inorganic computers, once it slips out of human control, could bypass these well-trodden paths and chart a much faster route to super-intelligence. Humanity stands on the verge of a scientific revolution that is producing new forms of non-conscious, artificial intelligence, which will profoundly influence both the labor market and warfare.

The development of artificial intelligence is becoming a strategic investment area for the world’s leading powers – the United States, Russia and China – and an open arena of competition among them. Addressing students in Yaroslavl, Russian President Vladimir Putin said: “Whoever becomes the leader in the artificial intelligence market will rule the world,” adding that a monopoly in this area would create an extremely undesirable situation. In Putin’s view, technological supremacy, which is shaping a new era of warfare, is becoming a fundamental source of global political power.

“Whoever becomes the leader in the artificial intelligence market will rule the world.”


It did not take long for the West to respond. Elon Musk, the American engineer, entrepreneur, visionary and founder of SpaceX, replied to the Russian president that, in his words, “competition for superiority in the field of artificial intelligence at the national level is likely to cause World War III.” An open letter to the UN, signed by 116 leading figures in the field of artificial intelligence, warns of a “third revolution” in warfare, in which autonomous technology amounts to a deadly Pandora’s box. The experts call on the United Nations to ban further research into and construction of deadly killer robots, which could become formidable weapons of terror against civilian populations. Such technology, they argue, should be added to the list of weapons banned under the UN Convention on Certain Conventional Weapons (CCW), because it is “morally completely wrong.” A similar appeal, signed by more than a thousand experts, scientists and researchers – including British physicist Stephen Hawking, Apple co-founder Steve Wozniak and Elon Musk – was addressed to the public in 2015.

“The danger of AI is much greater than the danger of nuclear warheads. By a lot… Mark my words: AI is far more dangerous than nukes.”



While everyone agrees that this technology has the power to reset the global balance of power, it remains to be seen what its practical application will actually look like. U.S. expert Michael Horowitz divides the military applications of this technology into three main directions: the first allows machines to operate without human supervision; the second involves processing and interpreting large amounts of data; and the third assists – if not outright replaces – human command and control in combat situations.

1. Allowing machines to operate without human supervision
RoMan army robot

The autonomy of machines on the battlefield has long been a dream of the defense industry. For it to make sense (and not be reduced to the automatic execution of war crimes), however, much more is needed than combat systems wandering the ground during a conflict: it requires pattern recognition and navigation, but also many further skills, such as coordination with other elements of the battlefield. Today’s artificial intelligence systems, even when successful, operate at a low level of generality – they are usually narrowly specialized, and often effective only in a clearly defined environment, with no real capacity for independent learning or complex reasoning. Good examples of weapon systems with artificial intelligence are anti-radar missiles that loiter until enemy sensors switch on, and rapid-fire systems that defend ships and ground installations against all sorts of airborne threats. A special area of development in this field is automated manipulation: systems able first to recognize objects in space and then to grasp them and do something with them – for example, to recognize a barricade composed of various objects and then disassemble it by grasping and releasing its individual parts. This capability has been demonstrated with autonomous robots at the U.S. Army Research Laboratory (ARL) in Adelphi, Maryland, as part of the Robotics Collaborative Technology Alliance program. After machine learning and the application of so-called “intuitive physics,” their RoMan robots can manipulate a certain range of random objects on unfamiliar terrain. In the future, the researchers aim to make these autonomous systems work faster, using a variety of supports and their own weight to manipulate a wider range of objects – doing such jobs the way people do.
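The navigation skill mentioned above can be illustrated with a toy example. The sketch below is purely illustrative and not based on any real military system: it plans a shortest route across a small occupancy grid with breadth-first search, the simplest building block behind obstacle-avoiding navigation. All names and the map itself are invented for this illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.
    grid: list of strings where '#' marks an obstacle and '.' free space.
    Returns the shortest list of (row, col) cells from start to goal,
    or None if the goal cannot be reached."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}  # each visited cell remembers its predecessor
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Walk the chain of predecessors back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable

# A hypothetical terrain map with a barricade in the middle.
terrain = ["....",
           ".##.",
           ".#..",
           "...."]
route = plan_path(terrain, (0, 0), (3, 3))
```

Real battlefield navigation adds continuous space, sensor noise and moving obstacles, which is precisely why the narrow, well-defined settings described above remain the limit of current systems.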

2. Processing and interpreting large amounts of data

An important prerequisite for successful action is awareness of the situation on the battlefield. Considering that back in 2011, about 11,000 drones in U.S. military use recorded over 327,000 hours of video – and all those numbers have exploded in recent years – it is clear that intelligence processing of all this material is a very attractive field for artificial intelligence. Such software solutions surpassed human performance in general image classification in 2015, while between 2015 and 2018 machines’ ability to resolve the component objects within individual analyzed images also improved radically. Although the recognition of observed objects remains uncertain and subject to numerous errors, these are fields of intensive development and research. In February 2017, the Pentagon declared that its algorithms “could do this job at a human-like level” and set up an “algorithmic warfare” team, known as “Project Maven,” which used machine learning and other techniques to identify objects and suspicious activity in the various materials produced during the war against the Islamic State – work that Google employees refused to take part in over ethical concerns. Although the goal was to create “actionable intelligence,” insiders claim that its effectiveness was actually marginal, with particular emphasis on a large number of false hits in visual recognition. This was only the beginning, however: today the British company Earth-i claims that its system can identify up to 98 percent of the different variants of military aircraft in satellite photos of military bases.
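The “large number of false hits” the insiders complain about is exactly what the standard precision and recall metrics measure. A minimal sketch, with entirely hypothetical numbers chosen for illustration (the sources above report no such tallies):

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Standard detection metrics: precision penalizes false alarms,
    recall penalizes missed objects."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical tally for a batch of analyzed drone frames:
# 80 real objects found, 120 false alarms, 20 objects missed.
p, r = precision_recall(80, 120, 20)
# Here precision is only 0.4: most of what the detector flags is noise,
# even though it catches 80 percent of the real objects (recall 0.8).
```

A detector can thus look impressive on a headline number like “98 percent of variants identified” (a recall-style figure) while still burying analysts in false alarms – which is why the two metrics are always read together.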

3. Command and control in combat situations

The third direction of influence of artificial intelligence technology is its use in decision-making, from the tactical level in the field to the strategic levels of state leadership. A good example is Northern Arrow, a system from the Israeli company UNIQAI and one of many solutions designed to help commanders plan missions. Such systems process large amounts of data on mission variables – enemy positions, weapon ranges, fuel consumption, terrain and weather conditions – data otherwise obtained by poring over maps, expert tables and technical manuals, here refined with the experience of specific commanders. On this basis, expert systems such as the Israeli Northern Arrow or the American CADET produce concrete options for action faster than any person, and present them to decision-makers together with an explanation of the path taken to reach each conclusion.
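The core idea of weighing mission variables and ranking options can be sketched in a few lines. This is not how Northern Arrow or CADET actually work – their internals are not public – but a deliberately simplified illustration; every weight, option name and number below is invented.

```python
# Hypothetical weights over the mission variables the text lists:
# exposure to the enemy and fuel cost count against an option,
# favorable terrain speed counts for it.
WEIGHTS = {"enemy_exposure": -3.0, "fuel_cost": -1.0, "terrain_speed": 2.0}

# Three invented courses of action, each scored on the same variables.
options = {
    "flank_north":        {"enemy_exposure": 2, "fuel_cost": 5, "terrain_speed": 4},
    "direct_assault":     {"enemy_exposure": 8, "fuel_cost": 2, "terrain_speed": 6},
    "night_infiltration": {"enemy_exposure": 1, "fuel_cost": 7, "terrain_speed": 2},
}

def score(option):
    """Weighted sum of mission variables for one course of action."""
    return sum(WEIGHTS[k] * v for k, v in option.items())

# Rank the options best-first; keeping the per-variable breakdown alongside
# the score is what lets such a tool explain *why* it recommends an option.
ranked = sorted(options, key=lambda name: score(options[name]), reverse=True)
```

Even this toy version shows the appeal and the risk at once: the ranking is produced instantly, but it is only as good as the weights and estimates someone fed into it.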


When it comes to the future of warfare, experts warn that artificial intelligence has reached a point where the development of autonomous weapons – drones, unmanned tanks or machine guns – is possible within a few years. “Killer robots” could pick a target and shoot at it without human intervention. Hollywood movies like Terminator, Transformers, RoboCop and Star Wars have already shown us what a relentless cyborg fight might look like. Elon Musk continually warns of the need to regulate artificial intelligence, whose destructive potential he considers the greatest threat to human existence; indeed, he belongs to those who demand that all research in this area be stopped. Recall, too, that Facebook recently halted an experiment after two of its algorithms began communicating with each other in a language of their own devising that the researchers could not understand.

The reactions of practitioners, unburdened by ethical and moral dilemmas, are quite the opposite. The retired British general Sir Richard Barrons has addressed an open letter to a group of experts, stating that killer robots are virtually inevitable and will certainly be used in future wars in place of manpower. Military forces will not give up the development of these weapons: “Why send a beardless young man into a war conflict if a machine can fight in your name?” the general asked. Economics is the main imperative here too – killer robots are significantly cheaper than soldiers, who require extensive logistics and on whom most military budgets, whether of states or corporations, are now spent.

The development of artificial intelligence weapons is clearly unstoppable. Samsung has developed the SGR-A1, an autonomous sentry weapon that fires independently, recognizes voices and keeps watch; whether it is already deployed along the demilitarized zone on the South Korean border is disputed. The Russians have presented the “Voron 777-1,” an autonomous helicopter-type unmanned aerial vehicle, as well as the “Uran-9” unmanned combat vehicle, and are developing the fearsome nuclear-powered autonomous underwater vehicle “Poseidon.” The Kalashnikov concern recently produced the “REX-1,” an automated counter-drone module capable of downing unmanned aerial vehicles. The United States, of course, has long been developing weapons of this kind: autonomous fighter jets that perform all mission tasks independently are potentially far more dangerous than conventional manned fighters. The autonomous warship Sea Hunter and the Aegis combat system are just some of the smart weapons that, thanks to artificial intelligence, enjoy autonomy in decision-making.

A Black Hornet nano helicopter unmanned aerial vehicle (UAV).

Particular attention is drawn to the intimidating “PD-100” drones – a new type of flying robot, weighing only 18 grams, used for personal reconnaissance. The PD-100, also called the Black Hornet, opens a completely new chapter in eavesdropping, surveillance and reconnaissance. The United States is spending billions of dollars to develop autonomous or semi-autonomous weapons that until recently existed only in movies and science fiction.

These weapons pose a real challenge to scientists, but they also raise a number of ethical issues about their possible military and civilian use. Numerous human rights organizations have repeatedly called for a ban on the production, development and use of drones because of the unethical warfare they enable. The mass conscription of the two world wars is a thing of the past: modern warfare relies heavily on “mercenaries,” privatized agencies and modern military technology, and military power in the 21st century will increasingly rest on artificial intelligence. Drones and other autonomous weapons are gradually replacing the mass armies of the twentieth century, and generals are increasingly leaving important strategic decisions to algorithms.


If artificial intelligence technology really matches nuclear weapons technology in its importance for the defense sector, as many predict, then the basic principles that tamed the “nuclear threat” – deterrence, arms control and safety measures – could perhaps prove useful. First, deterrence worked for nuclear weapons because of the general awareness of the global consequences of their use; in the case of artificial intelligence, such general harmful effects, even if potentially present, are not clearly identifiable. Second, Cold War-era arms control relied on mutual transparency – but unlike nuclear weapons, the software code of an artificial intelligence system cannot be seen from a satellite, and showing it to the other side would likely also reveal its properties and effectiveness. That leaves only the third element – safety measures ensuring that these assets are not used without special attention and authorization. Safety measures suitable for controlling artificial intelligence technology, however, have yet to be developed – with an emphasis on the general human purposes of their use, which, military applications notwithstanding, must be verifiable and clear enough to be monitored at least in principle (understanding how individual decisions are made). Although the Pentagon’s spending on the development of AI technologies was only a fraction of the 20–30 billion USD invested by U.S. private companies in 2016, state participation remains of great importance and influence. And while Western countries in principle insist on keeping a human in the decision-making loop of diverse artificial intelligence systems, it is clear that this condition significantly slows such systems down.
In military applications, of course, there will be a tendency to speed up decision-making and give maximum autonomy to such, perhaps immature, systems – a challenge that has yet to be addressed in practice. While the Pentagon plans to hire ethicists for this purpose, some experts advise that the solution may lie in building a fundamental value orientation into future artificial intelligence systems, ensuring the priority of certain general human interests over operational goals and cold machine logic.
