Artificial Intelligence has begun to reshape the landscape of modern warfare. From automated reconnaissance to AI-assisted decision-making on the battlefield, the technology is evolving rapidly - and its implications are far-reaching. This article takes an objective look at AI's impact on warfare: it starts with the attack chain in modern warfare and its cyber counterpart, then examines what is currently being done in this space, big tech's involvement, and the ethics and legality of military AI use.

The Attack Chain in Modern Warfare

To understand what Artificial Intelligence is trying to achieve on the battlefield, we must first understand the attack chain. The cyber attack chain - often called the cyber kill chain - was inspired by the attack chain used in modern warfare, and the two share a number of structural similarities.

The traditional attack chain in modern warfare consists of four key steps (a minimal code sketch follows the list):

  • Identification of target: Reconnaissance and data collection activities are performed to identify a target, similar to the reconnaissance phase of the cyber attack chain.
  • Dispatch forces: Assets are dispatched to engage the target based on the intelligence gathered.
  • Initiation of attack: Assets engage the target using the most appropriate tactics and methods available.
  • Destruction of target: The operation concludes with the achievement of strategic objectives.
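
These four stages are strictly ordered - each one depends on the output of the stage before it. As a minimal illustration, the chain can be modeled as an ordered pipeline in which an operation only advances once its current stage completes (the stage names mirror the list above; everything else is hypothetical):

```python
from enum import Enum, auto


class KillChainStage(Enum):
    """The four stages of the traditional military attack chain."""
    IDENTIFY_TARGET = auto()
    DISPATCH_FORCES = auto()
    INITIATE_ATTACK = auto()
    DESTROY_TARGET = auto()


def next_stage(current: KillChainStage) -> KillChainStage | None:
    """Return the stage that follows `current`, or None when the chain ends."""
    stages = list(KillChainStage)
    index = stages.index(current)
    return stages[index + 1] if index + 1 < len(stages) else None


# Walk an operation through the chain in strict order.
stage = KillChainStage.IDENTIFY_TARGET
while stage is not None:
    print(f"Stage: {stage.name}")
    stage = next_stage(stage)
```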

Executed manually, this chain may take days, weeks, or months per operation and consume a significant amount of defense funding. AI and large language models (LLMs), however, can help automate the process, cutting both planning time and money spent by a significant margin.

The Cyber Attack Chain

The cyber attack chain mirrors the military model and consists of several well-defined stages:

  • Reconnaissance: Probing the target environment to identify vulnerabilities, mirroring the target-identification phase above.
  • Weaponisation: Crafting malware and specialized tooling designed to exploit the identified weaknesses.
  • Delivery: Exploiting those vulnerabilities to deliver a payload to the targeted systems.
  • Installation and Exploitation: Installing persistence mechanisms and performing post-exploitation activities to maintain access and expand control.

The parallel between military and cyber attack chains is no coincidence. Both follow a structured methodology of intelligence gathering, preparation, execution, and exploitation. AI is poised to accelerate every step of both chains.
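That correspondence can be made explicit. The pairing below is simply one reading of how the stages line up - the analogy is the article's, and the dictionary is just a compact way to state it:

```python
# One reading of how the military and cyber attack chain stages line up.
# Both chains move from intelligence gathering toward exploitation.
CHAIN_PARALLEL = {
    "Identification of target": "Reconnaissance",
    "Dispatch forces": "Weaponisation",
    "Initiation of attack": "Delivery",
    "Destruction of target": "Installation and Exploitation",
}

for military_stage, cyber_stage in CHAIN_PARALLEL.items():
    print(f"{military_stage:26} <-> {cyber_stage}")
```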

Automation of the Attack Chain

Automating the attack chain means complex operations can be carried out with significantly less manpower, funding, and time. What used to be done by a large group of mid-level officers can now be done by a large language model in roughly a tenth of the time.

The models are trained on an immense amount of data - satellite imagery and intelligence on deployed friendly and enemy assets - and can quickly output options that a military unit can act upon. Language models can then summarize operational plans and submit the resulting orders to the relevant parties for approval.
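A minimal sketch of that workflow appears below. The `llm` client, the intelligence feed, and the approver are all placeholders rather than any real military API - the point is only the shape of the pipeline: intelligence in, candidate plans out, and explicit human approval before anything executes.

```python
from dataclasses import dataclass


@dataclass
class CourseOfAction:
    summary: str
    approved: bool = False


def draft_courses_of_action(intel_reports: list[str], llm) -> list[CourseOfAction]:
    """Ask a language model (hypothetical `llm` client) to condense raw
    intelligence into candidate operational plans."""
    prompt = ("Summarize the following intelligence reports into candidate "
              "courses of action:\n" + "\n".join(intel_reports))
    # `llm.complete` stands in for whatever model API is actually used;
    # here it is assumed to return one candidate plan per line.
    return [CourseOfAction(summary=line) for line in llm.complete(prompt)]


def submit_for_approval(plan: CourseOfAction, approver) -> CourseOfAction:
    """Nothing is executed until a human approver explicitly signs off."""
    plan.approved = approver.review(plan.summary)
    return plan
```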

Through dialogue with a ChatGPT-style digital assistant, an operator can conduct reconnaissance in various ways - dispatching recon drones, even jamming enemy communications. The model can evaluate the enemy's strengths, weaknesses, and capabilities, deploy Reaper drones for visual intelligence, and suggest counter-strategies when it detects armored units.

If the control signals themselves are jammed, onboard AI can take over the drone and complete its objectives autonomously. This level of autonomy highlights both the promise and the risk of AI-driven military systems.
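A drastically simplified sketch of that handoff is shown below. The drone, link, and mission interfaces are hypothetical; the only point is the decision logic - fly under operator control while the link holds, and fall back to onboard autonomy when it does not.

```python
def control_loop(drone, link, mission):
    """Fly under remote control; fall back to onboard autonomy if jammed.

    `drone`, `link`, and `mission` are hypothetical interfaces used only
    to illustrate the handoff described in the text.
    """
    while not mission.complete():
        if link.healthy():
            # Normal case: the operator's commands drive the drone.
            drone.execute(link.receive_command())
        else:
            # Link jammed or lost: onboard autonomy pursues the last
            # approved objective rather than simply holding position.
            drone.execute(mission.next_autonomous_step())
```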

What Big Tech Is Currently Doing

Big tech companies such as Microsoft, Amazon, Google, and IBM are partnering with firms such as Palantir and C3 AI to integrate data analytics platforms with their cloud services, making this technology available for both military operations and industrial use.

Palantir's Artificial Intelligence Platform (AIP)

Palantir's Artificial Intelligence Platform (AIP) is a state-of-the-art platform that incorporates LLMs similar to OpenAI's GPT-4 into military decision-making processes. AIP's chat-style interface, much like ChatGPT's, lets an operator converse with the language models to make strategic decisions and explore options on the battlefield.

The platform operates on three core principles (a generic sketch follows the list):

  • Deployment to classified systems: The platform is designed to function within highly secure, classified military environments.
  • Control: Maintaining operator oversight at every stage of the decision-making process.
  • Guardrails for trust and compliance: Built-in safeguards to ensure responsible use and regulatory adherence.
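
Palantir has not published AIP's internals, but the control and guardrail principles can be illustrated generically. In the sketch below - where every name is hypothetical - an AI recommendation can never reach execution without first passing automated policy checks and then receiving an explicit, logged human sign-off:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")


def execute_recommendation(recommendation: str, operator, policy_checks) -> bool:
    """Gate an AI recommendation behind guardrails and human control.

    `operator` and `policy_checks` are hypothetical interfaces; the point
    is the ordering - automated checks first, human sign-off second, and
    an audit trail either way.
    """
    for check in policy_checks:
        if not check(recommendation):
            log.info("Blocked by guardrail %s: %s", check.__name__, recommendation)
            return False
    if not operator.approves(recommendation):
        log.info("Rejected by operator: %s", recommendation)
        return False
    log.info("Approved for execution: %s", recommendation)
    return True
```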

As Palantir's example shows, automation can give armies more initiative, but too much automation can have negative consequences. Only if the information being returned is consistently scrutinized by the commander leading the operation and their officers can we retain control over the technology and limit the power AI holds over military assets.

The Ethics and Legality of AI in Warfare

To determine what ethical and legal precautions are needed, we have to examine how these decisions are actually made on the battlefield. The lethal step of the attack chain - the destruction of the target - is where these questions become most pressing.

If a commander blindly and constantly says yes to everything the model outputs, a human may be approving the orders, but that is not really human control. This is particularly concerning when AI is given access to systems controlling weapons of mass destruction.

Neglectful or improperly trained commanders and officers who fail to scrutinize the options presented to them effectively hand control of a military unit over to a computer. Running too many operations this way, and relying too heavily on the software, can result in friendly fire or critical strategic mistakes.

The question of meaningful human oversight is central to the ethical debate. Formal approval alone does not constitute real control - the quality and attentiveness of that approval matters enormously. A rubber-stamp approach to AI-generated military recommendations undermines the very principle of human command authority.
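One way to make the quality of approval concrete - purely as an illustration - is to flag approval patterns that look like rubber-stamping, such as sign-offs issued faster than any genuine review could take. The threshold below is invented for the example:

```python
from statistics import mean


def looks_like_rubber_stamping(review_seconds: list[float],
                               min_plausible_review: float = 30.0) -> bool:
    """Heuristic: if a commander's average sign-off time is shorter than
    any plausible review of an AI-generated order, approval has likely
    become a formality. The 30-second threshold is illustrative only."""
    return bool(review_seconds) and mean(review_seconds) < min_plausible_review


# Three approvals issued within seconds of each other look like a rubber stamp.
print(looks_like_rubber_stamping([4.2, 3.1, 5.0]))  # True
```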

AI and the Space Domain

The application of AI extends beyond terrestrial warfare into the space domain. Military satellites equipped with AI can process satellite imagery and intelligence data at unprecedented speeds, providing real-time strategic assessments that would previously have taken teams of analysts days or weeks to compile.

AI-controlled assets in space represent a new frontier in military capability. The integration of machine learning models with satellite networks enables automated surveillance, communications management, and even orbital asset coordination - all areas where speed and precision are paramount.

The Future of AI in Warfare

AI will revolutionize the landscape of warfare - it's just a matter of time. As with the rifle when it was first invented, we are still in a testing phase, and a great deal of research and development lies ahead.

When the rifle first appeared, there were situations where bows were more effective at achieving certain goals and others where the rifle won out. AI in warfare is at a similar crossroads - powerful in some applications, still unreliable in others.

With time, reliability will increase. And when that time comes, we must maintain oversight and control of the technology. The balance between leveraging AI's efficiency and maintaining meaningful human command authority will determine whether our future on the battlefield remains under human direction - or whether our fate begins to resemble something out of a science fiction franchise.

Conclusion

The growth of AI in warfare is not a distant hypothetical - it is happening now. From Palantir's AIP enabling battlefield decision-making to the automation of traditional attack chains, AI is fundamentally changing how military operations are planned and executed.

The key challenges ahead are not purely technical. They are ethical, legal, and organizational. Ensuring that human oversight remains meaningful, that commanders are properly trained to scrutinize AI outputs, and that guardrails are in place to prevent over-reliance on automated systems - these are the tasks that will define the responsible integration of AI into military operations.

The technology will continue to advance. The question is whether our governance, training, and ethical frameworks will advance with it.