
How the Overuse of AI Is Potentially Misleading End-Users

Artificial Intelligence (AI) is a broad field of computer science focused on creating systems or machines that exhibit capabilities typically associated with human intelligence.

The field comprises multiple sub-fields, such as machine learning, deep learning, natural language processing, computer vision, robotics, and automation. An AI system typically analyses vast amounts of data to extract patterns and draw conclusions, in the hope of recognising those patterns in new data and drawing similar conclusions.
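To make this concrete, here is a deliberately tiny sketch in Python; the data, feature, and labels are invented for illustration only. A model extracts a pattern from a handful of training examples and then reuses it on data it has never seen.

```python
from sklearn.neighbors import KNeighborsClassifier

# Invented toy data: a single feature (object speed in m/s) and a label.
X_train = [[2.0], [3.5], [4.0], [25.0], [30.0], [28.0]]
y_train = ["bird", "bird", "bird", "drone", "drone", "drone"]

# "Learning" here amounts to storing the examples and their labels...
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

# ...and the extracted pattern (slow -> bird, fast -> drone) is applied
# to new observations the model has never seen before.
print(model.predict([[3.0], [27.0]]))  # ['bird' 'drone']
```

Real systems use far larger datasets and far larger models, but the principle is the same: patterns in, patterns out.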

Despite genuine breakthroughs in the field, the term “AI” has become a buzzword, often overused or misused. Marketing and product announcements tend to attach the AI label to software that may rely on basic algorithms or simple rule-based systems, leading end-users to develop unrealistic expectations of the system’s capabilities.

Some of those announcements border on promises of magic, letting end-users assume that AI outputs are always accurate or unbiased. In reality, AI models learn from a training dataset that may or may not represent all possible use cases. This overconfidence in AI outputs can result in blind trust in the system’s recommendations, which can lead to critical errors, especially in high-stakes sectors like defense.

The belief that AI will replace human jobs or eliminate the need for human judgement is misleading. AI systems, with their inherent biases, require a human in the loop and should be seen as augmentation tools rather than automation tools. These systems can rapidly analyse large amounts of data, highlight possible conclusions, and improve situation awareness, but they cannot replace skilled personnel. There are also ethical and legal implications to relying on an AI without proper human verification and accountability, particularly in the defense field.

More recently, the popularisation of generative AI, with tools like ChatGPT or Gemini for text generation and DALL-E or Midjourney for image generation, has spread the idea that these algorithms are capable of real intelligence. We should not forget that, despite their clever use of words and pixels, these AI systems are just generators, incapable of thinking, introspection, planning, or even strategy.

When an image generator creates an image, it doesn’t know in advance what the image will look like. The diffusion process that generates the image is guided by the words in the prompt, much as a drop of colourant diffuses in a glass of water following the laws of physics. Repeat the process with the same prompt and the same initial noise image, and you will obtain exactly the same result, pixel by pixel.
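A minimal sketch of this determinism, using the Hugging Face diffusers library (the checkpoint name and prompt are illustrative): with the same prompt and the same seeded noise, two runs produce an identical image.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any diffusion model shows the same behaviour.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

def generate(seed: int):
    # The seed fixes the initial noise image that diffusion then refines,
    # guided by the prompt, step by deterministic step.
    generator = torch.Generator("cpu").manual_seed(seed)
    return pipe("a harbour at dusk", generator=generator).images[0]

a, b = generate(42), generate(42)
print(list(a.getdata()) == list(b.getdata()))  # True: same noise, same pixels
```

On the same hardware, nothing in the process leaves room for creative choice; change the seed, and you simply get a different deterministic outcome.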

In the same way, a text generator produces the next token (which may or may not be a whole word) based on the previous tokens, including the user prompt; repeat the same process and you will obtain the same token. Commercial products add some randomisation during next-token selection to create an illusion of variation, hoping that this butterfly effect will eventually produce a completely different text and give the illusion of intelligence.
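The mechanism is easy to sketch. Below, hypothetical logits stand in for a model’s scores over a tiny invented vocabulary: greedy selection returns the same token on every run, while temperature sampling adds the randomisation described above.

```python
import numpy as np

vocab = ["the", "contact", "is", "hostile", "friendly"]
logits = np.array([1.2, 2.8, 0.4, 1.9, 1.7])  # hypothetical model scores

# Greedy selection: deterministic, identical output on every run.
print(vocab[int(np.argmax(logits))])  # 'contact'

# Temperature sampling: the randomisation commercial products add.
def sample_next_token(logits, temperature, rng):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                     # softmax over the vocabulary
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng()
print(vocab[sample_next_token(logits, temperature=0.8, rng=rng)])
```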

Text generators are incapable of real reasoning, and because of that, they are also unable to realise that they do not know something. The statistical process of selecting the next token always has an answer, and that answer is never “I do not know”, making the system unreliable when decisions must be based on facts. The system will inevitably invent information, producing the now well-known hallucinations.
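One way to see why: the softmax that turns the model’s scores into a probability distribution always sums to one, so some token is always chosen, however weak the underlying signal (the logits below are again invented for illustration).

```python
import numpy as np

# Nearly flat logits: the model has no strong evidence for any answer.
logits = np.array([0.31, 0.30, 0.29, 0.30])

probs = np.exp(logits) / np.exp(logits).sum()
print(probs.sum())            # 1.0 -- all probability mass is distributed
print(int(np.argmax(probs)))  # a "best" token exists even with no knowledge
```

Unless “I do not know” happens to be the most probable continuation, the system will confidently pick something.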

Using a text generator to plan a defense strategy would be a terrible error.

Many AI solutions in complex fields (like autonomous vehicles or real-time threat detection) are still under development. Companies may market them as field-ready or battle-ready long before they have been robustly tested, potentially putting users at risk.

Finally, there is the challenge of changing environments. AI models often perform poorly when moved from controlled lab settings into real-world scenarios that are dynamic, unpredictable, and data-scarce (e.g., battlefield conditions). Overstating AI’s readiness for these challenging environments can mislead buyers and operators alike.
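A self-contained illustration of this effect, with entirely synthetic data: a classifier that performs nearly perfectly on clean “lab” data degrades sharply on shifted, noisier “field” data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Lab" data: two clean, well-separated clusters.
X_lab = np.vstack([rng.normal(0.0, 0.5, (200, 2)),
                   rng.normal(3.0, 0.5, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X_lab, y)
print("lab accuracy:  ", clf.score(X_lab, y))    # ~1.00

# "Field" data: same classes, but shifted and far noisier.
X_field = np.vstack([rng.normal(1.5, 1.5, (200, 2)),
                     rng.normal(3.0, 1.5, (200, 2))])
print("field accuracy:", clf.score(X_field, y))  # markedly lower
```

The model has not changed; the world has. That gap is exactly what glossy readiness claims tend to hide.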

At MARSS we believe in a user-centred approach, where AI systems serve as augmentation tools: operators gain a clearer situation awareness picture without the danger of misleading or fabricated information.