
AI IN DEFENCE: Artificial intelligence and humans have different risk tolerances when data is scarce …Analysis

AI IN DEFENCE: Machine learning doesn't rely on the pride factor when assessing potential threats
GEOPoliticalMatters.com – NewsTeam, City of London Newsroom (Remote)

Unexpected Findings in AI v Human Activity Comparisons – Analysis

#AI #ArtificialIntelligence #MachineLearning #Defence #Technology

Recent findings by the US Defense Intelligence Agency (DIA) have revealed that, in a direct comparison in which humans and AI assessed enemy activity, artificial intelligence was actually more cautious than the humans about its conclusions, especially when data was limited, turning conventional thinking on its head.
Artificial intelligence and humans have different risk tolerances when data is scarce.
While the recent findings are described as “preliminary”, they offer an important snapshot of how humans and AI will complement one another in critical defence and security scenarios. Terry Busch, the DIA's technical director for the Machine-Assisted Analytic Rapid-Repository System (MARS), revealed the experiment's findings to Defenseone.com.

The initial phase examined the use of available data to determine whether a particular ship was in U.S. waters. “Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in the United States.” The conclusion: humans and machines working from the same available data can reach similar conclusions.

The second phase tested conviction: would humans and machines be equally certain in their conclusions if less data were available? For this phase, the test severed the connection to the Automatic Identification System (AIS), which tracks ships worldwide. Previous thinking held that, with less data available, human analysis would be less certain in its conclusions, but the DIA research found the complete opposite.

“Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that AI, and those responsible for the machine learning, took far less risk — in confidence — than the human analysis did. The machine actually does a better job of lowering its confidence than the humans do…”

The experiment provides a snapshot of how humans and AI will team for important analytical tasks, but it also highlights how human judgement can be clouded when pride is involved.

Author

  • GEÓ NewsTeam

    Broadcasting daily from our Gibraltar Newsroom, our dedicated desk editors and newsdesk team of professional journalists and staff writers work hand in hand with our established network of highly respected correspondents and regional/sector specialist analysts strategically located around the globe (HUMINT). Our individual desk editors all have specific subject authority as journalists, researchers and analysts covering AI, Autonomous Transport, Banking & Finance Technology, Cybersecurity, GeoCrime, Defence 3.0, Energy & Renewables, BioEconomy and Transport & Logistics. Contact the NewsTeam at [email protected]
