Artificial Incompetence: When AI Makes Us Worse, Not Better
By Seweryn Cieslik, Chief Delivery Officer, NM Group
The geospatial industry is moving rapidly toward AI-first workflows. Automated classification, asset extraction, risk scoring, digital twins, near-real-time analytics. On paper, it all looks like progress.
But there is a risk we are not discussing enough.
Not artificial intelligence.
Artificial incompetence.
This happens when organisations begin to rely on automated outputs in operational decision-making without fully understanding, validating, or challenging them. The system still produces numbers and maps, but the people using them no longer know what assumptions sit underneath, how confident the outputs really are, or where the failure points lie.
In utilities, this is not an abstract concern. Geospatial data feeds directly into vegetation management, asset risk, network capacity planning, emergency response and regulatory reporting. When those decisions are driven by AI outputs that cannot be clearly explained or independently assured, the consequences do not stay digital. They become outages, safety incidents, compliance failures and reputational damage.
The most dangerous systems are those that appear good enough.
They are fast.
They are visually convincing.
They are statistically impressive.
Over time, human review steps are removed, exception rates fall, and trust becomes implicit rather than earned. Organisations begin acting on information they can no longer defend.
This is how artificial incompetence takes hold. Not through flawed technology, but through weakened governance, insufficient validation and diminished human supervision.
What Needs to Change?
AI should accelerate good decisions, not replace responsibility for them. Without clear accountability, traceability and independent assurance, automation quietly shifts risk from systems to people, and from the present into the future.
To avoid artificial incompetence, organisations must:
- Maintain human oversight in safety-critical workflows
- Implement transparent validation and confidence scoring
- Preserve audit trails and traceability
- Regularly stress-test models against real-world edge cases
- Separate operational decision authority from model outputs
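The controls above can be sketched in code. The sketch below is purely illustrative: the names, threshold value, and data structures are assumptions for the example, not a description of NM Group's systems. It shows how a confidence gate can keep decision authority separate from model outputs, routing low-confidence results to a human reviewer while recording every decision, not just exceptions, in an audit trail.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative threshold: outputs below this confidence are never
# acted on automatically; they are queued for human review.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class ModelOutput:
    asset_id: str
    classification: str
    confidence: float  # 0.0-1.0, as reported by the model

@dataclass
class Decision:
    output: ModelOutput
    route: str                      # "auto-accept" or "human-review"
    timestamp: str
    reviewer: Optional[str] = None  # filled in when a human signs off

audit_trail: list[Decision] = []

def route_output(output: ModelOutput) -> Decision:
    """Gate a model output on its confidence score and log the decision.

    The model proposes; the gate (and, below the threshold, a person)
    disposes. Every decision is appended to the audit trail so each
    operational action remains traceable back to its inputs.
    """
    route = ("auto-accept" if output.confidence >= CONFIDENCE_THRESHOLD
             else "human-review")
    decision = Decision(
        output=output,
        route=route,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_trail.append(decision)  # record accepts and escalations alike
    return decision

# Usage: a confident classification passes; an uncertain one is escalated.
d1 = route_output(ModelOutput("pole-0417", "vegetation-encroachment", 0.97))
d2 = route_output(ModelOutput("pole-0418", "vegetation-encroachment", 0.62))
```

The design choice worth noting is that the audit trail logs every decision, including the automatic accepts: if exception rates fall over time, the record still shows what the system was trusted with, so trust stays earned rather than implicit.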
At NM Group, we embrace innovation – but not without discipline. Tomorrow’s risks are often the result of today’s decisions. That is why we take a cautious and methodical approach to AI adoption, focused on finding the right balance between human judgment and machine processing, so that trust is built on evidence, not assumption.
