
How Anglo American Cut Fleet Downtime 40% with Predictive Maintenance

Client: Anglo American · Industry: Mining · Duration: 2020–2022

The problem worth solving

Anglo American runs one of the world's largest mining fleets — 100+ heavy vehicles operating across multiple continents. When a haul truck fails unexpectedly 40 minutes from the nearest maintenance depot, three things happen at once: the ore stops moving, the operator's safety profile shifts, and the company burns cash at a rate that makes the Series A of most AI startups look like a rounding error.

The existing maintenance regime was reactive. Vehicles ran until something broke. The organisation knew it needed predictive analytics. What it didn't have was a clear path from “we have sensor data” to “the right person gets the right alert early enough to act.”

What we actually built

Daniel acted as Engineering Manager and Technical Lead on the engagement, scaling the team from 2 to 10 engineers over two years.

1. Real-time telemetry ingestion from 100+ vehicles

Continuous sensor streams — vibration, temperature, pressure, drivetrain telemetry. The first six weeks were mostly data plumbing, not AI.
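
To make “data plumbing” concrete, here is a minimal sketch of the validation step that sits at the front of an ingestion pipeline like this one. The message format, field names, and units are assumptions for illustration, not the production schema:

```python
# Illustrative only: field names and units are assumed, not the real schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SensorReading:
    vehicle_id: str
    sensor: str          # e.g. "vibration", "coolant_temp"
    value: float
    unit: str
    ts: datetime

def parse_reading(raw: dict) -> SensorReading | None:
    """Validate one raw message; return None so bad records are
    quarantined rather than crashing the stream."""
    try:
        return SensorReading(
            vehicle_id=str(raw["vehicle_id"]),
            sensor=str(raw["sensor"]),
            value=float(raw["value"]),
            unit=str(raw.get("unit", "")),
            ts=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
        )
    except (KeyError, TypeError, ValueError):
        return None  # in practice, route to a dead-letter queue

reading = parse_reading({"vehicle_id": "HT-042", "sensor": "vibration",
                         "value": 4.7, "unit": "mm/s", "ts": 1650000000})
```

Most of those six weeks went into exactly this kind of unglamorous work: bad timestamps, missing fields, sensors that report in different units.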

2. ML models for component-level failure prediction

Separate models for separate failure modes — drivetrain, hydraulics, cooling. Each with its own confidence threshold and lookahead window.
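
The per-component threshold and lookahead idea fits in a few lines. The three components are from the engagement; the specific numbers below are invented for illustration:

```python
# Sketch of per-failure-mode configuration; thresholds and windows are
# invented for illustration, not tuned values from the project.
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureModel:
    component: str
    lookahead_hours: int      # how far ahead the model predicts
    alert_threshold: float    # minimum probability before alerting

MODELS = [
    FailureModel("drivetrain", lookahead_hours=72, alert_threshold=0.80),
    FailureModel("hydraulics", lookahead_hours=48, alert_threshold=0.70),
    FailureModel("cooling",    lookahead_hours=24, alert_threshold=0.60),
]

def should_alert(model: FailureModel, failure_prob: float) -> bool:
    """One threshold per failure mode, tuned independently."""
    return failure_prob >= model.alert_threshold
```

Keeping the thresholds separate is what later made the false-positive conversation tractable: each failure mode could be tightened or loosened on its own.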

3. Automated simulation and monitoring layer

A monitoring platform that translated model output into early warnings of component degradation — with severity, expected time-to-failure, and recommended action.
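
A sketch of what that translation layer might look like. The severity bands, cutoffs, and recommended actions here are illustrative placeholders, not the real decision table:

```python
# Hypothetical mapping from model output to an actionable warning.
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    vehicle_id: str
    component: str
    severity: str            # "watch" | "plan" | "act now"
    hours_to_failure: float
    recommended_action: str

def to_alert(vehicle_id: str, component: str,
             failure_prob: float, hours_to_failure: float) -> Alert:
    # Bands below are assumptions for illustration.
    if hours_to_failure < 24 or failure_prob > 0.9:
        severity, action = "act now", "pull vehicle at next shift change"
    elif hours_to_failure < 72:
        severity, action = "plan", "schedule inspection this week"
    else:
        severity, action = "watch", "re-check at next telemetry cycle"
    return Alert(vehicle_id, component, severity, hours_to_failure, action)

alert = to_alert("HT-042", "drivetrain",
                 failure_prob=0.92, hours_to_failure=18)
```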

4. Workflow integration with the maintenance team

Alerts wired directly into the existing maintenance planning workflow. Not a new interface. The people who already planned maintenance got a ranked list of vehicles needing attention, with reasons.
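
The ranked list is, at heart, a sort with an explicit key. This sketch assumes the fields and the ranking order (urgency first, then time-to-failure); none of it is the production code:

```python
# Illustrative worklist ranking; fields and sort key are assumptions.
SEVERITY_ORDER = {"act now": 0, "plan": 1, "watch": 2}

def rank_worklist(alerts: list[dict]) -> list[dict]:
    """Most urgent first; within a band, shortest time-to-failure first."""
    return sorted(alerts, key=lambda a: (SEVERITY_ORDER[a["severity"]],
                                         a["hours_to_failure"]))

worklist = rank_worklist([
    {"vehicle_id": "HT-017", "severity": "plan", "hours_to_failure": 60,
     "reason": "hydraulic pressure drift"},
    {"vehicle_id": "HT-042", "severity": "act now", "hours_to_failure": 12,
     "reason": "drivetrain vibration trending up"},
])
for a in worklist:
    print(f'{a["vehicle_id"]}: {a["severity"]} ({a["reason"]})')
```

The "with reasons" part mattered as much as the ranking: planners will not act on a score they cannot explain to a crew.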

5. Scalable architecture for future asset classes

The same pattern — sensor ingest, component models, alert routing — has since been extended beyond the haul fleet.
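
One way to picture that reuse: keep the pipeline stages fixed and inject the asset-specific pieces. Every interface in this sketch is assumed, not taken from the engagement:

```python
# Sketch of the reuse pattern: a generic three-stage pipeline where only
# the asset-specific functions change. Interfaces are assumptions.
from typing import Callable, Iterable

def pipeline(readings: Iterable[dict],
             parse: Callable[[dict], dict | None],
             predict: Callable[[dict], float],
             alert: Callable[[dict, float], None],
             threshold: float = 0.8) -> None:
    """Same skeleton for haul trucks, pumps, or conveyors."""
    for raw in readings:
        rec = parse(raw)
        if rec is None:
            continue              # quarantine bad records
        prob = predict(rec)
        if prob >= threshold:
            alert(rec, prob)
```

Swapping `parse` and `predict` per asset class is what lets the plumbing survive the move to the second use case.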

Why it worked

Three things, in order of importance.

One named operations sponsor. Every decision about scope, thresholds, and what “good enough” meant flowed through one senior maintenance leader. When the model generated too many false positives in week four, that sponsor made the call on acceptable noise ratios — in a single meeting.

Kill criteria signed off at month 3. We agreed upfront what would kill the project, and the models cleared that bar. But naming the kill criteria matters even when you don't end up killing the project: it keeps scope honest.

Scope discipline. We did not try to prevent every failure mode. We picked a small number of high-frequency, high-cost component failures where the ROI case was clear.

Outcome

  • 40% downtime reduction
  • 100+ vehicles monitored
  • Team scaled from 2 to 10 engineers

Patterns worth stealing

  • Don't start with AI. Start with the maintenance decision. Model the current decision before you model the data.
  • Data plumbing is the project. Three weeks of clean ingestion beats three months of model tuning on messy data.
  • Build for the second use case from day one. Most teams rebuild the plumbing each time. Don't.
  • Name the kill criteria. Projects that cannot fail will zombie.

Get our AI Readiness Checklist

Twelve questions to answer before starting any predictive project. Free.

Thinking about predictive maintenance?

The pattern above is portable

We've applied versions of this architecture across mining, utilities, and industrial manufacturing. 30 minutes to talk through what it would look like in your context.

Book a Discovery Call