Methodology Reference

Plant Profit Leak
Scorecard Methodology

The full methodology behind the Plant Profit Leak Scorecard: five leak zones, fifteen questions, and a written diagnosis within seventy-two hours. Published in full because the value of a diagnostic depends on its rigour being inspectable, not on its inputs being secret.

Zones assessed: 5
Questions: 15
Turnaround: 72 hours
Investment: $149 AUD
Practitioner: Aaron Ridgway

A structured assessment of where industrial plants leak controllable profit.

The Plant Profit Leak Scorecard is a structured assessment of the five operational zones where industrial plants most commonly leak controllable profit. It is delivered as a fifteen-question intake form completed by the operations leader of the site. The completed intake is reviewed by a practitioner with thirteen years of frontline industrial operations experience. The output is a one-page written report delivered within seventy-two hours.

Two versions exist:

  • Free Plant Profit Leak Scorecard. Five-question web form with instant grade and a short report. Used for initial self-assessment.
  • Paid Plant Profit Leak Scorecard ($149). Fifteen-question structured intake with practitioner-reviewed written report, ranked leak hypotheses, and a recommended first investigation. Delivered within seventy-two hours. The $149 fee credits in full toward an Operational Profit Leak Diagnostic if booked within thirty days.

The Scorecard is not a Diagnostic. The distinction matters because they answer different questions. The Scorecard answers "where is my plant most likely leaking profit?" The Diagnostic answers "exactly how much, ranked, with a sequenced recovery plan."

Five leak zones. Three sub-questions each. Equal weighting.

The Scorecard evaluates operational state across five leak zones. These zones were selected because they are the five most common structural causes of controllable margin loss in industrial operations across ports, mining services, and manufacturing.

Zone 01: Throughput

Bottlenecks, flow interruptions, and capacity lost to poor sequencing.

Sub-questions
  • Whether throughput rate is measured against a defined design capacity
  • How frequently throughput is interrupted by upstream or downstream conditions
  • Whether the operation has a quantified figure for one hour of unrealised throughput
Checking: Does leadership know the current capacity utilisation, and is throughput loss a quantified figure or an intuition?

Zone 02: Downtime

Unplanned breakdowns, reactive maintenance share, and reliability drag.

Sub-questions
  • Reactive work as a percentage of weekly maintenance hours
  • Mean time between failures on the site's most critical asset
  • Whether downtime is logged with cause codes that allow Pareto analysis
Checking: Is unplanned downtime a measured operational metric or an accepted cost of doing business?

Zone 03: Labour drag

Waiting, supervision overload, and overtime driven by poor front-end planning.

Sub-questions
  • Schedule compliance rate (work completed as planned, divided by work scheduled)
  • Overtime trend over the previous six months
  • Supervisor time spent on planning interruptions versus execution coordination
Checking: Is overtime a leading indicator of structural issues, or is it being managed as a cost line item?

Zone 04: Rework

Quality failures, handover breakdowns, and repeated work between shifts or teams.

Sub-questions
  • Rate of repeat work orders within thirty days of closure
  • Documented handover process between shifts
  • Frequency of cross-shift escalation events
Checking: Is the operation's first-time-right rate visible, and is rework a tracked operational metric?

Zone 05: Management Rhythm

Daily meetings, KPI cadence, and visible execution discipline.

Sub-questions
  • Whether a daily coordination routine of less than thirty minutes exists
  • Whether weekly schedule lock occurs at a defined point
  • Whether KPI reporting is reviewed by a sponsor with authority to act
Checking: Does the site have a functioning operating rhythm, or is it running on individual heroics?
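Several of the sub-questions above reduce to simple ratios. A minimal sketch of those calculations, for illustration only; the function names and sample figures are mine, not part of the published methodology:

```python
# Illustrative calculations for three metrics named in the sub-questions.
# All names and inputs are hypothetical examples.

def reactive_share(reactive_hours: float, total_maintenance_hours: float) -> float:
    """Reactive work as a percentage of weekly maintenance hours (Zone 02)."""
    return 100.0 * reactive_hours / total_maintenance_hours

def schedule_compliance(completed_as_planned: int, scheduled: int) -> float:
    """Work completed as planned, divided by work scheduled (Zone 03)."""
    return 100.0 * completed_as_planned / scheduled

def mtbf_hours(operating_hours: float, failure_count: int) -> float:
    """Mean time between failures on a critical asset (Zone 02)."""
    return operating_hours / failure_count

print(reactive_share(48, 100))     # 48.0, above the forty per cent reactive-share flag
print(schedule_compliance(40, 80)) # 50.0, the "low fifties" starting point cited later
print(mtbf_hours(720, 3))          # 240.0 hours between failures
```

The point of the sketch is that none of these metrics require new systems: each is a division of two numbers most sites already record.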

Maturity bands. Forty-five-point scale. Grade is calibration, not conclusion.

Each of the fifteen questions has four response options corresponding to operational maturity levels: high, mid-high, mid-low, and low. Responses are scored 3, 2, 1, or 0 respectively, so the maximum composite score is forty-five (fifteen questions at three points each).

The composite score maps to a grade band:

Score range | Grade | Indicative read
38 to 45 | A | Operating well across all zones. Marginal improvements available. Diagnostic likely not necessary.
30 to 37 | B | Strong in most zones, with one or two areas of structural exposure. Diagnostic may be useful but not urgent.
22 to 29 | C | Mixed. Two or three zones showing meaningful loss. Diagnostic recommended to quantify.
14 to 21 | D | Significant exposure across multiple zones. Diagnostic strongly recommended. Internal action without diagnosis is unlikely to hold.
0 to 13 | F | Crisis state. Operation is in survival mode. Immediate Diagnostic recommended along with executive briefing on findings.
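The scoring arithmetic above can be sketched in a few lines. This is an illustration of the band boundaries only; the function names are mine, not part of the methodology:

```python
# Illustrative sketch of the Scorecard scoring arithmetic.
# Band floors are taken from the grade table; function names are hypothetical.

def composite_score(responses: list[int]) -> int:
    """Sum fifteen responses, each scored 0 (low maturity) to 3 (high)."""
    assert len(responses) == 15 and all(r in (0, 1, 2, 3) for r in responses)
    return sum(responses)

def grade(score: int) -> str:
    """Map a composite score (0 to 45) to its grade band."""
    bands = [(38, "A"), (30, "B"), (22, "C"), (14, "D"), (0, "F")]
    for floor, letter in bands:
        if score >= floor:
            return letter

print(grade(composite_score([3] * 15)))  # uniformly high maturity: 45 -> A
print(grade(composite_score([1] * 15)))  # uniformly mid-low maturity: 15 -> D
```

A site answering every question at mid-low maturity lands squarely in the D band, which is consistent with the indicative read: broad structural exposure rather than one isolated weak zone.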

The grade is not the output. The output is the ranked leak hypothesis written by the practitioner reviewing the intake. The grade is calibration, not conclusion.

Five sections. One page. Delivered as a PDF within seventy-two hours.

  1. Overall leak risk rating across the five zones. Visual heat map showing which zones are exposed and which are operating well.
  2. Top three suspected profit leaks, ranked. Each leak names the zone, the specific operational pattern, and a plain-language description of why this leak is structurally likely given the responses.
  3. Likely commercial impact mechanism for each leak. Not a dollar figure (that requires the Diagnostic). The mechanism, expressed as: "this pattern typically loses [type of margin] through [operational consequence] at the rate of [order of magnitude per period]."
  4. Recommended first investigation. A specific, plain-language action the operation can take internally within seven days to verify the largest exposure. If the investigation produces signal, that becomes the basis for either internal action or a Diagnostic engagement. If it produces no signal, the Scorecard hypothesis was wrong and the buyer is no worse off.
  5. Recommendation on whether a full Diagnostic is warranted. A written, defended recommendation with reasoning. Roughly thirty per cent of completed Scorecards conclude with "you do not need a Diagnostic, run the recommended first investigation internally for ninety days and recheck."

What the Scorecard is not.

This section exists because scorecards in this market are often perceived as primarily lead-generation tools. The limits of the methodology are documented directly so the buyer can match the right tool to the right decision.

Not a Diagnostic. It produces hypotheses, not quantified loss figures. The Diagnostic uses your CMMS data, financial records, and structured site interviews to convert hypotheses into validated dollar amounts. The Scorecard gets you to the question "is the loss large enough to warrant the Diagnostic?" It does not answer "how much exactly."

Not a maturity model. Maturity scores are cross-industry benchmarks. The Scorecard is an exposure assessment. It does not tell you how mature you are. It tells you where the structural risk of margin loss is concentrated for your specific operation.

Not a technical reliability assessment. It does not assess equipment condition, asset criticality, or technical failure modes. Those are reliability engineering tasks that require asset-level data the Scorecard does not request.

Not a strategy review. It does not assess portfolio optimisation, capital allocation, or organisational structure. Operations leaders looking for those reviews need a strategy or corporate finance partner.

Not a behavioural audit. It does not assess team culture, leadership quality, or interpersonal dynamics. It assesses operational systems and rhythm. Behavioural and cultural factors are inferred from operational outputs, not measured directly.

If your decision needs any of the above, the Scorecard is the wrong starting point.

Built for a specific class of industrial operation.

The Scorecard works best for sites that match the profile below. If your operation does not match, the methodology will produce weaker results because the five leak zones assume a maintenance function and asset-heavy operating context.

Good fit

The Scorecard works here

  • Manufacturing, processing, mining services, port operations, or heavy industry. Asset-heavy with a maintenance function.
  • Twenty to five hundred people on site. Annual operating spend above five million dollars.
  • Reactive work share above forty per cent, or a backlog suspected to be costing money but not quantified.
  • Operations director or plant manager has authority to act on findings.
  • Leadership is open to an outside read of the operation.
Not the right fit

Use a different starting point

  • Software, SaaS, or office-based operations. No maintenance function to assess.
  • Sites under ten people. The five leak zones do not apply at that scale because the operating rhythm is necessarily ad-hoc.
  • Sites looking for a generic business health check. The Scorecard is operational, not commercial.
  • Sites needing a full implementation plan. The Diagnostic exists for that.
  • Sites where leadership is still negotiating whether external review is acceptable.

Why the methodology takes this shape.

I built the Scorecard methodology around five zones because in thirteen years of operating across industrial sites, I have not seen a controllable profit leak that did not originate in one of these five categories: reactive maintenance, backlog drift, schedule compliance failure, structural overtime growth, and absence of operating rhythm. The same patterns repeat across ports, mining services, manufacturing, and heavy industry.

I built the Scorecard as a paid product (rather than purely free) for two reasons. First, the time required to review an intake and write a defended report is not trivial. Pricing it at $149 sets the right expectation about effort and depth. Second, free assessments get treated like free assessments. Buyers who pay $149 take the report seriously and are more likely to act on the recommended first investigation, which is the operational point of the entire process.

The seventy-two hour turnaround exists because operational decisions move on weekly cadences. A Scorecard that takes three weeks to deliver does not change a buyer's next decision cycle. A Scorecard delivered within seventy-two hours can.

The fee credits toward a Diagnostic engagement because the Diagnostic is the natural progression for buyers whose Scorecard surfaces enough signal. Crediting the fee removes the friction of "did I just pay $149 for nothing if I now want the Diagnostic." For buyers whose Scorecard concludes with "no Diagnostic needed," the $149 was for a written diagnosis they can act on internally, which is the same value either way.

How the Scorecard fits with existing programs.

Most plants run more than one improvement program at any given time. The Scorecard is intentionally an "operating discipline and rhythm" lens. That is its scope and its limit. Here is how it relates to the work that is most often already in flight.

Active CMMS rollout (SAP, Maximo, Pronto, IFS)
Complementary. The Scorecard assesses whether the operating rhythm and discipline around the CMMS are producing the data quality the system requires. Most CMMS rollouts fail not because the system is wrong, but because the discipline around closure rules and intake screening is wrong. The Scorecard surfaces this.
Active lean or TPM program
Complementary. Lean and TPM programs typically assess waste and equipment effectiveness. The Scorecard assesses operating discipline and rhythm. Both layers matter. A lean program with weak operating rhythm produces tools without traction.
Active reliability engineering program
Complementary. Reliability engineering assesses asset health and failure modes. The Scorecard assesses whether the work management discipline around the assets is producing trustworthy intake for the reliability work. A reliability program with low schedule compliance is a reliability program that is not landing.
Active corporate strategy review
Downstream input, not substitute. Strategy reviews look at portfolio choices and capital allocation. The Scorecard surfaces operational exposure that should inform those choices but does not replace the strategic frame.

Operational pattern recognition, not theoretical framework.

The Scorecard methodology was developed by Aaron Ridgway, who has thirteen years of frontline industrial operations experience across ports, mining services, and manufacturing in Queensland, Australia. The five leak zones are the categories where the outcomes below consistently originated.

Ports infrastructure, prior employment

$150K labour value recovered

Cleared 1,000+ overdue work orders across five port sites through systematic backlog triage. Six-month recovery without adding headcount. North Queensland Bulk Ports.

Mining infrastructure, prior employment

$1M+ project recovery

Recovered two parallel mining infrastructure projects trending over budget and over schedule. Both projects returned to on-budget, on-schedule status within thirty days. Walz Group on BHP supply chain.

Process manufacturing, prior employment

$100K bottom-line impact in two months

Cleared twelve months of stagnant maintenance backlog at a $6 million per year glass manufacturer with the same twenty-person team. No headcount additions, no capital spend. Porters Group.

Manufacturing operations, prior employment

Schedule compliance 50% to 80%

Lifted weekly schedule compliance from low fifties to above eighty per cent through schedule lock discipline, intake screening, and a fifteen-minute daily coordination routine.

Run your operation through the Scorecard.

Fifteen-question intake. Practitioner-reviewed report within seventy-two hours. $149 AUD, credited in full toward a Diagnostic if you progress within thirty days. If the Scorecard concludes with "you do not need a Diagnostic," that recommendation is in writing and the report is yours.