The Three Tests for Predictive Intelligence
How to tell real predictive tools from dashboards in costume.
Part three of four in the Foundation Series. Issue 01 named the Reaction Tax. Issue 02 showed you where it hides. This issue gives you the buyer's discipline to stop paying for tools that don't actually fix it.
The Market Is Now Noisy
Almost every business software product launched in the last eighteen months claims to be predictive. "AI-powered." "Forward-looking." "Anticipates churn." "Surfaces risk before it happens." Most of these products are not lying. They are also not, by any operationally useful definition, predictive.
What most of them are is descriptive software with a generative AI summarisation layer bolted on. The underlying engine still tells you what already happened. The new layer tells you the same thing in fluent English. That is not the same as telling you what is about to happen, in time to act on it.
This matters because you are about to spend money on at least one of these tools. Evaluate them on the wrong criteria and you will end up with a stack that feels modern but behaves exactly the way the old one did. Three tests separate the real ones from the rest.
Test 1: The Time Test
Ask the vendor a single question: When the system raises an alert, has the event happened yet, or is the system telling me it is about to happen?
If the answer is the former, you are looking at descriptive software. If it is the latter, you are looking at something predictive. Most tools fail this test the moment you push on it. The alert fires when the metric crosses a threshold, which means the metric has already moved. Useful, but late.
A tool that passes the Time Test fires the alert while the metric is still inside the threshold, on the basis that the trajectory is forecast to cross it. The difference is days or weeks of decision lead time.
Worked Example
Fails the Time Test: A customer success tool that alerts you when engagement drops below a defined level. The problem has already happened.
Passes the Time Test: A tool that alerts you that engagement is forecast to drop below the level inside the next thirty days, based on trajectory. You can still prevent the problem.
Descriptive software fires the alert after the metric moves. Predictive software fires it before.
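To make the distinction concrete, here is a minimal sketch of the two alerting styles. The engagement figures, the threshold, and the four-week horizon are illustrative assumptions, not any vendor's actual model; the point is only that the predictive version fires on the forecast trajectory while the current value is still healthy.

```python
from statistics import linear_regression  # requires Python 3.10+

# Hypothetical weekly engagement scores for one account (most recent last).
# The numbers, threshold, and horizon are invented for illustration.
engagement = [78, 74, 70, 66, 62, 58]   # weekly score, 0-100
THRESHOLD = 50                           # alert level
HORIZON_WEEKS = 4                        # roughly the next thirty days

def descriptive_alert(series, threshold):
    """Fails the Time Test: fires only after the metric has already crossed."""
    return series[-1] < threshold

def predictive_alert(series, threshold, horizon):
    """Passes the Time Test: fires while the current value is still inside the
    threshold, because the fitted trajectory is forecast to cross it within
    the horizon."""
    weeks = list(range(len(series)))
    slope, intercept = linear_regression(weeks, series)
    projected = slope * (weeks[-1] + horizon) + intercept
    return series[-1] >= threshold and projected < threshold

print(descriptive_alert(engagement, THRESHOLD))                # False: nothing has crossed yet
print(predictive_alert(engagement, THRESHOLD, HORIZON_WEEKS))  # True: trajectory crosses within ~4 weeks
```

The descriptive check and the predictive check look at the same data. The only difference is the lead time the second one buys you.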
Test 2: The Action Test
The second test is about what the tool does once it has produced an insight. Ask the vendor: Does the system recommend a specific action, or does it hand me a chart and walk away?
A surprising number of "predictive" products produce a forecast or a risk score and then leave the user to figure out what to do with it. The forecast is technically predictive. The product is operationally useless, because the user still has to do all the cognitive work of translating the prediction into a decision.
A tool that passes the Action Test does the translation for you. It does not just say "this campaign is at risk." It says "pause the bottom-quartile audience segment by Thursday, reallocate the spend to the top-quartile, expected recovery is X." That recommendation may be wrong. It may need to be overridden. But it converts the prediction into a decision, which is the only thing that closes the gap between knowing and doing.
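A rough sketch of what that translation looks like. The segment names, budgets, quartile rule, and recovery estimate are invented for illustration; the point is that the output is a specific, overridable decision rather than a chart.

```python
from statistics import quantiles

# Hypothetical per-segment ROAS forecasts for one campaign.
forecast_roas = {"seg_a": 3.1, "seg_b": 2.4, "seg_c": 1.9, "seg_d": 0.8,
                 "seg_e": 2.8, "seg_f": 1.1, "seg_g": 0.9, "seg_h": 2.2}
daily_spend = {name: 1_000 for name in forecast_roas}

def recommend(forecast, spend):
    """Convert the forecast into a specific, overridable recommendation:
    pause the bottom-quartile segments, move their spend to the top quartile."""
    q1, _, q3 = quantiles(forecast.values(), n=4)
    pause = [s for s, r in forecast.items() if r <= q1]
    boost = [s for s, r in forecast.items() if r >= q3]
    freed = sum(spend[s] for s in pause)
    # Naive linear estimate: assumes ROAS holds at the higher spend level.
    expected_change = (sum(freed / len(boost) * forecast[s] for s in boost)
                       - sum(spend[s] * forecast[s] for s in pause))
    return {"pause": pause, "reallocate_to": boost,
            "freed_daily_spend": freed,
            "expected_revenue_change": round(expected_change)}

print(recommend(forecast_roas, daily_spend))
```

Everything in that output can be challenged by the person reading it. What it cannot be is ignored the way a chart can.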
The reason most tools fail this test is that recommending an action requires the vendor to be willing to be wrong in public. Be wary of any predictive tool that has no specific action recommendation behind its alerts. It usually means the vendor does not trust the prediction enough to bet on it.
Test 3: The Accountability Test
The third test is the hardest one to pass. Ask: Does the system grade itself against what actually happened?
A real predictive system runs a closed loop. It makes a prediction. The world produces an outcome. The system compares the two, surfaces its hit rate to the user, and updates accordingly. You should be able to look at any predictive tool you are evaluating and ask, in plain language, how often its predictions have been right over the last quarter.
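A minimal sketch of that closed loop, again with invented data: the system keeps a log of its own predictions, joins them against outcomes once they are known, and can answer the plain-language question directly.

```python
from datetime import date

# Hypothetical prediction log. Field names and values are illustrative.
log = [
    {"made_on": date(2024, 4, 3),  "predicted": "churn",    "actual": "churn"},
    {"made_on": date(2024, 4, 18), "predicted": "churn",    "actual": "retained"},
    {"made_on": date(2024, 5, 2),  "predicted": "retained", "actual": "retained"},
    {"made_on": date(2024, 5, 30), "predicted": "churn",    "actual": "churn"},
    {"made_on": date(2024, 6, 14), "predicted": "retained", "actual": "churn"},
]

def hit_rate(entries, since):
    """The plain-language answer to 'how often were you right last quarter?':
    score every prediction whose outcome is known, count the matches."""
    scored = [e for e in entries if e["made_on"] >= since and e["actual"] is not None]
    if not scored:
        return None
    hits = sum(1 for e in scored if e["predicted"] == e["actual"])
    return hits / len(scored)

print(f"{hit_rate(log, date(2024, 4, 1)):.0%}")  # 60% over the quarter
```

The number itself matters less than the fact that the system produces it, surfaces it, and feeds it back into the next round of predictions.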
If the vendor cannot give you a clean answer, the system is not learning from itself, which means it is not predictive in any operationally meaningful sense. It is forecasting in a vacuum.
This test is also the most useful one for negotiating. Ask for the accuracy data before you sign. A vendor who is confident in the product will produce it. A vendor who is not will get vague about "the model continues to improve" and "every customer is different." That answer is the answer.
Ask This Week
Run the three tests on every tool already in your stack that has the word "predictive" or "AI-powered" in its marketing. The results are usually uncomfortable. They are also clarifying.
What We Hold Ourselves To
We will not pretend PresciaIQ passes all three tests on every product line. We will say that we apply the same three tests to our own roadmap, and that any product we ship under the IQ family has to clear all three before we put it in front of a customer.
AdsIQ runs the Time Test on campaign performance and the Action Test on spend reallocation. BuildPredictIQ runs the Time Test on supplier delays and schedule risk, and the Accountability Test on its forecasts against project actuals.
If you ever evaluate one of our products and decide we have failed one of these tests, we want to know. The buyer's discipline is the same discipline that keeps us honest.
Coming Next — Issue 04
The Action Gap
Even when you have a tool that passes all three tests, most predictive deployments still fail. They fail at the action gap — the space between the alert and the decision. We will explain why, and what to do about it.
Read Issue 04 →