
WHO DECIDES WHEN AI IS CONFIDENT ENOUGH TO ACT?


A clinician gets an alert: a patient may be septic. But how certain is the AI that just pinged their phone?


Most daily users would agree that AIs are not yet adept at calibrating their uncertainty. And an agent that can't tell you when it's sure - or when it's guessing - needs far more supervision, especially in a hospital, an underwriting team, a pharmacovigilance unit or a trading floor.


So, as systems evolve, where should the calibration for certainty live?


The default assumption is that it lives in the model itself. Better prompts, more context, bigger underlying systems. The model should then learn what it knows – and what it doesn't.


But a new position paper, signed by 30 senior machine learning researchers - many of them highly cited - argues this effort is misplaced.


Decision rigour, they argue, belongs in a layer above the model - an orchestration layer that maintains beliefs across multiple inputs, updates them as evidence arrives and chooses actions with the benefit of greater awareness.
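To make the idea concrete, here is a minimal sketch of such an orchestration layer - not the paper's design or any real clinical system, just an illustration. It maintains a single belief, updates it with Bayes' rule in odds form as evidence arrives, and only acts once a confidence threshold is crossed. All names, priors and likelihood ratios are hypothetical.

```python
def update_belief(prior: float, likelihood_ratio: float) -> float:
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1.0 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)


class Orchestrator:
    """Illustrative layer above the model: holds a calibrated belief,
    folds in evidence as it arrives, and alerts past a threshold."""

    def __init__(self, prior: float = 0.01, threshold: float = 0.8):
        self.belief = prior          # hypothetical base rate
        self.threshold = threshold   # hypothetical action threshold

    def observe(self, likelihood_ratio: float) -> bool:
        """Incorporate one piece of evidence (LR > 1 favours the event);
        return True only when the belief crosses the action threshold."""
        self.belief = update_belief(self.belief, likelihood_ratio)
        return self.belief >= self.threshold


# A stream of three pieces of evidence: no alert fires until the
# accumulated belief passes 0.8 on the third observation.
orc = Orchestrator(prior=0.01, threshold=0.8)
decisions = [orc.observe(lr) for lr in [5.0, 8.0, 12.0]]
```

The point of the sketch is the separation of concerns: the model(s) supply evidence, while the layer above owns the belief state and the decision to act.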


And there's empirical evidence for the thesis. Johns Hopkins's TREWS sepsis system maintains calibrated beliefs across the data stream, decides when a clinician needs an alert and only pings phones when the certainty threshold has been passed.


What's its impact? Across five hospitals and 590,736 patients, in-hospital mortality for sepsis is down 18.7% - with detection occurring up to six hours earlier.


Which hints at what the approach could deliver in countless other scenarios. It's certainly one to watch.


 
 
