Artificial intelligence in the emergency department

Today, digital systems suggest possible diagnoses to doctors. In future, instead of making suggestions themselves, they could be evaluating the doctors' findings.

Wolf Hautz examines the role of digital assistance systems in making diagnoses in emergency departments.

In medical emergencies, clinical pictures are complex and lives are at stake. In these situations, doctors can make mistakes. According to Wolf Hautz of the University Emergency Centre at the Inselspital Bern, roughly one in ten diagnoses in the emergency department proves to be wrong.

Errors are not only caused by stress and a lack of time; they also happen because people are prone to what is referred to as "satisfaction of search". Hautz illustrates this with the example of a paraglider taken to hospital with serious injuries after a failed landing. The X-ray clearly showed fractures of the shoulder blade and ribs. A pneumothorax, a collapsed lung, was also clearly visible. "You expect this after an accident of this kind," explains Hautz. That's why even an experienced team – in emergency medicine or elsewhere – wouldn't carry on searching: they have found what they were looking for. However, the image analysis performed by a clinical decision support system (CDSS) detected COVID-19 in the paraglider's left lung. Artificial intelligence thus gave the medical team valuable evidence of what else the fit young man was suffering from, protecting staff and other patients from infection.

Aside from specific cases such as this, it is not known to what degree the use of CDSS actually improves the quality of medical diagnoses and, by extension, the health of individual patients. Wolf Hautz and his team are examining this question as part of the NRP 77 programme. "Systems like these are producing amazing results today," he remarks. "But is that what doctors and nursing staff on the ground need?"

If development teams do not know what questions the people working at the bedside actually have, they will programme algorithms that fail to address the needs of everyday medical practice, Hautz argues. He therefore proposes not only close collaboration between developers and users but also reversing the concept underlying clinical decision support systems. Today's systems deliver suggestions for possible diagnoses together with the associated probabilities; the doctors then make a decision, frequently trusting the machine. With the roles reversed, the doctors suggest diagnoses and the AI merely assists them in reaching the right decision. This not only ensures that responsibility rests with humans, it also means that they actually feel responsible.

Short presentation by Wolf Hautz at the 2nd Swiss Congress for Tele-Emergency Medicine and Digital Health 2021.