Doctors could soon get a helping hand from an AI-based tool for detecting brain aneurysms—bulges in blood vessels in the brain that can leak or burst, potentially leading to a stroke, brain damage, or death. The AI tool, built around an algorithm called HeadXNet, was developed by researchers at Stanford University.
In tests, the tool improved clinicians’ ability to correctly identify aneurysms, at a level equivalent to finding six more aneurysms in every 100 scans that contain aneurysms. It also improved consensus among the interpreting clinicians. Although the tool’s success in these experiments is promising, the researchers, who have expertise in machine learning, radiology, and neurosurgery, caution that further investigation is needed before it can be used in clinics, given the differences in scanner hardware and imaging protocols across hospitals. The researchers plan to address such variability by working with several hospitals.
Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many sizes and shapes and balloon out at tricky angles. Some register as no more than a blip within the movie-like succession of computerized tomography (CT) angiograms.
Andrew Ng, Kristen Yeom, Christopher Chute, Pranav Rajpurkar, and Allison Park look at a brain scan that was used to train and test their AI-based tool, which helps identify brain aneurysms. (Courtesy: L.A. Cicero)
“Searching for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake,” says Kristen Yeom, associate professor of radiology. “Given inherent challenges of complex neurovascular anatomy and potentially fatal outcomes of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging.”
Every artificially intelligent algorithm has strengths and weaknesses that reflect its programming and training. In the case of this tool, the researchers focused on its ability to identify the presence of aneurysms rather than on detecting their absence. As a result, the tool improved clinicians’ ability to find aneurysms but did not affect their ability to identify scans without them.
This outcome was exactly what the researchers wanted, but the decision ran the risk of making radiologists worse at identifying aneurysm-free scans, so the researchers watched closely for this potential shortcoming.
In this brain scan, the location of an aneurysm is indicated by HeadXNet using a transparent red highlight. (Courtesy: Allison Park)
As the team continues to build AI tools for health care, they hope to better understand how the ways they program and train their algorithms can augment the expertise of clinicians without unintended consequences.
The central challenge in constructing the tool was creating an artificial intelligence algorithm that could accurately process large stacks of 3D images. To train the algorithm, the team outlined every clinically significant aneurysm, manually labelling each voxel (the 3D equivalent of a pixel) to indicate whether or not it was part of an aneurysm.
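To make the voxel-wise labelling concrete, here is a minimal sketch in plain Python. It uses a tiny made-up 2×2×2 "scan" (real CT angiograms contain hundreds of slices), a hand-written binary label per voxel, and hard-coded illustrative probabilities in place of a real 3D network's output; the per-voxel binary cross-entropy shown is a standard segmentation loss, not necessarily the exact loss HeadXNet used.

```python
import math

# Hypothetical miniature example: one binary label per voxel,
# 1 = part of an aneurysm, 0 = background.
labels = [
    [[0, 0], [0, 1]],
    [[0, 1], [1, 1]],
]

# A segmentation model outputs a probability per voxel; these values
# are hard-coded for illustration instead of running a real network.
preds = [
    [[0.1, 0.2], [0.3, 0.9]],
    [[0.2, 0.8], [0.7, 0.95]],
]

def voxelwise_bce(labels, preds):
    """Mean binary cross-entropy over every voxel, the kind of
    per-voxel loss a segmentation network is trained to minimise."""
    flat_y = [y for plane in labels for row in plane for y in row]
    flat_p = [p for plane in preds for row in plane for p in row]
    losses = [-(y * math.log(p) + (1 - y) * math.log(1 - p))
              for y, p in zip(flat_y, flat_p)]
    return sum(losses) / len(losses)

loss = voxelwise_bce(labels, preds)
```

Because every voxel carries its own label, the model is pushed to localise aneurysms, not merely to classify whole scans.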
Following training, the algorithm decides, for each voxel of a scan, whether an aneurysm is present, and its conclusions are overlaid on top of the scan. This representation of the algorithm’s decision lets clinicians still see what the scans look like without HeadXNet’s input. Rather than simply reporting that a scan contains an aneurysm, the overlay brings the exact locations of the aneurysms to the clinician’s attention.
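The overlay idea can be sketched in a few lines. This is an assumed rendering scheme, not HeadXNet's actual code: flagged pixels of a grayscale slice are blended toward red with an illustrative alpha value, while unflagged pixels keep their original intensity so the underlying scan stays visible.

```python
def overlay_red(gray_slice, mask_slice, alpha=0.5):
    """Blend a semi-transparent red highlight into flagged pixels.

    gray_slice: 2D list of intensities in 0..255
    mask_slice: 2D list of 0/1 per-voxel decisions, same shape
    Returns a 2D list of (r, g, b) tuples; unflagged pixels remain
    grey, so the clinician still sees the original scan.
    """
    out = []
    for grow, mrow in zip(gray_slice, mask_slice):
        row = []
        for g, m in zip(grow, mrow):
            if m:  # voxel the algorithm marked as aneurysm
                r = int((1 - alpha) * g + alpha * 255)  # push toward red
                c = int((1 - alpha) * g)                # dim green/blue
                row.append((r, c, c))
            else:
                row.append((g, g, g))
        out.append(row)
    return out

slice_rgb = overlay_red([[100, 100], [100, 100]], [[0, 1], [1, 0]])
```

The key design point survives even in this toy form: the algorithm's decision is rendered as a translucent layer, never a replacement for the image itself.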
Eight clinicians tested the tool by evaluating a set of 115 brain scans for aneurysms, once with the help of the tool and once without. With the tool, clinicians correctly identified more aneurysms, reducing the miss rate, and they were more likely to agree with one another. The tool did not affect how long clinicians took to reach a diagnosis or their ability to correctly identify scans without aneurysms, a safeguard against telling someone they have an aneurysm when they don’t.
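The two quantities being tracked in this kind of reader study are standard: sensitivity (the fraction of aneurysm scans caught) and specificity (the fraction of clean scans correctly called clean). The sketch below uses made-up counts for a toy set of 10 scans, not the study's actual 115-scan data, purely to show the bookkeeping.

```python
def sensitivity_specificity(truth, calls):
    """truth/calls: lists of 0/1 per scan (1 = aneurysm present/called)."""
    tp = sum(1 for t, c in zip(truth, calls) if t == 1 and c == 1)
    tn = sum(1 for t, c in zip(truth, calls) if t == 0 and c == 0)
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, tn / neg

# Toy example: 10 scans, 6 with aneurysms (illustrative numbers only).
truth        = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
without_tool = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # two misses
with_tool    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # one fewer miss

sens_base, spec_base = sensitivity_specificity(truth, without_tool)
sens_tool, spec_tool = sensitivity_specificity(truth, with_tool)
```

In this toy run, sensitivity rises while specificity stays at 1.0, mirroring the study's finding that the tool cut misses without producing false alarms on aneurysm-free scans.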
The machine learning methods at the heart of the tool could be trained to identify other diseases inside and outside the brain. For example, a future version could focus on speeding up the identification of aneurysms after they have burst, saving precious time in an urgent situation. But a considerable hurdle remains in integrating any AI tool into the daily clinical workflow of radiology labs across hospitals.