A single CT scan lasting only a few seconds yields thousands of images. Looking for small abnormalities in such a large amount of data is no simple task for radiologists, but the emerging discipline of biomedical informatics puts difficulties like this one front and center. “This is not about replacing doctors,” explains Daniela Stan Raicu, an associate professor in DePaul’s College of Computing and Digital Media (CDM). “It’s about helping physicians in their diagnoses.”
Biomedical informatics brings a new approach to understanding and managing complex medical information, often through the use of technological processes and algorithms. In DePaul’s Medical Informatics Processing Lab, housed at CDM and co-directed by Raicu and Associate Professor Jacob Furst, student researchers work with other institutions, including Argonne National Laboratory, the University of Chicago and Northwestern University, on cross-disciplinary projects. Thanks to DePaul’s Alliance for Health Sciences, a partnership with Rosalind Franklin University of Medicine and Science, several current projects apply computer science technologies to the process of analyzing medical images. At the recent MedIX Workshop, students shared highlights from this ongoing research.
PhD student Valerie Simonis kicked off the presentations with “Identification of Gene Function Using Image Analysis,” in which she discussed the software program she wrote to track the movements of C. elegans, a worm used in neurological studies. Simonis’ “worm tracker” will provide Dr. Hongkyun Kim, an assistant professor at Rosalind Franklin, with quantitative evidence to support his hypothesis that the worm’s behavior is modulated by the presence or absence of food, and that this behavioral modulation is determined by specific sets of neural circuits. As the worm moves across the surface of a dish, Simonis’ program controls a motorized video camera, ensuring that it stays centered on the worm. This is preferable to having a stationary camera capture everything from a wider vantage point, as Simonis explains with a sports analogy: “When you’re watching the entire arena in a hockey game, how feasible is it that you can see the puck? You have to zoom in to see the action.” Once the images have been collected, Ron Niehaus (CDM MS ’14) will begin analyzing the frames. After extracting features of the worm’s shape, size and motion, Niehaus will apply pattern recognition and data mining techniques to the numerical data. The eventual goal is to generate classification models that will automatically, and accurately, annotate the video recordings of the worm’s movements with the behaviors exhibited.
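The pipeline Niehaus describes, extracting shape, size and motion features from each frame and then feeding the numbers to a classifier, can be sketched in a few lines. The specific features and the nearest-centroid rule below are illustrative stand-ins, not the lab's actual feature set or models:

```python
import numpy as np

def frame_features(mask_prev, mask_curr):
    """Simple shape/motion features from two consecutive binary worm masks:
    area, bounding-box aspect ratio, and centroid displacement (speed)."""
    area = mask_curr.sum()
    ys, xs = np.nonzero(mask_curr)
    height = np.ptp(ys) + 1 if ys.size else 0
    width = np.ptp(xs) + 1 if xs.size else 0
    aspect = height / width if width else 0.0
    # centroid displacement between frames approximates instantaneous speed
    c_curr = np.array([ys.mean(), xs.mean()]) if ys.size else np.zeros(2)
    yp, xp = np.nonzero(mask_prev)
    c_prev = np.array([yp.mean(), xp.mean()]) if yp.size else np.zeros(2)
    speed = np.linalg.norm(c_curr - c_prev)
    return np.array([area, aspect, speed])

def nearest_centroid_label(x, centroids, labels):
    """Assign the label of the closest class centroid -- a minimal stand-in
    for the pattern-recognition models the article does not detail."""
    distances = [np.linalg.norm(x - c) for c in centroids]
    return labels[int(np.argmin(distances))]
```

In practice a per-frame label such as “dwelling” or “roaming” would come from a model trained on annotated recordings; the nearest-centroid rule here just illustrates how the numerical features drive the annotation.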
In the second presentation, student researcher Payam Pourashraf and research assistant Xiaotao Fang (CDM MS ’14) shared details of their project “The Use of Surface Topography to Create New Models of 3D Skeletal Reconstructions.” Their research is focused on pectus excavatum, an abnormal formation of the rib cage and sternum that gives the chest a caved-in appearance. In 0.1-0.5 percent of cases, the condition impairs heart and lung function so severely that surgery is necessary. The severity of the condition is currently diagnosed with CT scans, which have been shown to increase cancer risk, especially in females. Thus, Pourashraf and Fang have been working on a digital scanning process that creates an accurate picture of the torso and cross-sections of the torso, effectively duplicating the results of the CT scan without its attendant risks. The challenge is that the scanner cannot produce a 360-degree image, only images of the front and back of the torso; therefore, the team is using mathematical analyses of 25,000 data points to join the two images together. They are also investigating the use of physical markers, such as reflective labels placed at the top and bottom of the sternum, to indicate the amount of shortening that has taken place.
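One standard way to join two partial scans when corresponding markers are visible in both is a rigid (rotation-plus-translation) fit such as the Kabsch algorithm. The article does not say whether the team uses this exact method, so the sketch below is a generic illustration; note it needs at least three non-collinear marker correspondences:

```python
import numpy as np

def rigid_align(markers_src, markers_dst):
    """Kabsch algorithm: find the rotation R and translation t that best
    map one scan's marker points onto the corresponding markers in the
    other scan. A generic alignment technique, not the team's stated method."""
    cs, cd = markers_src.mean(axis=0), markers_dst.mean(axis=0)
    # cross-covariance of the centered marker sets
    H = (markers_src - cs).T @ (markers_dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # correction factor guarantees a proper rotation (det = +1), no reflection
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# once R and t are known, the full front-scan point cloud can be mapped
# into the back scan's frame: aligned = cloud @ R.T + t
```

The same transform estimated from a handful of reflective markers can then be applied to all of the scan's data points, which is what makes a marker-based approach attractive for stitching the front and back views.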
For the final presentation of the day, Niehaus and Pourashraf introduced their research related to computer-aided diagnosis (CAD) of nodules found in the lungs. Using CAD to assist with analysis can potentially streamline the process of determining whether nodules detected on a patient’s CT scan are malignant, while also reducing costs. Through implementation of CAD, radiologists and other diagnosticians will have more, and potentially better, visuals on which to base diagnoses, a database of similar cases to use for comparison and enhanced information to guide their analyses. The researchers anticipate that creating more efficient and accessible early detection, decreasing false positive results and lessening the need for analyses from multiple radiologists will all lead to a decrease in costs. Analyzing data from the Lung Image Database Consortium, funded by the National Cancer Institute, Niehaus and Pourashraf were able to automatically extract information directly from the CT scan images to characterize the nodules, develop computer programs to classify nodules according to useful semantics, and ascertain the complexity of a case, which will help determine whether analysis from one radiologist is sufficient or whether further review is needed.
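A database of similar cases plus a complexity estimate can be sketched as a nearest-neighbor lookup: if the reference nodules most similar to a new case largely agree on a label, the case is treated as straightforward; if they disagree, it is flagged for additional readers. The features, labels and agreement threshold below are hypothetical, not the researchers' actual complexity measure:

```python
import numpy as np

def knn_labels(query, features, labels, k=5):
    """Return the labels of the k reference nodules whose feature vectors
    are closest to the query nodule."""
    distances = np.linalg.norm(features - query, axis=1)
    return labels[np.argsort(distances)[:k]]

def triage(query, features, labels, k=5, agreement_threshold=0.8):
    """Hypothetical triage rule: report the majority label among similar
    reference cases, and whether their agreement is high enough that a
    single radiologist's reading may suffice."""
    neighbors = knn_labels(query, features, labels, k)
    values, counts = np.unique(neighbors, return_counts=True)
    majority = values[counts.argmax()]
    agreement = counts.max() / k
    return majority, agreement >= agreement_threshold
```

A real system would use many image-derived features per nodule and a validated threshold; the point of the sketch is only that disagreement among similar past cases is one plausible signal of case complexity.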