Ornela Bardhi

Artificial Intelligence (AI) in medical imaging


In recent years, artificial intelligence (AI) has entered many aspects of everyday life, from simple Google searches and language recognition tools on smartphones to algorithms for self-driving cars or for playing games such as Atari or Go. Alongside these applications, we have seen increased use of AI in the medical field. The growing amount of patient data, including patient records, lab reports, and medical images, does not by itself mean an improvement in patient care. This creates challenges: how do we obtain meaningful insights from all the data we have and can gather? How do we make sure a patient's history is understood in its entirety? Do patient records alone help us provide patients with adequate care?

Medical imaging has played an immensely important role in healthcare throughout history. It not only helps in diagnosing disease, planning treatment, and assessing results, but also in preventing illness, usually through screening programs. Aggregated with demographic and other healthcare data, imaging can bring novel insights and help scientists discover breakthrough treatments.

A great deal of research has gone into automating the delivery of medical imaging results. These results still rely on professional radiologists to finalize them; at best, automation helps radiologists be more efficient in their work and deliver results more quickly.

"According to a recent poll, over 50% of global healthcare leaders expect the role of AI in monitoring and diagnosis to expand".

Sardanelli F et al. (2017)

A review of deep learning (DL) applications in medical image analysis published last year, Litjens et al. (2017), shows that AI algorithms will have a significant impact on this field in the near future. The application areas span the brain, eye, chest, digital pathology and microscopy, breast, cardiac, abdomen, musculoskeletal system, and more. These algorithms cover all types of imaging modalities in use today: computed tomography (CT), ultrasound, MRI, X-ray, microscopy, cervigram, photographs, endoscopy/colonoscopy, tomosynthesis (TS), mammography, etc.

When handing radiology over to artificial intelligence sounds appealing

The majority of these applications deal with classification, segmentation, or detection problems, and convolutional neural networks (CNNs), recurrent neural networks (RNNs), auto-encoders (AEs) or stacked auto-encoders (SAEs), restricted Boltzmann machines (RBMs), and deep belief networks are the most commonly used architectures in these settings. A representation of how these architectures work is depicted below, followed by a small code sketch.

Node graphs of 1D representations of architectures
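
To make the classification setting more concrete, here is a minimal sketch of a small CNN image classifier in Keras. The input size, number of classes, and layer widths are hypothetical choices for illustration; this is not one of the architectures used in the papers cited above.

# Minimal CNN classifier sketch in Keras (hypothetical input size and
# class count; for illustration only).
from tensorflow.keras import layers, models

def build_cnn(input_shape=(128, 128, 1), num_classes=2):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),  # learn local image features
        layers.MaxPooling2D(),                                     # downsample spatially
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),           # per-class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()

In a medical imaging setting, the same structure would be trained on labeled scans (for example, lesion versus no lesion), with the input shape matched to the imaging modality.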

My work on colorectal cancer (CRC), Bardhi et al. (2017), shows how we can use DL algorithms to detect colon polyps. Colonoscopy is the preferred technique for both screening and prevention of CRC. The 2012 results from the International Agency for Research on Cancer identified CRC as the cancer with the 4th highest estimated number of deaths worldwide, at around 696 thousand. Usually, CRC begins as a growth on the inner surface of the colon known as a polyp, which, left undiagnosed, can develop into cancer. During a colonoscopy the medical personnel can identify and remove colon polyps, making it a successful preventative procedure. Although colonoscopy has contributed to reductions in CRC mortality and incidence, the miss rate for colon polyps remains high. Computer-aided polyp detection provides a tool to assist colonoscopists in reducing polyp miss rates, and DL algorithms can reduce them even further. In our study we used a CNN-AE model to detect colon polyps.
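
For readers curious what a CNN-AE looks like in code, here is a minimal sketch of a generic convolutional autoencoder in Keras. The input size and layer widths are hypothetical, and the sketch only illustrates the encoder-decoder idea; it is not the exact model, data pipeline, or detection step from our paper.

# Minimal convolutional autoencoder (CNN-AE) sketch in Keras.
# Hypothetical input size and layer widths; illustrates the
# encoder-decoder idea only, not the exact model from Bardhi et al. (2017).
from tensorflow.keras import layers, models

def build_cnn_ae(input_shape=(128, 128, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: compress the colonoscopy frame into a smaller feature map
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D()(x)

    # Decoder: reconstruct the frame from the compressed representation
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D()(x)
    outputs = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(x)

    autoencoder = models.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder

autoencoder = build_cnn_ae()
autoencoder.summary()

The learned encoder features can then feed a classifier or detector that flags frames likely to contain a polyp.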

Besides academia, industry has been working hard in this field too. On April 11 this year, the US Food and Drug Administration (FDA) gave a company permission to market an AI-powered diagnostic device for ophthalmology. The software is designed to detect more than a mild level of diabetic retinopathy, which causes vision loss and affects millions of people. The condition occurs when high blood sugar damages blood vessels in the retina. The program uses an AI algorithm to analyze images of the adult eye taken with a special retinal camera. A doctor uploads the images to a cloud server, and the software then delivers a positive or negative result. Earlier this year the FDA also cleared AI-based clinical decision support software designed to analyze computed tomography (CT) results and notify providers of a potential stroke in their patients. Other devices and software are expected to be approved soon.
