Radiology, magnetic resonance imaging, echocardiography, tomography: these are just a few areas of medicine where imaging is absolutely essential. Modern medicine simply couldn’t exist without images. But medical imaging is far more complicated than just “taking pictures”. It entails an entire medical image analysis process. And this is where deep learning and artificial intelligence come in. How can we benefit from deep learning for medical image analysis?

Medical images, created with various types of specialized equipment, allow doctors to observe the structures and tissues of the human body and the course of a number of physiological processes. The proper analysis of these images, and the interpretation of the information extracted from them, contributes significantly to making the correct diagnosis. Medical image analysis is particularly crucial for health issues such as cancers, degenerations, cysts, abscesses, gangrene, and inflammations.

There are several techniques and methods in use for medical imaging, including radiography (X-ray imaging), magnetic resonance imaging, nuclear medicine, ultrasonography, elastography, echocardiography, optoacoustic imaging, and scanning laser ophthalmoscopy. IBM estimates that medical images are the largest and fastest-growing data source in the entire healthcare industry, accounting for at least 90 percent of all medical data[1]. No wonder data scientists are thinking of ways to speed up the image analysis process and make it more detailed.

Think of a typical radiologist working in a hospital. They receive hundreds of images to analyze daily, and then they have to get the diagnosis ready. Not only is this time-consuming but, as the doctor’s fatigue builds up, the probability of making an error rises drastically as well. Just as in the drug development process, human health, and sometimes even life, is at stake. This is why scientists are trying to harness deep learning for medical image analysis: to take this burden off physicians’ shoulders, at least partly.


DICOM clears the way for medical image computing

The abbreviation DICOM stands for Digital Imaging and Communications in Medicine. It is the global standard for the communication and management of medical imaging information and related data[2], used by medical facilities worldwide. It’s most commonly used for storing and transmitting medical images, and it enables the integration of medical imaging devices such as scanners, servers, workstations, printers, network hardware, and picture archiving and communication systems (PACS).

Interestingly, DICOM is used in image processing for almost every type of medical image, including CT, MRI, ultrasound, X-rays, fluoroscopy, angiography, mammography, breast tomosynthesis, PET, SPECT, endoscopy, microscopy, and OCT. A DICOM file also contains the patient’s data, so it is an excellent source of knowledge about each patient’s condition and disease.

The DICOM standard is the work of the American College of Radiology and the National Electrical Manufacturers Association. It was first introduced in 1985 and has been developed ever since.

Every DICOM dataset is divided into two parts:

  • one containing information about the file (DICOM Meta Information Header),
  • one containing the data for a single Service-Object Pair Instance (DICOM Data Set).

Within datasets, there are data elements, the basic units of data. Each data element consists of the following parts:

  • Data Element Tag: a unique identifier composed of two numbers, a group number and an element number
  • Value Representation: the data type, enabling correct interpretation
  • Value Length: the size of the element’s value
  • Value Field (optional): the field with the actual data
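To make this layout concrete, the sketch below hand-encodes a single data element in the Explicit VR Little Endian transfer syntax using only the standard library. The `encode_data_element` helper and the sample value are assumptions for illustration; in practice you would read and write DICOM files with a dedicated library such as pydicom rather than packing bytes yourself.

```python
import struct

def encode_data_element(group, element, vr, value):
    """Encode one DICOM data element (Explicit VR Little Endian, short-form VR).

    Layout: tag (2+2 bytes, little endian) | VR (2 ASCII chars)
            | value length (2 bytes) | value field.
    Hypothetical helper for illustration only.
    """
    if len(value) % 2:            # DICOM value fields are padded to even length
        value += b" "
    return (
        struct.pack("<HH", group, element)   # Data Element Tag
        + vr.encode("ascii")                 # Value Representation
        + struct.pack("<H", len(value))      # Value Length
        + value                              # Value Field
    )

# Patient's Name lives at tag (0010,0010) and uses the "PN" VR
elem = encode_data_element(0x0010, 0x0010, "PN", b"DOE^JOHN")
print(elem.hex())
```

Reading the bytes back in the same order recovers the tag, type, length, and value, which is exactly how a DICOM parser walks through a dataset element by element.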

DICOM is crucial in our deliberations: it standardizes the whole of medical imaging, and that is what makes deep learning in medical image analysis possible. Standardizing the entire imaging process is an enormous assignment, however, so DICOM is developed by around 30 different working groups (WGs), each responsible for a different field; WG-07, for instance, is accountable for radiotherapy. In short, DICOM has opened the door for artificial intelligence and deep learning in medical image analysis.


Deep Learning for medical image analysis

Deep learning is a refinement of artificial neural networks, with more layers that permit higher levels of abstraction and improved predictions from data. Currently, it is emerging as the leading machine-learning tool in the general imaging and computer vision domains[3].

You may find it interesting – Computer Vision in Healthcare.

In medical imaging, accurate diagnosis depends on both image acquisition and image interpretation. Image acquisition has improved substantially over recent years, with devices acquiring data at faster rates and increased resolution. The image interpretation process, however, has only recently begun to benefit from computer technology. Deep learning and medical image computing are about to change this situation. For instance, convolutional neural networks (CNNs) have already proven to be a powerful tool in deep learning for medical image analysis: CNNs automatically learn even the most complicated abstractions directly from medical images.
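To give a feel for what a CNN layer actually computes, the numpy sketch below applies a hand-written edge-detection kernel to a tiny synthetic “scan” with a bright square in the middle. The image, kernel, and `conv2d` helper are all illustrative assumptions; in a real CNN the kernels are not hand-crafted but learned from labeled medical images, and many such filters run in parallel per layer.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, the basic building block of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 6x6 "scan": a bright square (lesion-like) on a dark background
scan = np.zeros((6, 6))
scan[2:4, 2:4] = 1.0

# A vertical edge detector; a trained CNN would learn many such filters
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

feature_map = conv2d(scan, kernel)
print(feature_map.shape)  # (5, 5)
```

The resulting feature map is strongly positive on the square’s right edge and strongly negative on its left edge, which is precisely the kind of low-level feature the early layers of a CNN extract before deeper layers combine them into higher abstractions.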

Deep learning CNNs and machine learning image analysis are key enablers of improved diagnosis: they facilitate the identification of findings that require treatment and support the physician’s workflow. Deep learning algorithms are faster, more accurate and, what’s particularly essential, unlike human doctors, tireless. That makes them much more efficient than human doctors or radiologists: they can verify hundreds of pictures every day without any trace of fatigue. Such a solution becomes necessary when you realize that a single patient’s image collection can contain up to 250 GB of data!

One of the companies working on implementing deep learning in medical image computing is IBM[4]. IBM researchers are applying deep learning to overcome some of the technical challenges that AI faces when analyzing X-rays and other medical images. They work on issues such as teaching algorithms from incomplete data and recognizing obscure abnormalities, which are still a challenge for AI algorithms.


Machine Learning in image analysis

Deep learning and machine learning are related technologies. Generally speaking, machine learning is a subset of artificial intelligence. It is associated with creating AI algorithms that can change (learn) by themselves, without any human intervention, to get the desired result. Deep learning, in turn, is a subset of machine learning. It is a more advanced technology, in which algorithms are created and function on many levels, each providing a different interpretation of the data. This network of algorithms is called an artificial neural network, and it resembles the neural connections that exist in the human brain.

Try reading – Artificial Intelligence and Big Data in Healthcare.

You can teach machine learning algorithms by marking medical images (for instance, images representing healthy tissue and tissue attacked by cancer) in order to determine the characteristics and differences of both tissues. Such data is sufficient for the further automatic training of the machine learning algorithm, so the role of the machine learning specialist ends here. When it comes to deep learning, the situation looks a bit different. In this case, no manually defined features are needed: the medical image is sent through successive levels of a neural network, and each level hierarchically determines specific features of the image[5]. That’s why we say that deep learning is just a more advanced version of machine learning.
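The classic machine learning workflow described above can be sketched in a few lines: extract a hand-crafted feature from labeled tissue patches, then fit a simple decision rule. Everything here is synthetic and purely illustrative; the intensity distributions, the mean-intensity feature, and the threshold rule are assumptions standing in for real annotated scans and a real classifier.

```python
import numpy as np

# Toy labeled "tissue patches": higher mean intensity stands in for the
# marked abnormal tissue (entirely synthetic data for illustration).
rng = np.random.default_rng(0)
healthy = rng.normal(0.3, 0.05, size=(50, 8, 8))   # label 0
cancer = rng.normal(0.7, 0.05, size=(50, 8, 8))    # label 1

def extract_feature(patch):
    """Hand-crafted feature chosen by the specialist: mean patch intensity."""
    return patch.mean()

features = np.array([extract_feature(p)
                     for p in np.concatenate([healthy, cancer])])
labels = np.array([0] * 50 + [1] * 50)

# "Training": place the decision threshold midway between the class means
threshold = (features[labels == 0].mean() + features[labels == 1].mean()) / 2
predictions = (features > threshold).astype(int)
accuracy = (predictions == labels).mean()
print(f"accuracy: {accuracy:.2f}")
```

The key point is the `extract_feature` step: in classic machine learning a human decides which feature matters, whereas a deep learning model would consume the raw patches and discover its own features layer by layer.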


Application from the Polish market

One of the companies applying AI to medical image analysis is Netrix SA, with its project called e-Medicus. e-Medicus is a system, still under development, for the registration and analysis of data originating from X-ray images and for the classification of cancer cells.

The system will support the activities of healthcare units that perform radiological examinations, helping them record and analyze X-ray images and the results of computed tomography tests. The analysis of radiological images performed by e-Medicus will comprise their processing, image identification, and segmentation. Thanks to this analysis, it will be possible to track changes in the patient’s condition over time, and the image analysis process itself will be automated.

The e-Medicus IT system consists of computational intelligence algorithms, Internet agents, segmentation algorithms, a framework, data warehouses, and a visualization system. e-Medicus will register and analyze medical images in the form of numerical functions, which will allow the identification of any changes in the patient’s state and the appropriate classification of data. The visual presentation of the disease development process will be done using level sets. The project will develop and apply new procedures and algorithms in the field of theoretical computer science and numerical mathematics, using the advantages of neural networks, genetic algorithms, and semantic networks. The company is still working on this project; the prototype hasn’t been shown yet.


The challenges of deep learning

We can expect this technology to grow rapidly. Nonetheless, there are still several unsolved challenges that the clinical application of deep learning in medical image analysis faces.

  • The first obstacle is obtaining patients’ data. There are approximately 8 billion people in the world, and a significant part of them still do not have access to primary healthcare. Consequently, data scientists cannot get data from as many patients as they need to train a comprehensive deep learning model.
  • The second problem is data quality. Even though scientists have made an enormous step forward, healthcare data is highly heterogeneous and incomplete. Training a good deep learning model on such varied datasets is quite a challenge.
  • The third problem is complexity. Diseases are highly dissimilar, and for most of them there is still no complete knowledge of their causes and how they progress. What’s worse, diseases keep progressing and changing over time in a non-deterministic way.

As a result, the 100% perfect deep learning image analysis model still doesn’t exist and won’t for many years to come. Does this mean that we should throw in the towel? By no means! Human life is at stake! And many great scientists who tirelessly work on the deep learning algorithms are keenly aware of this fact.

Imagine machine learning/deep learning image analysis that is 100% accurate. Maybe in the future there will be an AI algorithm that can evaluate every pixel of a contrast-enhanced computed tomography scan and correlate it with all available demographic and clinical data absorbed from the electronic medical record. That would be the next great shift in modern medicine.

Are you keen to talk about implementing deep learning or artificial intelligence in your business? You’re in the right place! Just drop us a line, and let’s chat about the challenges you face.


[1] https://www.hcinnovationgroup.com/population-health-management/news/13027814/ibm-unveils-watsonpowered-imaging-solutions-at-rsna
[2] https://en.wikipedia.org/wiki/DICOM
[3] https://ieeexplore.ieee.org/abstract/document/7463094
[4] https://www.ibm.com/blogs/research/2018/09/medical-image-analysis/
[5] https://parsers.me/deep-learning-machine-learning-whats-the-difference/
