Converting Several DICOM Files to a 3D PyTorch Tensor

I’m working on a project involving brain scans in DICOM format. They are essentially slices of the brain which, if stacked on one another, would represent a 3D model of the entire brain. It would actually be a bit more than just a 3D model, since it contains voxel intensity information throughout the entire volume.

I wanted to create a 3d tensor of the entire brain, something that could be fed into a PyTorch model without losing or compressing any data.

My first issue was file naming. The slices needed to be stacked in the correct order so that the tensor would accurately represent the scan, but the naming scheme made that awkward. The first scan was named “Image-1.dcm”, the tenth scan was “Image-10.dcm”, and the 100th scan was “Image-100.dcm”. This means that when sorted lexicographically, the images would be arranged in the order [‘Image-1.dcm’, ‘Image-10.dcm’, ‘Image-11.dcm’, … ]. The digits of the filenames needed to be padded with zeros so they would sort correctly, e.g. “Image-1.dcm” needed to be renamed to “Image-001.dcm”. I needed to be able to rename recursively, since the actual dataset contained gigabytes of data. After some research and a quick Python script, that task was complete.

#!/usr/bin/env python3
import sys, os, re

for file in sys.argv[1:]:
    if __file__ in file: continue  # skip the script itself
    words = re.split('Image-', file)
    # zero-pad the trailing "N.dcm" part, e.g. "1.dcm" -> "001.dcm"
    words[-1] = words[-1].zfill(7)
    new_name = words[0] + "Image-" + words[-1]
    os.rename(file, new_name)

Then, to apply it recursively (assuming the script above is saved as one directory up):

find . -name "*Image-*.dcm" -exec python3 ../ {} +

First, I load the libraries.

import torch
import numpy as np
import fastai
from fastai.medical.imaging import *
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning) # fastai triggers lots of deprecation warnings

Load a list of files, sort them, then retrieve the first file for testing.

files = get_dicom_files("train/00000/FLAIR")
sorted_files = sorted(files)
file = sorted_files[0]

For each individual DICOM file, you can retrieve an image array by using dcmread, a method from fastai’s medical imaging library (which itself uses pydicom.dcmread). You read the file, convert it to a uint16 array, then, because PyTorch doesn’t support that format, convert it to a numpy int array.

arr = file.dcmread().to_uint16().astype(int)
(512, 512)

What you now have is a numpy array containing the data of the single DICOM image. This can now easily be converted to a tensor.

t = torch.tensor(arr)
tensor([[0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0]])

From here, it’s just a matter of iterating through the files, creating an array of numpy arrays, converting that to a 3d numpy array, then converting it to a tensor. There may be a more efficient way to do it, but this is what I found could be done.

out = []
for file in sorted(files):
    arr = file.dcmread().to_uint16().astype(int)

volume = np.array(out)
tens = torch.tensor(volume)
torch.Size([400, 512, 512])

And there you have it. A tensor containing the slices of an entire brain scan that can then be fed into a PyTorch model.
