Description
Related to Model/Framework(s)
(PyTorch/Segmentation/nnUNet)
Describe the bug
In your nnUNet implementation, both the BraTS 2021 and BraTS 2022 notebooks state that a 5th channel is added to distinguish background from foreground voxels. Quoting the notebook's preprocessing section:
"To distinguish between background voxels and normalized voxels which have values close to zero, we add an input channel with one-hot encoding for foreground voxels and stacked with the input data. As a result, each example has 5 channels."
However, reviewing your preprocessor.py code, I found the following snippet at lines 114 to 121:
```python
if self.args.ohe:
    mask = np.ones(image.shape[1:], dtype=np.float32)
    for i in range(image.shape[0]):
        zeros = np.where(image[i] <= 0)
        mask[zeros] *= 0.0
    image = self.normalize_intensity(image).astype(np.float32)
    mask = np.expand_dims(mask, 0)
    image = np.concatenate([image, mask])
```
The problem I see is the line `zeros = np.where(image[i] <= 0)`.

Why `<=`? This also marks every voxel with a negative intensity as background in the mask, and the images contain many negative values after subtracting the mean and dividing by the standard deviation. My suggestion is to change it to `zeros = np.where(image[i] == 0)`, which does what was originally intended. I have also attached images of the OHE channel before and after my modification, together with the original input.
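A minimal, self-contained sketch of the difference (the toy array and helper below are illustrative, not from the repository; only the mask loop mirrors the quoted preprocessor code):

```python
import numpy as np

# Toy 2-channel "image": values include negatives (as in z-scored MRI
# data) and exact zeros marking true background voxels.
image = np.array([
    [[0.0, -1.3, 0.8],
     [0.0,  2.1, -0.5]],
    [[0.0, -0.7, 1.2],
     [0.0,  0.4, -2.0]],
], dtype=np.float32)

def foreground_mask(image, use_leq):
    # Same loop structure as the preprocessor: a voxel is background
    # only if it is flagged in every channel.
    mask = np.ones(image.shape[1:], dtype=np.float32)
    for i in range(image.shape[0]):
        zeros = np.where(image[i] <= 0) if use_leq else np.where(image[i] == 0)
        mask[zeros] *= 0.0
    return mask

mask_leq = foreground_mask(image, use_leq=True)   # current code: <= 0
mask_eq = foreground_mask(image, use_leq=False)   # suggested fix: == 0

print(mask_leq)  # [[0. 0. 1.] [0. 1. 0.]] -- negative foreground voxels wrongly masked
print(mask_eq)   # [[0. 1. 1.] [0. 1. 1.]] -- only exact-zero background masked
```

With `<= 0`, the two voxels that are negative in both channels are incorrectly treated as background; with `== 0`, only the column of exact zeros is.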
To Reproduce
Steps to reproduce the behavior:
Run either the BraTS 2021 or BraTS 2022 notebook.
Expected behavior
I attached images of the correct behavior, in which foreground voxels are set to 1 and background voxels to 0.
Images of the case:
- Input image: example BraTS2021_00000, slice 85
- The correct behavior after my suggested fix
- The incorrect output of the existing code