Description
Summary
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 68, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 480, in run
result = self._run_interface(execute=True)
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 564, in _run_interface
return self._run_command(execute)
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 644, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/base/core.py", line 521, in run
runtime = self._run_interface(runtime)
File "/opt/conda/lib/python3.6/site-packages/nipype/interfaces/utility/wrappers.py", line 144, in _run_interface
out = function_handle(**args)
File "<string>", line 13, in extract_ts_coords
File "/opt/conda/lib/python3.6/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 275, in fit_transform
return self.fit().transform(imgs, confounds=confounds)
File "/opt/conda/lib/python3.6/site-packages/nilearn/input_data/base_masker.py", line 176, in transform
return self.transform_single_imgs(imgs, confounds)
File "/opt/conda/lib/python3.6/site-packages/nilearn/input_data/nifti_spheres_masker.py", line 321, in transform_single_imgs
verbose=self.verbose)
File "/opt/conda/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 483, in __call__
return self._cached_call(args, kwargs)[0]
File "/opt/conda/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 430, in _cached_call
out, metadata = self.call(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/sklearn/externals/joblib/memory.py", line 675, in call
output = self.func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/nilearn/input_data/base_masker.py", line 66, in filter_and_extract
imgs = _utils.check_niimg(imgs, atleast_4d=True, ensure_ndim=4)
File "/opt/conda/lib/python3.6/site-packages/nilearn/_utils/niimg_conversions.py", line 271, in check_niimg
niimg = load_niimg(niimg, dtype=dtype)
File "/opt/conda/lib/python3.6/site-packages/nilearn/_utils/niimg.py", line 116, in load_niimg
dtype = _get_target_dtype(niimg.get_data().dtype, dtype)
File "/opt/conda/lib/python3.6/site-packages/nibabel/dataobj_images.py", line 202, in get_data
data = np.asanyarray(self._dataobj)
File "/opt/conda/lib/python3.6/site-packages/numpy/core/numeric.py", line 544, in asanyarray
return array(a, dtype, copy=False, order=order, subok=True)
File "/opt/conda/lib/python3.6/site-packages/nibabel/arrayproxy.py", line 293, in __array__
raw_data = self.get_unscaled()
File "/opt/conda/lib/python3.6/site-packages/nibabel/arrayproxy.py", line 288, in get_unscaled
mmap=self._mmap)
File "/opt/conda/lib/python3.6/site-packages/nibabel/volumeutils.py", line 523, in array_from_file
data_bytes = bytearray(n_bytes)
MemoryError
and relatedly:
Node: meta.wb_functional_connectometry.extract_ts_wb_coords_node
Interface: nipype.interfaces.utility.wrappers.Function
Traceback:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/plugins/base.py", line 338, in _local_hash_check
cached, updated = self.procs[jobid].is_cached()
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 303, in is_cached
hashed_inputs, hashvalue = self._get_hashval()
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 497, in _get_hashval
self._get_inputs()
File "/opt/conda/lib/python3.6/site-packages/nipype/pipeline/engine/nodes.py", line 524, in _get_inputs
results = loadpkl(results_file)
File "/opt/conda/lib/python3.6/site-packages/nipype/utils/filemanip.py", line 646, in loadpkl
unpkl = pickle.load(pkl_file)
MemoryError
which happens downstream and produces a message asking me to report it as a nipype issue.
Actual behavior
A MemoryError is raised in extract_ts_wb_coords_node, and again downstream when dependent nodes try to load its results.
Expected behavior
Timeseries extraction completes without running out of memory.
How to replicate the behavior
It is complex to replicate completely, but the error occurs when extract_ts_wb_node is parallelized across ~20 or more threads. I have also attempted to explicitly restrict memory usage on the node, with no luck:
extract_ts_wb_node.interface.mem_gb = 2
extract_ts_wb_node.interface.num_threads = 1
or even:
extract_ts_wb_node.interface.mem_gb = 20
extract_ts_wb_node.interface.num_threads = 2
Neither setting changes anything; a node-level sketch of what I would expect to work is below.
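For reference, here is a minimal sketch of the node-level variant I would expect to work, based on my reading of the nipype 1.0.x resource-scheduling docs (the Function body and resource numbers are placeholders, and as I understand it mem_gb/n_procs are only scheduling hints for MultiProc, not hard per-process caps):

import nipype.pipeline.engine as pe
import nipype.interfaces.utility as niu

def dummy_func(x):
    # placeholder body standing in for the real extraction function
    return x

# Resource hints go on the Node itself, not on the wrapped interface
extract_ts_wb_node = pe.Node(
    niu.Function(input_names=['x'], output_names=['x'], function=dummy_func),
    name='extract_ts_wb_coords_node',
    mem_gb=4,    # estimated memory for this node (GB)
    n_procs=1)   # threads this node is assumed to use

# and the MultiProc scheduler itself can be capped when running the workflow:
# wf.run(plugin='MultiProc', plugin_args={'n_procs': 8, 'memory_gb': 16})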
Here is the relevant masker construction from the node's function (two versions: one with caching and one without; both cause the workflow to break):
Version A (with caching):
spheres_masker = input_data.NiftiSpheresMasker(seeds=coords, radius=float(node_size), allow_overlap=True, standardize=True, verbose=1, memory="%s%s" % ('SpheresMasker_cache_', str(ID)), memory_level=2)

Version B (without caching):
spheres_masker = input_data.NiftiSpheresMasker(seeds=coords, radius=float(node_size), allow_overlap=True, standardize=True, verbose=1)
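For context, here is a minimal, self-contained sketch of the extraction pattern those two lines come from (the coordinates, radius, subject ID, and file names below are placeholders, not the real PyNets inputs):

from nilearn import input_data

coords = [(0, -52, 18), (46, -68, 32)]   # placeholder MNI coordinates
node_size = 4                            # placeholder sphere radius in mm
ID = 'sub-01'                            # placeholder subject ID

spheres_masker = input_data.NiftiSpheresMasker(
    seeds=coords, radius=float(node_size), allow_overlap=True,
    standardize=True, verbose=1,
    memory="%s%s" % ('SpheresMasker_cache_', str(ID)), memory_level=2)

# fit_transform forces nibabel to materialize the full 4D array in RAM,
# which is where the bytearray(n_bytes) MemoryError in the traceback originates:
# ts_within_nodes = spheres_masker.fit_transform('func_preproc.nii.gz',
#                                                confounds='confounds.csv')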
Script/Workflow details
Meta-workflow ('meta') is triggered as a nested workflow in the imp_est node of single_subject_wf:
https://github.com/dPys/PyNets/blob/master/pynets/pynets_run.py
which further calls the wb_functional_connectometry workflow from:
https://github.com/dPys/PyNets/blob/master/pynets/workflows.py
and the whole thing breaks when it hits extract_ts_wb_node (line 110) of:
https://github.com/dPys/PyNets/blob/master/pynets/graphestimation.py
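Roughly, the failing node is a nipype Function node wrapping that extraction. A simplified sketch of the wiring as I understand it (names, signature, and body here are illustrative, not the exact PyNets code; see graphestimation.py line 110 for the real node):

import nipype.pipeline.engine as pe
import nipype.interfaces.utility as niu

def extract_ts_coords(func_file, coords, node_size, conf):
    # imports must live inside the function because nipype serializes it
    from nilearn import input_data
    spheres_masker = input_data.NiftiSpheresMasker(
        seeds=coords, radius=float(node_size), allow_overlap=True,
        standardize=True, verbose=1)
    ts_within_nodes = spheres_masker.fit_transform(func_file, confounds=conf)
    return ts_within_nodes

extract_ts_wb_node = pe.Node(
    niu.Function(
        input_names=['func_file', 'coords', 'node_size', 'conf'],
        output_names=['ts_within_nodes'],
        function=extract_ts_coords),
    name='extract_ts_wb_coords_node')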
Platform details:
{'pkg_path': '/opt/conda/lib/python3.6/site-packages/nipype', 'commit_source': 'installation', 'commit_hash': 'fed0bd94f', 'nipype_version': '1.0.4', 'sys_version': '3.6.5 | packaged by conda-forge | (default, Apr 6 2018, 13:39:56) \n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]', 'sys_executable': '/opt/conda/bin/python', 'sys_platform': 'linux', 'numpy_version': '1.14.3', 'scipy_version': '1.1.0', 'networkx_version': '2.1', 'nibabel_version': '2.3.0', 'traits_version': '4.6.0'}
Nipype version: 1.0.4
Execution environment
- Container: Singularity container
- My python environment inside container: python3.6
- My python environment outside container: None
I've tried logging with the 'callback' logger, but it doesn't log anything, presumably because this bug occurs several layers deep; a sketch of the hookup I used is below. Please help.
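This is the kind of callback-logger hookup I mean, following my reading of the nipype resource-monitoring docs (log path and plugin numbers are placeholders); even with this in place, nothing is written before the crash:

import logging
from nipype.utils.profiler import log_nodes_cb

logger = logging.getLogger('callback')
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.FileHandler('run_stats.log'))

# meta.run(plugin='MultiProc',
#          plugin_args={'n_procs': 8, 'memory_gb': 16,
#                       'status_callback': log_nodes_cb})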