This section will introduce you to all of the key players in Nipype: the basic concepts that you need to learn to fully understand and appreciate Nipype. Once you understand this section, you will know all that you need to know to create any kind of Nipype workflow.
diff --git a/notebooks/advanced_aws.ipynb b/notebooks/advanced_aws.ipynb
new file mode 100644
index 0000000..f5ca670
--- /dev/null
+++ b/notebooks/advanced_aws.ipynb
@@ -0,0 +1,166 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Using Nipype with Amazon Web Services (AWS)\n",
+ "\n",
+ "Several groups have been successfully using Nipype on AWS. This procedure\n",
+ "involves setting up a temporary cluster using StarCluster and potentially\n",
+ "transferring files to/from S3. The latter is supported by Nipype through\n",
+ "`DataSink` and `S3DataGrabber`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using DataSink with S3\n",
+ "\n",
+ "The `DataSink` class now supports sending output data directly to an AWS S3\n",
+ "bucket. It does this through the introduction of several input attributes to the\n",
+ "`DataSink` interface and by parsing the `base_directory` attribute. This class\n",
+ "uses the [boto3](https://boto3.readthedocs.org/en/latest/) and\n",
+ "[botocore](https://botocore.readthedocs.org/en/latest/) Python packages to\n",
+ "interact with AWS. To configure the `DataSink` to write data to S3, the user must\n",
+ "set the ``base_directory`` property to an S3-style filepath.\n",
+ "\n",
+ "For example:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.interfaces.io import DataSink\n",
+ "ds = DataSink()\n",
+ "ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "With the `\"s3://\"` prefix in the path, the `DataSink` knows that the output\n",
+ "directory to which files are sent is on S3, in the bucket `\"mybucket\"`. `\"path/to/output/dir\"`\n",
+ "is the relative directory path within the bucket `\"mybucket\"` where output data\n",
+ "will be uploaded (***Note***: if the relative path specified contains folders that\n",
+ "don’t exist in the bucket, the `DataSink` will create them). The `DataSink` treats\n",
+ "the S3 base directory exactly as it would a local directory, maintaining support\n",
+ "for containers, substitutions, subfolders, `\".\"` notation, etc. to route output\n",
+ "data appropriately.\n",
+ "\n",
+ "There are four new attributes introduced with S3-compatibility: ``creds_path``,\n",
+ "``encrypt_bucket_keys``, ``local_copy``, and ``bucket``."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ds.inputs.creds_path = '/home/neuro/aws_creds/credentials.csv'\n",
+ "ds.inputs.encrypt_bucket_keys = True\n",
+ "ds.inputs.local_copy = '/home/neuro/workflow_outputs/local_backup'"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "``creds_path`` is a file path where the user's AWS credentials file (typically\n",
+ "a csv) is stored. This credentials file should contain the AWS access key id and\n",
+ "secret access key and should be formatted as one of the following (these formats\n",
+ "are how Amazon provides the credentials file by default when first downloaded).\n",
+ "\n",
+ "Root-account user:\n",
+ "\n",
+ "\tAWSAccessKeyID=ABCDEFGHIJKLMNOP\n",
+ "\tAWSSecretKey=zyx123wvu456/ABC890+gHiJk\n",
+ "\n",
+ "IAM-user:\n",
+ "\n",
+ "\tUser Name,Access Key Id,Secret Access Key\n",
+ "\t\"username\",ABCDEFGHIJKLMNOP,zyx123wvu456/ABC890+gHiJk\n",
+ "\n",
+ "The ``creds_path`` is necessary when writing files to a bucket that has\n",
+ "restricted access (almost no buckets are publicly writable). If ``creds_path``\n",
+ "is not specified, the DataSink will check the ``AWS_ACCESS_KEY_ID`` and\n",
+ "``AWS_SECRET_ACCESS_KEY`` environment variables and use those values for bucket\n",
+ "access.\n",
+ "\n",
+ "``encrypt_bucket_keys`` is a boolean flag that indicates whether to encrypt the\n",
+ "output data on S3, using server-side AES-256 encryption. This is useful if the\n",
+ "data being output is sensitive and one desires an extra layer of security on the\n",
+ "data. By default, this is turned off.\n",
+ "\n",
+ "``local_copy`` is a string of the filepath where local copies of the output data\n",
+ "are stored in addition to those sent to S3. This is useful if one wants to keep\n",
+ "a backup version of the data stored on their local computer. By default, this is\n",
+ "turned off.\n",
+ "\n",
+ "``bucket`` is a boto3 Bucket object that the user can use to override the\n",
+ "bucket specified in their ``base_directory``. This can be useful if one has to\n",
+ "manually create a bucket instance on their own using special credentials (or\n",
+ "using a mock server like [fakes3](https://github.com/jubos/fake-s3)). This is\n",
+ "typically used by developers unit-testing the DataSink class. Most users do not\n",
+ "need to use this attribute for actual workflows. This is an optional argument.\n",
+ "\n",
+ "Finally, the user needs only to specify the input attributes for any incoming\n",
+ "data to the node, and the outputs will be written to their S3 bucket."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "workflow.connect(inputnode, 'subject_id', ds, 'container')\n",
+ "workflow.connect(realigner, 'realigned_files', ds, 'motion')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "So, for example, outputs for `sub001`’s `realigned_file1.nii.gz` will be in:\n",
+ "\n",
+ " s3://mybucket/path/to/output/dir/sub001/motion/realigned_file1.nii.gz"
+ ]
+ },
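+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you prefer not to keep a credentials file on disk, the access keys can also be provided through environment variables before the workflow runs, since the `DataSink` falls back to `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` when `creds_path` is not set. A minimal sketch (the key values below are placeholders taken from the format example above):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "# placeholder keys -- replace with your own (or keep using creds_path)\n",
+ "os.environ['AWS_ACCESS_KEY_ID'] = 'ABCDEFGHIJKLMNOP'\n",
+ "os.environ['AWS_SECRET_ACCESS_KEY'] = 'zyx123wvu456/ABC890+gHiJk'"
+ ]
+ },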
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using S3DataGrabber\n",
+ "Coming soon..."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/z_development_interface.ipynb b/notebooks/advanced_command_line_interface.ipynb
similarity index 58%
rename from notebooks/z_development_interface.ipynb
rename to notebooks/advanced_command_line_interface.ipynb
index 52d2eff..1152f56 100644
--- a/notebooks/z_development_interface.ipynb
+++ b/notebooks/advanced_command_line_interface.ipynb
@@ -4,34 +4,34 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "http://nipype.readthedocs.io/en/latest/devel/cmd_interface_devel.html"
+ "# Nipype Command Line Interface\n",
+ "\n",
+ "The Nipype Command Line Interface allows a variety of operations:"
]
},
{
- "cell_type": "markdown",
+ "cell_type": "code",
+ "execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
- "http://nipype.readthedocs.io/en/latest/devel/matlab_interface_devel.html"
+ "%%bash\n",
+ "nipypecli"
]
},
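+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Each sub-command comes with its own `--help`. As a small sketch, the options of the `crash` sub-command (used to inspect Nipype crashfiles) can be listed like this:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%bash\n",
+ "# show the usage and options of the `crash` sub-command\n",
+ "nipypecli crash --help"
+ ]
+ },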
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "http://nipype.readthedocs.io/en/latest/devel/python_interface_devel.html"
+ "**Note**: These have replaced previous nipype command line tools such as `nipype_display_crash`, `nipype_crash_search`, `nipype2boutiques`, `nipype_cmd` and `nipype_display_pklz`."
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
"kernelspec": {
- "display_name": "Python 3",
+ "display_name": "Python [default]",
"language": "python",
"name": "python3"
},
@@ -45,7 +45,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.2"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/advanced_execution_configuration.ipynb b/notebooks/advanced_execution_configuration.ipynb
deleted file mode 100644
index d0a9235..0000000
--- a/notebooks/advanced_execution_configuration.ipynb
+++ /dev/null
@@ -1,147 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Execution Configuration Options\n",
- "\n",
- "Nipype gives you many liberties on how to create workflows, but the execution of them uses a lot of default parameters. But you have of course all the freedom to change them as you like.\n",
- "\n",
- "Nipype looks for the configuration options in the local folder under the name ``nipype.cfg`` and in ``~/.nipype/nipype.cfg`` (in this order). It can be divided into **Logging** and **Execution** options. A few of the possible options are the following:\n",
- "\n",
- "### Logging\n",
- "\n",
- "- **workflow_level**: How detailed the logs regarding workflow should be\n",
- "- **log_to_file**: Indicates whether logging should also send the output to a file\n",
- "\n",
- "### Execution\n",
- "\n",
- "- **stop_on_first_crash**: Should the workflow stop upon first node crashing or try to execute as many nodes as possible?\n",
- "- **remove_unnecessary_outputs**: This will remove any interface outputs not needed by the workflow. If the required outputs from a node changes, rerunning the workflow will rerun the node. Outputs of leaf nodes (nodes whose outputs are not connected to any other nodes) will never be deleted independent of this parameter.\n",
- "- **use_relative_paths**: Should the paths stored in results (and used to look for inputs) be relative or absolute. Relative paths allow moving the whole working directory around but may cause problems with symlinks. \n",
- "- **job_finished_timeout**: When batch jobs are submitted through, SGE/PBS/Condor they could be killed externally. Nipype checks to see if a results file exists to determine if the node has completed. This timeout determines for how long this check is done after a job finish is detected. (float in seconds; default value: 5)\n",
- "- **poll_sleep_duration**: This controls how long the job submission loop will sleep between submitting all pending jobs and checking for job completion. To be nice to cluster schedulers the default is set to 2\n",
- "\n",
- "\n",
- "For the full list, see [Configuration File](http://nipype.readthedocs.io/en/latest/users/config_file.html)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Global, workflow & node level\n",
- "\n",
- "The configuration options can be changed globally (i.e. for all workflows), for just a workflow, or for just a node. The implementations look as follows (note that you should first create directories if you want to change `crashdump_dir` and `log_directory`):"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### At the global level:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype import config, logging\n",
- "import os\n",
- "os.makedirs('/output/log_folder', exist_ok=True)\n",
- "os.makedirs('/output/crash_folder', exist_ok=True)\n",
- "\n",
- "config_dict={'execution': {'remove_unnecessary_outputs': 'true',\n",
- " 'keep_inputs': 'false',\n",
- " 'poll_sleep_duration': '60',\n",
- " 'stop_on_first_rerun': 'false',\n",
- " 'hash_method': 'timestamp',\n",
- " 'local_hash_check': 'true',\n",
- " 'create_report': 'true',\n",
- " 'crashdump_dir': '/output/crash_folder',\n",
- " 'use_relative_paths': 'false',\n",
- " 'job_finished_timeout': '5'},\n",
- " 'logging': {'workflow_level': 'INFO',\n",
- " 'filemanip_level': 'INFO',\n",
- " 'interface_level': 'INFO',\n",
- " 'log_directory': '/output/log_folder',\n",
- " 'log_to_file': 'true'}}\n",
- "config.update_config(config_dict)\n",
- "logging.update_logging(config)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### At the workflow level:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype import Workflow\n",
- "wf = Workflow(name=\"config_test\")\n",
- "\n",
- "# Change execution parameters\n",
- "wf.config['execution']['stop_on_first_crash'] = 'true'\n",
- "\n",
- "# Change logging parameters\n",
- "wf.config['logging'] = {'workflow_level' : 'DEBUG',\n",
- " 'filemanip_level' : 'DEBUG',\n",
- " 'interface_level' : 'DEBUG',\n",
- " 'log_to_file' : 'True',\n",
- " 'log_directory' : '/output/log_folder'}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### At the node level:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype import Node\n",
- "from nipype.interfaces.fsl import BET\n",
- "\n",
- "bet = Node(BET(), name=\"config_test\")\n",
- "\n",
- "bet.config = {'execution': {'keep_unnecessary_outputs': 'false'}}"
- ]
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python [default]",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.5"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/notebooks/advanced_interfaces_caching.ipynb b/notebooks/advanced_interfaces_caching.ipynb
index 156bc26..f76dd58 100644
--- a/notebooks/advanced_interfaces_caching.ipynb
+++ b/notebooks/advanced_interfaces_caching.ipynb
@@ -4,22 +4,39 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Memory caching\n",
+ "# Interface caching\n",
"\n",
- "In [Workflow notebook](basic_worflow.ipynb) you learnt about ``Workflows`` that specify processing by an execution graph and offer efficient recomputing. However, sometimes you might want to use ``Interfaces`` that gives better control of the execution of each step and can be easily combine with any Python code. Unfortunately, ``Interfaces`` do not offer any caching and you always dully recompute your task. \n",
+ "This section details the interface-caching mechanism, exposed in the `nipype.caching` module."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Interface caching: why and how\n",
+ "\n",
+ "* `Pipelines` (also called `workflows`) specify processing by an execution graph. This is useful because it opens the door to dependency checking and makes it possible\n",
+ " - to minimize recomputations, \n",
+ " - to have the execution engine transparently deal with intermediate file manipulations.\n",
+ "\n",
+ " They however do not blend in well with arbitrary Python code, as they must rely on their own execution engine.\n",
+ "\n",
+ "\n",
+ "* `Interfaces` give fine control of the execution of each step with a thin wrapper on the underlying software. As a result, they can easily be inserted in Python code. \n",
+ "\n",
+ " However, they force the user to specify explicit input and output file names and cannot do any caching.\n",
"\n",
- "Solution to this problem can be a ``caching`` mechanism supported by Nipype. Nipype caching relies on the ``Memory`` class and creates an execution context that is bound to a disk cache.\n",
- "When you instantiate the class you should provide ``base_dir`` (that has to be an existing directory) and additional subdirectory called ``nipype_mem`` will be automatically created. "
+ "This is why nipype exposes an intermediate mechanism, `caching`, that provides transparent output file management and caching within imperative Python code rather than a workflow."
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "%%bash\n",
- "mkdir -p /output/workingdir_mem"
+ "## A big picture view: using the [`Memory`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#memory) object\n",
+ "\n",
+ "nipype caching relies on the [`Memory`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#memory) class: it creates an\n",
+ "execution context that is bound to a disk cache:"
]
},
{
@@ -29,14 +46,16 @@
"outputs": [],
"source": [
"from nipype.caching import Memory\n",
- "mem = Memory(base_dir='/output/workingdir_mem')"
+ "mem = Memory(base_dir='.')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "If we want to ask for caching for the ``BET`` interface, we can use ``cache`` method that takes interfaces classes as an argument."
+ "Note that the caching directory is a subdirectory called `nipype_mem` of the given `base_dir`. This is done to avoid polluting the base directory.\n",
+ "\n",
+ "In the corresponding execution context, nipype interfaces can be turned into callables that can be used as functions, using the [`Memory.cache`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.cache) method. For instance, if we want to run the fslmerge command on a set of files:"
]
},
{
@@ -46,32 +65,58 @@
"outputs": [],
"source": [
"from nipype.interfaces import fsl\n",
- "bet_mem = mem.cache(fsl.BET)"
+ "fsl_merge = mem.cache(fsl.Merge)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, ``bet_mem`` can be applied as a function with inputs of the ``BET`` interface as the function arguments. Those inputs are given as keyword arguments, bearing the same name as the name in the inputs specs of the interface."
+ "Note that the [`Memory.cache`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.cache) method takes interface **classes**, and not instances.\n",
+ "\n",
+ "The resulting `fsl_merge` object can be applied as a function to parameters that will form the inputs of the fsl `merge` command. Those inputs are given as keyword arguments, bearing the same name as the name in the input specs of the interface. In IPython, you can also get the argument list by using the `fsl_merge?` syntax to inspect the docs:"
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "bet_mem(in_file=\"/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz\",\n",
- " out_file=\"/output/sub-01_T1w_brain.nii.gz\",\n",
- " mask=True)"
+ "```python\n",
+ "In [3]: fsl_merge?\n",
+ "String Form:PipeFunc(nipype.interfaces.fsl.utils.Merge,\n",
+ " base_dir=/home/varoquau/dev/nipype/nipype/caching/nipype_mem)\n",
+ "Namespace: Interactive\n",
+ "File: /home/varoquau/dev/nipype/nipype/caching/memory.py\n",
+ "Definition: fsl_merge(self, **kwargs)\n",
+ "Docstring: Use fslmerge to concatenate images\n",
+ "\n",
+ "Inputs\n",
+ "------\n",
+ "\n",
+ "Mandatory:\n",
+ "dimension: dimension along which the file will be merged\n",
+ "in_files: None\n",
+ "\n",
+ "Optional:\n",
+ "args: Additional parameters to the command\n",
+ "environ: Environment variables (default={})\n",
+ "ignore_exception: Print an error message instead of throwing an exception in case the interface fails to run (default=False)\n",
+ "merged_file: None\n",
+ "output_type: FSL output type\n",
+ "\n",
+ "Outputs\n",
+ "-------\n",
+ "merged_file: None\n",
+ "Class Docstring:\n",
+ "...\n",
+ "```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "As you can seen ``bet`` command was run as expected. We can now check the content of caching file:"
+ "Thus `fsl_merge` is applied to parameters as such:"
]
},
{
@@ -80,14 +125,16 @@
"metadata": {},
"outputs": [],
"source": [
- "! ls -lh /output/workingdir_mem/nipype_mem"
+ "filepath = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'\n",
+ "\n",
+ "results = fsl_merge(dimension='t', in_files=[filepath, filepath])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "A special subdirectory for our interface has been created. Let's try to run this command again:"
+ "The results are standard nipype node results. In particular, they expose an `outputs` attribute that carries all the outputs of the process, as specified by the docs."
]
},
{
@@ -96,18 +143,14 @@
"metadata": {},
"outputs": [],
"source": [
- "bet_mem(in_file=\"/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz\",\n",
- " out_file=\"/output/sub-01_T1w_brain.nii.gz\",\n",
- " mask=True)"
+ "results.outputs.merged_file"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, the ``bet`` command was not run, but precomputed outputs were collected!\n",
- "\n",
- "If you created cached results that you're not going reuse, you can use [Memory.clear_runs_since()](http://nipy.org/nipype/0.10.0/users/caching_tutorial.html#nipype.caching.Memory.clear_runs_since) to flush the cache. Note, that if you use the method without any argument it will remove results used before current date, so will keep the results we've just calculated, let's check:"
+ "Finally, and most importantly, if the node is applied to the same input parameters, it is not recomputed and the results are reloaded from disk:"
]
},
{
@@ -116,33 +159,52 @@
"metadata": {},
"outputs": [],
"source": [
- "mem.clear_runs_since()\n",
- "bet_mem(in_file=\"/data/ds000114/sub-01/ses-test/anat/sub-01_ses-test_T1w.nii.gz\",\n",
- " out_file=\"/output/sub-01_T1w_brain.nii.gz\",\n",
- " mask=True)"
+ "results = fsl_merge(dimension='t', in_files=[filepath, filepath])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "As you can see, Nipype again collected the old results. If we want to remove everything, we have to put some future date:"
+ "Once the [`Memory`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#memory) is set up and you are applying it to data, an important thing to keep in mind is that you are using up disk cache. It might be useful to clean it using the methods that [`Memory`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#memory) provides for this: [`Memory.clear_previous_runs`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.clear_previous_runs), [`Memory.clear_runs_since`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.clear_runs_since)."
]
},
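+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of such a cleanup (the date is arbitrary; cached results created before it would be flushed):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# flush cached results from before the given (illustrative) date\n",
+ "mem.clear_runs_since(year=2020, month=1, day=1)"
+ ]
+ },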
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Example\n",
+ "\n",
+ "A full-blown example showing how to stage multiple operations can be found in the [`caching_example.py`](http://nipype.readthedocs.io/en/latest/_downloads/howto_caching_example.py) file."
+ ]
+ },
+ {
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "mem.clear_runs_since(year=2020, month=1, day=1)"
+ "## Usage patterns: working efficiently with caching\n",
+ "\n",
+ "The goal of the `caching` module is to enable writing plain Python code rather than workflows. To do so: instead of data grabber nodes, use for instance the `glob` module; to vary parameters, use `for` loops; to make code reusable, write Python functions.\n",
+ "\n",
+ "One good rule of thumb is to avoid explicit filenames apart from the outermost inputs and outputs of your processing. The reason is that the caching mechanism of `nipype.caching` takes care of generating unique hashes, ensuring that, when you vary parameters, files are not overwritten by the output of different computations.\n",
+ "\n",
+ "**Debugging**: \n",
+ "If you need to inspect the running environment of the nodes, it may be useful to know where they were executed. With `nipype.caching`, you do not control this location as it is encoded by hashes. \n",
+ "To find out where an operation has been persisted, simply look in its output variable: \n",
+ "```out.runtime.cwd```\n",
+ "\n",
+ "Finally, the more you explore different parameters, the more you risk creating cached results that will never be reused. Keep in mind that it may be useful to flush the cache using [`Memory.clear_previous_runs`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.clear_previous_runs) or [`Memory.clear_runs_since`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html#nipype.caching.memory.Memory.clear_runs_since)."
]
},
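+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For the cached `fsl_merge` call above, looking up the execution directory mentioned in the debugging note would look like this sketch (it assumes the `results` object from the earlier cells is still in scope):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# working directory in which the cached fsl_merge run was executed\n",
+ "results.runtime.cwd"
+ ]
+ },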
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "You can also check [Memory.clear_runs_since()](http://nipy.org/nipype/0.10.0/users/caching_tutorial.html#nipype.caching.Memory.clear_runs_since)."
+ "## API reference\n",
+ "\n",
+ "For more info about the API, go to [`caching.memory`](http://nipype.readthedocs.io/en/latest/api/generated/nipype.caching.memory.html)."
]
}
],
diff --git a/notebooks/advanced_mipav.ipynb b/notebooks/advanced_mipav.ipynb
new file mode 100644
index 0000000..88c9ee4
--- /dev/null
+++ b/notebooks/advanced_mipav.ipynb
@@ -0,0 +1,54 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Using MIPAV, JIST, and CBS Tools\n",
+ "\n",
+ "If you are trying to use the MIPAV, JIST or CBS Tools interfaces, you need to configure the CLASSPATH environment variable correctly. It needs to include the extensions shipped with MIPAV, MIPAV itself, and the MIPAV plugins.\n",
+ "\n",
+ "For example, the CLASSPATH setup could look like the following (adjust the paths to your own installation):"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```\n",
+ "# location of additional JAVA libraries to use\n",
+ "JAVALIB=/Applications/mipav/jre/Contents/Home/lib/ext/\n",
+ "\n",
+ "# location of the MIPAV installation to use\n",
+ "MIPAV=/Applications/mipav\n",
+ "# location of the plugin installation to use\n",
+ "# please replace 'ThisUser' by your user name\n",
+ "PLUGINS=/Users/ThisUser/mipav/plugins\n",
+ "\n",
+ "export CLASSPATH=$JAVALIB/*:$MIPAV:$MIPAV/lib/*:$PLUGINS\n",
+ "```"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/advanced_sphinx_ext.ipynb b/notebooks/advanced_sphinx_ext.ipynb
new file mode 100644
index 0000000..84ea2b8
--- /dev/null
+++ b/notebooks/advanced_sphinx_ext.ipynb
@@ -0,0 +1,148 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Sphinx extensions\n",
+ "\n",
+ "To help users document their **Nipype**-based code, the software is shipped\n",
+ "with a set of extensions (currently only one) to customize the appearance\n",
+ "and simplify the generation process."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# `nipype.sphinxext.plot_workflow` - Workflow plotting extension\n",
+ "\n",
+ "A directive for including a nipype workflow graph in a Sphinx document.\n",
+ "\n",
+ "This code is forked from the plot_figure sphinx extension of matplotlib.\n",
+ "\n",
+ "By default, in HTML output, `workflow` will include a .png file with a link to a high-res .png. In LaTeX output, it will include a .pdf. The source code for the workflow may be included as **inline content** to the directive `workflow`:\n",
+ "\n",
+ " .. workflow ::\n",
+ " :graph2use: flat\n",
+ " :simple_form: no\n",
+ "\n",
+ " from nipype.workflows.dmri.camino.connectivity_mapping import create_connectivity_pipeline\n",
+ " wf = create_connectivity_pipeline()\n",
+ " \n",
+ "For example, the following graph has been generated by inserting the previous code block in this documentation:\n",
+ "\n",
+ "*(workflow graph image not included here)*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Options\n",
+ "\n",
+ "The ``workflow`` directive supports the following options:\n",
+ "\n",
+ "- `graph2use`: {`'hierarchical'`, `'colored'`, `'flat'`, `'orig'`, `'exec'`} \n",
+ " Specify the type of graph to be generated.\n",
+ "\n",
+ "\n",
+ "- `simple_form`: `bool` \n",
+ " Whether the graph will be in detailed or simple form.\n",
+ "\n",
+ "\n",
+ "- `format`: {`'python'`, `'doctest'`} \n",
+ " Specify the format of the input\n",
+ "\n",
+ "\n",
+ "- `include-source`: `bool` \n",
+ " Whether to display the source code. The default can be changed using the `workflow_include_source` variable in conf.py\n",
+ "\n",
+ "\n",
+ "- `encoding`: `str` \n",
+ " If this source file is in a non-UTF8 or non-ASCII encoding, the encoding must be specified using the `:encoding:` option. The encoding will not be inferred using the ``-*- coding -*-`` metacomment.\n",
+ "\n",
+ "Additionally, this directive supports all of the options of the `image` directive, except for `target` (since workflow will add its own target). These include `alt`, `height`, `width`, `scale`, `align` and `class`."
+ ]
+ },
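+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the directive available at all, the extension has to be enabled in the Sphinx project's `conf.py`. A minimal sketch (the module path comes from the heading above; everything else a real `conf.py` needs is left out):\n",
+ "\n",
+ "```python\n",
+ "# conf.py (sketch)\n",
+ "extensions = [\n",
+ " 'nipype.sphinxext.plot_workflow',\n",
+ "]\n",
+ "```"
+ ]
+ },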
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Configuration options\n",
+ "\n",
+ "The workflow directive has the following configuration options:\n",
+ "\n",
+ "- `graph2use` \n",
+ " Select a graph type to use\n",
+ "\n",
+ "\n",
+ "- `simple_form` \n",
+ " Determines whether the node name shown in the visualization is of the form nodename (package) when set to True, or nodename.Class.package when set to False.\n",
+ "\n",
+ "\n",
+ "- `wf_include_source` \n",
+ " Default value for the include-source option\n",
+ "\n",
+ "\n",
+ "- `wf_html_show_source_link` \n",
+ " Whether to show a link to the source in HTML.\n",
+ "\n",
+ "\n",
+ "- `wf_pre_code` \n",
+ " Code that should be executed before each workflow.\n",
+ "\n",
+ "\n",
+ "- `wf_basedir` \n",
+ " Base directory to which ``workflow::`` file names are relative. (If None or empty, file names are relative to the directory where the file containing the directive is.)\n",
+ "\n",
+ "\n",
+ "- `wf_formats` \n",
+ " File formats to generate. List of tuples or strings: \n",
+ " [(suffix, dpi), suffix, ...] \n",
+ " that determine the file format and the DPI. For entries whose DPI was omitted, sensible defaults are chosen. When passing from the command line through sphinx_build the list should be passed as suffix:dpi,suffix:dpi, ....\n",
+ "\n",
+ "\n",
+ "- `wf_html_show_formats` \n",
+ " Whether to show links to the files in HTML.\n",
+ "\n",
+ "\n",
+ "- `wf_rcparams` \n",
+ " A dictionary containing any non-standard rcParams that should be applied before each workflow.\n",
+ "\n",
+ "\n",
+ "- `wf_apply_rcparams` \n",
+ " By default, rcParams are applied when `context` option is not used in a workflow directive. This configuration option overrides this behavior and applies rcParams before each workflow.\n",
+ "\n",
+ "\n",
+ "- `wf_working_directory` \n",
+ " By default, the working directory will be changed to the directory of the example, so the code can get at its data files, if any. Also its path will be added to `sys.path` so it can import any helper modules sitting beside it. This configuration option can be used to specify a central directory (also added to `sys.path`) where data files and helper modules for all code are located.\n",
+ "\n",
+ "\n",
+ "- `wf_template` \n",
+ " Provide a customized template for preparing restructured text."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/advanced_spmmcr.ipynb b/notebooks/advanced_spmmcr.ipynb
new file mode 100644
index 0000000..67685dd
--- /dev/null
+++ b/notebooks/advanced_spmmcr.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Using SPM with MATLAB Common Runtime (MCR)\n",
+ "\n",
+ "In order to use the standalone MCR version of spm, you need to ensure that the following commands are executed at the beginning of your script:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.interfaces import spm\n",
+ "matlab_cmd = '/opt/spm12/run_spm12.sh /opt/mcr/v92/ script'\n",
+ "spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can test it by calling:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "spm.SPMCommand().version"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you want to enforce the standalone MCR version of spm for nipype globally, you can do so by setting the following environment variables:\n",
+ "\n",
+ "- *`SPMMCRCMD`* \n",
+ " Specifies the command to use to run the spm standalone MCR version. You may still override the command as described above.\n",
+ "\n",
+ "\n",
+ "- *`FORCE_SPMMCR`* \n",
+ " Set this to any value in order to enforce the use of spm standalone MCR version in nipype globally. Technically, this sets the `use_mcr` flag of the spm interface to True.\n",
+ "\n",
+ "Information about the MCR version of SPM8 can be found at: http://en.wikibooks.org/wiki/SPM/Standalone"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/basic_data_input.ipynb b/notebooks/basic_data_input.ipynb
index ef75103..70c5239 100644
--- a/notebooks/basic_data_input.ipynb
+++ b/notebooks/basic_data_input.ipynb
@@ -86,6 +86,117 @@
"source": [
"# DataGrabber\n",
"\n",
+ "`DataGrabber` is an interface for collecting files from a hard drive. It is very flexible and supports almost any file organization of your data that you can imagine.\n",
+ "\n",
+ "In the most trivial use case, you can use it to grab a single fixed file. By default, `DataGrabber` stores its outputs in a field called `outfiles`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import nipype.interfaces.io as nio\n",
+ "datasource1 = nio.DataGrabber()\n",
+ "datasource1.inputs.base_directory = '/data/ds000114'\n",
+ "datasource1.inputs.template = 'sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'\n",
+ "datasource1.inputs.sort_filelist = True\n",
+ "results = datasource1.run()\n",
+ "results.outputs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Or you can grab all NIfTI files containing the word `'fingerfootlips'` in all directories starting with the letter `'s'`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import nipype.interfaces.io as nio\n",
+ "datasource2 = nio.DataGrabber()\n",
+ "datasource2.inputs.base_directory = '/data/ds000114'\n",
+ "datasource2.inputs.template = 's*/ses-test/func/*fingerfootlips*.nii.gz'\n",
+ "datasource2.inputs.sort_filelist = True\n",
+ "results = datasource2.run()\n",
+ "results.outputs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Two special inputs were used in these previous cases. The input `base_directory`\n",
+ "indicates in which directory to search, while the input `template` indicates the\n",
+ "string template to match. So in the previous case `DataGrabber` is looking for\n",
+ "path matches of the form `/data/ds000114/s*/ses-test/func/*fingerfootlips*.nii.gz`.\n",
+ "\n",
+ "**Note**: When used with wildcards (e.g., `s*` and `*fingerfootlips*` above) `DataGrabber` does not return data in sorted order. In order to force it to return data in sorted order, one needs to set the input `sort_filelist = True`. However, when explicitly specifying an order as we will see below, `sort_filelist` should be set to `False`.\n",
+ "\n",
+ "More useful cases arise when the template can be filled by other inputs. In the\n",
+ "example below, we define an input field for `DataGrabber` called `subject_id`. This is\n",
+ "then used to set the template (see `%02d` in the template)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "datasource3 = nio.DataGrabber(infields=['subject_id'])\n",
+ "datasource3.inputs.base_directory = '/data/ds000114'\n",
+ "datasource3.inputs.template = 'sub-%02d/ses-test/func/*fingerfootlips*.nii.gz'\n",
+ "datasource3.inputs.sort_filelist = True\n",
+ "datasource3.inputs.subject_id = [1, 7]\n",
+ "results = datasource3.run()\n",
+ "results.outputs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This will return the functional images from subject 1 and 7 for the task `fingerfootlips`. We can take this a step further and pair subjects with task."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "datasource4 = nio.DataGrabber(infields=['subject_id', 'run'])\n",
+ "datasource4.inputs.base_directory = '/data/ds000114'\n",
+ "datasource4.inputs.template = 'sub-%02d/ses-test/func/*%s*.nii.gz'\n",
+ "datasource4.inputs.sort_filelist = True\n",
+ "datasource4.inputs.run = ['fingerfootlips', 'linebisection']\n",
+ "datasource4.inputs.subject_id = [1, 7]\n",
+ "results = datasource4.run()\n",
+ "results.outputs"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This will return the functional image of subject 1 for the `'fingerfootlips'` task and the functional image of subject 7 for the `'linebisection'` task."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## A more realistic use-case\n",
+ "\n",
"``DataGrabber`` is a generic data grabber module that wraps around ``glob`` to select your neuroimaging data in an intelligent way. As an example, let's assume we want to grab the anatomical and functional images of a certain subject.\n",
"\n",
"First, we need to create the ``DataGrabber`` node. This node needs to have some input fields for all dynamic parameters (e.g. subject identifier, task identifier), as well as the two desired output fields ``anat`` and ``func``."
@@ -169,6 +280,13 @@
"You'll notice that we use ``%s``, ``%02d`` and ``*`` for placeholders in the data paths. ``%s`` is a placeholder for a string and is filled out by ``task_name`` or ``ses_name``. ``%02d`` is a placeholder for a integer number and is filled out by ``subject_id``. ``*`` is used as a wild card, e.g. a placeholder for any possible string combination. This is all to set up the ``DataGrabber`` node."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Above, two more fields are introduced: `field_template` and `template_args`. These fields are both dictionaries whose keys correspond to the `outfields` keyword. The `field_template` reflects the search path for each output field, while the `template_args` reflect the inputs that satisfy the template. The inputs can either be one of the named inputs specified by the `infields` keyword arg, or they can be raw strings or integers corresponding to the template. For the `func` output, the **%s** in the `field_template` is satisfied by `subject_id` and the **%d** is filled in by the list of numbers."
+ ]
+ },
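+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the mechanics concrete, here is a small self-contained sketch (separate from the node defined above; the paths follow the dataset layout used throughout this notebook):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "datasource5 = nio.DataGrabber(infields=['subject_id'], outfields=['func'])\n",
+ "datasource5.inputs.base_directory = '/data/ds000114'\n",
+ "datasource5.inputs.sort_filelist = True\n",
+ "datasource5.inputs.template = '*'\n",
+ "# the search path for the 'func' output field\n",
+ "datasource5.inputs.field_template = {'func': 'sub-%02d/ses-test/func/*%s*.nii.gz'}\n",
+ "# 'subject_id' fills the %02d; the raw string fills the %s\n",
+ "datasource5.inputs.template_args = {'func': [['subject_id', 'fingerfootlips']]}\n",
+ "datasource5.inputs.subject_id = 1\n",
+ "results = datasource5.run()\n",
+ "results.outputs"
+ ]
+ },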
{
"cell_type": "markdown",
"metadata": {},
@@ -306,7 +424,39 @@
"source": [
"# SelectFiles\n",
"\n",
- "`SelectFiles` is a more flexible alternative to `DataGrabber`. It uses the {}-based string formating syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become inputs fields on the interface, and the keys in the templates dictionary will form the output fields.\n",
+ "`SelectFiles` is a more flexible alternative to `DataGrabber`. It is built on Python [format strings](http://docs.python.org/2/library/string.html#format-string-syntax), which are similar to the Python string interpolation feature you are likely already familiar with, but advantageous in several respects. Format strings allow you to replace named sections of template strings set off by curly braces (`{}`), possibly filtered through a set of functions that control how the values are rendered into the string. As a very basic example, we could write"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "msg = \"This workflow uses {package}.\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "and then format it with keyword arguments:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(msg.format(package=\"FSL\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`SelectFiles` uses the {}-based string formatting syntax to plug values into string templates and collect the data. These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.\n",
"\n",
"Let's focus again on the data we want to import:\n",
"\n",
@@ -413,6 +563,26 @@
" 'sub-0[1,2]/ses-test/anat/sub-0[1,2]_ses-test_T1w.nii.gz'"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### `force_lists`\n",
+ "\n",
+ "There's an additional parameter, `force_lists`, which controls how `SelectFiles` behaves in cases where only a single file matches the template. The default behavior is that when a template matches multiple files they are returned as a list, while a single file is returned as a string. There may be situations where you want to force the outputs to always be returned as a list (for example, you are writing a workflow that expects to operate on several runs of data, but some of your subjects only have a single run). In this case, `force_lists` can be used to tune the outputs of the interface. You can either use a boolean value, which will be applied to every output the interface has, or you can provide a list of the output fields that should be coerced to a list.\n",
+ "\n",
+ "Returning to our previous example, you may want to ensure that the `anat` files are returned as a list, but you only ever will have a single `T1` file. In this case, you would do"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sf = SelectFiles(templates, force_lists=[\"anat\"])"
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
@@ -620,7 +790,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_data_input_bids.ipynb b/notebooks/basic_data_input_bids.ipynb
index 9f0daaa..e86d999 100644
--- a/notebooks/basic_data_input_bids.ipynb
+++ b/notebooks/basic_data_input_bids.ipynb
@@ -516,7 +516,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_data_output.ipynb b/notebooks/basic_data_output.ipynb
index 40e021f..ff86ab9 100644
--- a/notebooks/basic_data_output.ipynb
+++ b/notebooks/basic_data_output.ipynb
@@ -21,7 +21,145 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Preparation\n",
+ "# DataSink\n",
+ "\n",
+ "A workflow's working directory is like a **cache**: it contains not only the outputs of the various processing stages, but also extraneous information such as execution reports and hashfiles that determine the input state of processes. All of this is embedded in a hierarchical structure that reflects the iterables that have been used in the workflow. This makes navigating the working directory a not so pleasant experience, and typically the user is interested in preserving only a small percentage of these outputs. The [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) interface can be used to extract components from this `cache` and store them at a different location. For XNAT-based storage, see [XNATSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#nipype-interfaces-io-xnatsink).\n",
+ "\n",
+ "Unlike other interfaces, a [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink)'s inputs are defined and created by using the workflow connect statement. Currently disconnecting an input from the [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) does not remove that connection port.\n",
+ "\n",
+ "Let's assume we have the following workflow.\n",
+ "\n",
+ "\n",
+ "\n",
+ "The following code segment defines the [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) node and sets the `base_directory` in which all outputs will be stored. The `container` input creates a subdirectory within the `base_directory`. If you are iterating a workflow over subjects, it may be useful to save the outputs in a folder named after the subject id.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "datasink = pe.Node(nio.DataSink(), name='sinker')\n",
+ "datasink.inputs.base_directory = '/path/to/output'\n",
+ "workflow.connect(inputnode, 'subject_id', datasink, 'container')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If we wanted to save the realigned files and the realignment parameters to the same place, the most intuitive option would be:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "workflow.connect(realigner, 'realigned_files', datasink, 'motion')\n",
+ "workflow.connect(realigner, 'realignment_parameters', datasink, 'motion')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "However, this will not work as only one connection is allowed per input port. So we need to create a second port. We can store the files in a separate folder."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "workflow.connect(realigner, 'realigned_files', datasink, 'motion')\n",
+ "workflow.connect(realigner, 'realignment_parameters', datasink, 'motion.par')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The period (.) indicates that a subfolder called par should be created. But if we wanted to store it in the same folder as the realigned files, we would use the `.@` syntax. The @ tells the [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) interface to not create the subfolder. This will allow us to create different named input ports for [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) and allow the user to store the files in the same folder."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "workflow.connect(realigner, 'realigned_files', datasink, 'motion')\n",
+ "workflow.connect(realigner, 'realignment_parameters', datasink, 'motion.@par')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The syntax for the input port of [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) takes the following form:\n",
+ "\n",
+ " string[[.[@]]string[[.[@]]string] ...]\n",
+ " where parts between paired [] are optional."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## MapNode\n",
+ "\n",
+ "In order to use [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) inside a MapNode, its inputs have to be defined inside the constructor using the `infields` keyword arg."
+ ]
+ },
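+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of what that could look like (the node name, port name and iterfield are purely illustrative):\n",
+ "\n",
+ "```python\n",
+ "from nipype import MapNode\n",
+ "from nipype.interfaces.io import DataSink\n",
+ "\n",
+ "# the 'motion' input port is declared up front via `infields`\n",
+ "sinker = MapNode(DataSink(infields=['motion']),\n",
+ " name='sinker', iterfield=['motion'])\n",
+ "sinker.inputs.base_directory = '/path/to/output'\n",
+ "```"
+ ]
+ },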
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Parameterization\n",
+ "\n",
+ "As discussed in [Iterables](basic_iteration.ipynb), one can run a workflow iterating over various inputs using the iterables attribute of nodes. This means that a given workflow can have multiple outputs depending on how many iterables there are. Iterables create working directory subfolders such as `_iterable_name_value`. The `parameterization` input parameter controls whether the data stored using [DataSink](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html#datasink) is in a folder structure that contains this iterable information or not. It is generally recommended to set this to `True` when using multiple nested iterables."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Substitutions\n",
+ "\n",
+ "The ``substitutions`` and ``regexp_substitutions`` inputs allow users to modify the output destination path and name of a file. Substitutions are a list of 2-tuples and are carried out in the order in which they were entered. Assuming that the output path of a file is:\n",
+ "\n",
+ " /root/container/_variable_1/file_subject_realigned.nii\n",
+ "\n",
+ "we can use substitutions to clean up the output path.\n",
+ "\n",
+ "```python\n",
+ "datasink.inputs.substitutions = [('_variable', 'variable'),\n",
+ " ('file_subject_', '')]\n",
+ "```\n",
+ "\n",
+ "This will rewrite the file as:\n",
+ "\n",
+ " /root/container/variable_1/realigned.nii\n",
+ "\n",
+ "\n",
+ "**Note**: In order to figure out which substitutions are needed it is often useful to run the workflow on a limited set of iterables and then determine the substitutions."
+ ]
+ },
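+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The ``regexp_substitutions`` input works the same way but with regular expression patterns, and is applied after the plain substitutions. A hypothetical sketch (the pattern is illustrative and would strip the iterable subfolder from the path above):\n",
+ "\n",
+ "```python\n",
+ "datasink.inputs.regexp_substitutions = [('_variable_[0-9]+/', '')]\n",
+ "```"
+ ]
+ },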
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Realistic Example\n",
+ "\n",
+ "## Preparation\n",
"\n",
"Before we can use `DataSink` we first need to run a workflow. For this purpose, let's create a very short preprocessing workflow that realigns and smooths one functional image of one subject."
]
@@ -143,7 +281,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# DataSink\n",
+ "## How to use `DataSink`\n",
"\n",
"`DataSink` is Nipype's standard output module to restructure your output files. It allows you to relocate and rename files that you deem relevant.\n",
"\n",
@@ -403,7 +541,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_debug.ipynb b/notebooks/basic_debug.ipynb
new file mode 100644
index 0000000..853f21c
--- /dev/null
+++ b/notebooks/basic_debug.ipynb
@@ -0,0 +1,99 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Debugging Nipype Workflows\n",
+ "\n",
+ "Throughout [Nipype](http://nipy.org/nipype/) we try to provide meaningful error messages. If you run into an error that does not have a meaningful error message please let us know so that we can improve error reporting.\n",
+ "\n",
+ "Here are some notes that may help debugging workflows or understanding performance issues.\n",
+ "\n",
+ "1. Always run your workflow first on a single iterable (e.g. subject) and\n",
+ " gradually increase the execution distribution complexity (Linear->MultiProc-> \n",
+ " SGE).\n",
+ "\n",
+ "- Use the debug config mode. This can be done by setting:\n",
+ "\n",
+ " ```python\n",
+ " from nipype import config\n",
+ " config.enable_debug_mode()\n",
+ " ```\n",
+ "\n",
+ " as the first import of your nipype script. To enable debug logging use:\n",
+ "\n",
+ " ```python\n",
+ " from nipype import logging\n",
+ " logging.update_logging(config)\n",
+ " ```\n",
+ " \n",
+ " **Note:** Turning on debug will rerun your workflows and will rerun them after debugging is turned off.\n",
+ "\n",
+ "- There are several configuration options that can help with debugging.\n",
+ " See [Configuration File](config_file.ipynb) for more details:\n",
+ "\n",
+ " keep_inputs\n",
+ " remove_unnecessary_outputs\n",
+ " stop_on_first_crash\n",
+ " stop_on_first_rerun\n",
+ "\n",
+ "- When running in distributed mode on cluster engines, it is possible for a\n",
+ " node to fail without generating a crash file in the crashdump directory. In\n",
+ " such cases, it will store a crash file in the `batch` directory.\n",
+ "\n",
+ "- All Nipype crashfiles can be inspected with the `nipypecli crash`\n",
+ " utility.\n",
+ "\n",
+ "- The `nipypecli search` command allows you to search for regular expressions\n",
+ " in the tracebacks of the Nipype crashfiles within a log folder.\n",
+ "\n",
+ "- Nipype determines the hash of the input state of a node. If any input\n",
+ " contains strings that represent files on the system path, the hash evaluation\n",
+ " mechanism will determine the timestamp or content hash of each of those\n",
+ " files. Thus any node with an input containing huge dictionaries (or lists) of\n",
+ " file names can cause serious performance penalties.\n",
+ "\n",
+ "- For HUGE data processing, `stop_on_first_crash: False` is needed to get the\n",
+ " bulk of processing done, and then `stop_on_first_crash: True` is needed for\n",
+ " debugging and finding failing cases. Setting `stop_on_first_crash: False`\n",
+ " is a reasonable option when you would expect 90% of the data to execute\n",
+ " properly.\n",
+ "\n",
+ "- Sometimes nipype will hang as if nothing is going on and if you hit `Ctrl+C`\n",
+ " you will get a `ConcurrentLogHandler` error. Simply remove the pypeline.lock\n",
+ " file in your home directory and continue.\n",
+ "\n",
+ "- On many clusters with shared NFS mounts, synchronization of files across\n",
+ " cluster nodes may not happen before the typical NFS cache timeouts. When using\n",
+ " PBS/LSF/SGE/Condor plugins in such cases the workflow may crash because it\n",
+ " cannot retrieve the node result. Setting the `job_finished_timeout` can help:\n",
+ "\n",
+ " ```python\n",
+ " workflow.config['execution']['job_finished_timeout'] = 65\n",
+ " ```"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/basic_error_and_crashes.ipynb b/notebooks/basic_error_and_crashes.ipynb
index d84ddc0..8e40138 100644
--- a/notebooks/basic_error_and_crashes.ipynb
+++ b/notebooks/basic_error_and_crashes.ipynb
@@ -157,7 +157,7 @@
"source": [
"When running in terminal you can also try options that **enable the Python or Ipython debugger when re-executing: `-d` or `-i`**.\n",
"\n",
- "**If you don't want to have an option to rerun the crashed workflow, you can change the format of crashfile to a text format.** You can either change this in a configuration file (you can read more [here](http://nipype.readthedocs.io/en/0.13.1/users/config_file.html#config-file)), or you can directly change the `wf.config` dictionary before running the workflow."
+ "**If you don't want to have an option to rerun the crashed workflow, you can change the format of crashfile to a text format.** You can either change this in a configuration file (you can read more [here](basic_execution_configuration.ipynb)), or you can directly change the `wf.config` dictionary before running the workflow."
]
},
{
@@ -279,7 +279,7 @@
"outputs": [],
"source": [
"from nipype.algorithms.misc import Gunzip\n",
- "from nipype.pipeline.engine import Node\n",
+ "from nipype import Node\n",
"\n",
"files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',\n",
" '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz']\n",
@@ -310,7 +310,7 @@
"metadata": {},
"outputs": [],
"source": [
- "from nipype.pipeline.engine import MapNode\n",
+ "from nipype import MapNode\n",
"gunzip = MapNode(Gunzip(), name='gunzip', iterfield=['in_file'])\n",
"gunzip.inputs.in_file = files"
]
@@ -652,7 +652,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_execution_configuration.ipynb b/notebooks/basic_execution_configuration.ipynb
new file mode 100644
index 0000000..6fdec6f
--- /dev/null
+++ b/notebooks/basic_execution_configuration.ipynb
@@ -0,0 +1,420 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Execution Configuration Options\n",
+ "\n",
+ "Nipype gives you many liberties on how to create workflows, but the execution of them uses a lot of default parameters. But you have of course all the freedom to change them as you like.\n",
+ "\n",
+ "Nipype looks for the configuration options in the local folder under the name ``nipype.cfg`` and in ``~/.nipype/nipype.cfg`` (in this order). It can be divided into **Logging** and **Execution** options. A few of the possible options are the following:\n",
+ "\n",
+ "### Logging\n",
+ "\n",
+ "- **`workflow_level`**: How detailed the logs regarding workflow should be \n",
+ " (possible values: ``INFO`` and ``DEBUG``; default value: ``INFO``)\n",
+ "\n",
+ "\n",
+ "- **`utils_level`**: How detailed the logs regarding nipype utils, like file operations (for example overwriting warning) or the resource profiler, should be \n",
+ " (possible values: ``INFO`` and ``DEBUG``; default value: ``INFO``)\n",
+ "\n",
+ "\n",
+ "- **`interface_level`**: How detailed the logs regarding interface execution should be \n",
+ " (possible values: ``INFO`` and ``DEBUG``; default value: ``INFO``)\n",
+ "\n",
+ "\n",
+ "- **`filemanip_level`** (deprecated as of 1.0): How detailed the logs regarding file operations (for example overwriting warning) should be \n",
+ " (possible values: ``INFO`` and ``DEBUG``)\n",
+ "\n",
+ "\n",
+ "- **`log_to_file`**: Indicates whether logging should also send the output to a file \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`log_directory`**: Where to store logs. \n",
+ " (string, default value: home directory)\n",
+ "\n",
+ "\n",
+ "- **`log_size`**: Size of a single log file. \n",
+ " (integer, default value: 254000)\n",
+ "\n",
+ "\n",
+ "- **`log_rotate`**: How many rotation should the log file make. \n",
+ " (integer, default value: 4)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Execution\n",
+ "\n",
+ "- **`plugin`**: This defines which execution plugin to use. \n",
+ " (possible values: ``Linear``, ``MultiProc``, ``SGE``, ``IPython``; default value: ``Linear``)\n",
+ "\n",
+ "\n",
+ "- **`stop_on_first_crash`**: Should the workflow stop upon first node crashing or try to execute as many\n",
+ " nodes as possible? \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`stop_on_first_rerun`**: Should the workflow stop upon first node trying to recompute (by that we mean rerunning a node that has been run before - this can happen due changed inputs and/or hash_method since the last run). \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`hash_method`**: Should the input files be checked for changes using their content (slow, but 100% accurate) or just their size and modification date (fast, but potentially prone to errors)? \n",
+ " (possible values: ``content`` and ``timestamp``; default value: ``timestamp``)\n",
+ "\n",
+ "\n",
+ "- **`keep_inputs`**: Ensures that all inputs that are created in the nodes working directory are\n",
+ " kept after node execution \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`single_thread_matlab`**: Should all of the Matlab interfaces (including SPM) use only one thread? This is useful if you are parallelizing your workflow using MultiProc or IPython on a single multicore machine. \n",
+ " (possible values: ``true`` and ``false``; default value: ``true``)\n",
+ "\n",
+ "\n",
+ "- **`display_variable`**: Override the ``$DISPLAY`` environment variable for interfaces that require an X server. This option is useful if there is a running X server, but ``$DISPLAY`` was not defined in nipype's environment. For example, if an X server is listening on the default port of 6000, set ``display_variable = :0`` to enable nipype interfaces to use it. It may also point to displays provided by VNC, [xnest](http://www.x.org/archive/X11R7.5/doc/man/man1/Xnest.1.html) or [Xvfb](http://www.x.org/archive/X11R6.8.1/doc/Xvfb.1.html). \n",
+ " If neither ``display_variable`` nor the ``$DISPLAY`` environment variable are set, nipype will try to configure a new virtual server using Xvfb. \n",
+ " (possible values: any X server address; default value: not set)\n",
+ "\n",
+ "\n",
+ "- **`remove_unnecessary_outputs`**: This will remove any interface outputs not needed by the workflow. If the\n",
+ " required outputs from a node changes, rerunning the workflow will rerun the\n",
+ " node. Outputs of leaf nodes (nodes whose outputs are not connected to any\n",
+ " other nodes) will never be deleted independent of this parameter. \n",
+ " (possible values: ``true`` and ``false``; default value: ``true``)\n",
+ "\n",
+ "\n",
+ "- **`try_hard_link_datasink`**: When the DataSink is used to produce an orginized output file outside\n",
+ " of nipypes internal cache structure, a file system hard link will be\n",
+ " attempted first. A hard link allow multiple file paths to point to the\n",
+ " same physical storage location on disk if the conditions allow. By\n",
+ " refering to the same physical file on disk (instead of copying files\n",
+ " byte-by-byte) we can avoid unnecessary data duplication. If hard links\n",
+ " are not supported for the source or destination paths specified, then\n",
+ " a standard byte-by-byte copy is used. \n",
+ " (possible values: ``true`` and ``false``; default value: ``true``)\n",
+ "\n",
+ "\n",
+ "- **`use_relative_paths`**: Should the paths stored in results (and used to look for inputs)\n",
+ " be relative or absolute. Relative paths allow moving the whole\n",
+ " working directory around but may cause problems with\n",
+ " symlinks. \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`local_hash_check`**: Perform the hash check on the job submission machine. This option minimizes\n",
+ " the number of jobs submitted to a cluster engine or a multiprocessing pool\n",
+ " to only those that need to be rerun. \n",
+ " (possible values: ``true`` and ``false``; default value: ``true``)\n",
+ "\n",
+ "\n",
+ "- **`job_finished_timeout`**: When batch jobs are submitted through, SGE/PBS/Condor they could be killed\n",
+ " externally. Nipype checks to see if a results file exists to determine if\n",
+ " the node has completed. This timeout determines for how long this check is\n",
+ " done after a job finish is detected. (float in seconds; default value: 5)\n",
+ "\n",
+ "\n",
+ "- **`remove_node_directories`** (EXPERIMENTAL): Removes directories whose outputs have already been used\n",
+ " up. Doesn't work with IdentiInterface or any node that patches\n",
+ " data through (without copying) \n",
+ " (possible values: ``true`` and ``false``; default value: ``false``)\n",
+ "\n",
+ "\n",
+ "- **`stop_on_unknown_version`**: If this is set to True, an underlying interface will raise an error, when no\n",
+ " version information is available. Please notify developers or submit a patch.\n",
+ "\n",
+ "\n",
+ "- **`parameterize_dirs`**: If this is set to True, the node's output directory will contain full\n",
+ " parameterization of any iterable, otherwise parameterizations over 32\n",
+ " characters will be replaced by their hash. \n",
+ " (possible values: ``true`` and ``false``; default value: ``true``)\n",
+ "\n",
+ "\n",
+ "- **`poll_sleep_duration`**: This controls how long the job submission loop will sleep between submitting\n",
+ " all pending jobs and checking for job completion. To be nice to cluster\n",
+ " schedulers the default is set to 2 seconds.\n",
+ "\n",
+ "\n",
+ "- **`xvfb_max_wait`**: Maximum time (in seconds) to wait for Xvfb to start, if the _redirect_x\n",
+ " parameter of an Interface is True.\n",
+ "\n",
+ "\n",
+ "- **`crashfile_format`**: This option controls the file type of any crashfile generated. Pklz\n",
+ " crashfiles allow interactive debugging and rerunning of nodes, while text\n",
+ " crashfiles allow portability across machines and shorter load time. \n",
+ " (possible values: ``pklz`` and ``txt``; default value: ``pklz``)"
+ ]
+ },
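+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of working with these options programmatically (the option name used here is just one of those listed above), you can read and change a single value through the ``config`` object:\n",
+ "\n",
+ "```python\n",
+ "from nipype import config\n",
+ "\n",
+ "# read the current value of a single execution option\n",
+ "print(config.get('execution', 'crashfile_format'))\n",
+ "\n",
+ "# change it for the current Python session\n",
+ "config.set('execution', 'crashfile_format', 'txt')\n",
+ "```"
+ ]
+ },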
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Resource Monitor\n",
+ "\n",
+ "- **`enabled`**: Enables monitoring the resources occupation (possible values: ``true`` and\n",
+ " ``false``; default value: ``false``). All the following options will be\n",
+ " dismissed if the resource monitor is not enabled.\n",
+ "\n",
+ "\n",
+ "- **`sample_frequency`**: Sampling period (in seconds) between measurements of resources (memory, cpus)\n",
+ " being used by an interface \n",
+ " (default value: ``1``)\n",
+ "\n",
+ "\n",
+ "- **`summary_file`**: Indicates where the summary file collecting all profiling information from the\n",
+ " resource monitor should be stored after execution of a workflow.\n",
+ " The ``summary_file`` does not apply to interfaces run independently.\n",
+ " (unset by default, in which case the summary file will be written out to \n",
+ " ``/resource_monitor.json`` of the top-level workflow).\n",
+ "\n",
+ "\n",
+ "- **`summary_append`**: Append to an existing summary file (only applies to workflows). \n",
+ " (default value: ``true``, possible values: ``true`` or ``false``)."
+ ]
+ },
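+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a rough sketch, the resource monitor could be switched on for a session through the ``monitoring`` section of the configuration (the sampling frequency below is just an arbitrary example value):\n",
+ "\n",
+ "```python\n",
+ "from nipype import config\n",
+ "config.update_config({'monitoring': {'enabled': True,\n",
+ "                                     'sample_frequency': 3}})\n",
+ "```\n",
+ "\n",
+ "After running a workflow, the collected measurements should end up in the summary file described above."
+ ]
+ },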
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Example\n",
+ "\n",
+ " [logging]\n",
+ " workflow_level = DEBUG\n",
+ "\n",
+ " [execution]\n",
+ " stop_on_first_crash = true\n",
+ " hash_method = timestamp\n",
+ " display_variable = :1\n",
+ "\n",
+ " [monitoring]\n",
+ " enabled = false\n",
+ " \n",
+ "`Workflow.config` property has a form of a nested dictionary reflecting the structure of the `.cfg` file."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import Workflow\n",
+ "myworkflow = Workflow(name='myworkflow')\n",
+ "myworkflow.config['execution'] = {'stop_on_first_rerun': 'True',\n",
+ " 'hash_method': 'timestamp'}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can also directly set global config options in your workflow script. An\n",
+ "example is shown below. This needs to be called before you import the\n",
+ "pipeline or the logger. Otherwise logging level will not be reset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import config\n",
+ "cfg = dict(logging=dict(workflow_level = 'DEBUG'),\n",
+ " execution={'stop_on_first_crash': False,\n",
+ " 'hash_method': 'content'})\n",
+ "config.update_config(cfg)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Enabling logging to file\n",
+ "\n",
+ "By default, logging to file is disabled. One can enable and write the file to\n",
+ "a location of choice as in the example below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from nipype import config, logging\n",
+ "config.update_config({'logging': {'log_directory': os.getcwd(),\n",
+ " 'log_to_file': True}})\n",
+ "logging.update_logging(config)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The logging update line is necessary to change the behavior of logging such as\n",
+ "output directory, logging level, etc."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Debug configuration\n",
+ "\n",
+ "To enable debug mode, one can insert the following lines:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import config, logging\n",
+ "config.enable_debug_mode()\n",
+ "logging.update_logging(config)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this mode the following variables are set:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "config.set('execution', 'stop_on_first_crash', 'true')\n",
+ "config.set('execution', 'remove_unnecessary_outputs', 'false')\n",
+ "config.set('logging', 'workflow_level', 'DEBUG')\n",
+ "config.set('logging', 'interface_level', 'DEBUG')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Global, workflow & node level\n",
+ "\n",
+ "The configuration options can be changed globally (i.e. for all workflows), for just a workflow, or for just a node. The implementations look as follows (note that you should first create directories if you want to change `crashdump_dir` and `log_directory`):"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### At the global level:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import config, logging\n",
+ "import os\n",
+ "os.makedirs('/output/log_folder', exist_ok=True)\n",
+ "os.makedirs('/output/crash_folder', exist_ok=True)\n",
+ "\n",
+ "config_dict={'execution': {'remove_unnecessary_outputs': 'true',\n",
+ " 'keep_inputs': 'false',\n",
+ " 'poll_sleep_duration': '60',\n",
+ " 'stop_on_first_rerun': 'false',\n",
+ " 'hash_method': 'timestamp',\n",
+ " 'local_hash_check': 'true',\n",
+ " 'create_report': 'true',\n",
+ " 'crashdump_dir': '/output/crash_folder',\n",
+ " 'use_relative_paths': 'false',\n",
+ " 'job_finished_timeout': '5'},\n",
+ " 'logging': {'workflow_level': 'INFO',\n",
+ " 'filemanip_level': 'INFO',\n",
+ " 'interface_level': 'INFO',\n",
+ " 'log_directory': '/output/log_folder',\n",
+ " 'log_to_file': 'true'}}\n",
+ "config.update_config(config_dict)\n",
+ "logging.update_logging(config)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### At the workflow level:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import Workflow\n",
+ "wf = Workflow(name=\"config_test\")\n",
+ "\n",
+ "# Change execution parameters\n",
+ "wf.config['execution']['stop_on_first_crash'] = 'true'\n",
+ "\n",
+ "# Change logging parameters\n",
+ "wf.config['logging'] = {'workflow_level' : 'DEBUG',\n",
+ " 'filemanip_level' : 'DEBUG',\n",
+ " 'interface_level' : 'DEBUG',\n",
+ " 'log_to_file' : 'True',\n",
+ " 'log_directory' : '/output/log_folder'}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### At the node level:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import Node\n",
+ "from nipype.interfaces.fsl import BET\n",
+ "\n",
+ "bet = Node(BET(), name=\"config_test\")\n",
+ "\n",
+ "bet.config = {'execution': {'keep_unnecessary_outputs': 'false'}}"
+ ]
+ }
+ ],
+ "metadata": {
+ "anaconda-cloud": {},
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/basic_function_nodes.ipynb b/notebooks/basic_function_interface.ipynb
similarity index 52%
rename from notebooks/basic_function_nodes.ipynb
rename to notebooks/basic_function_interface.ipynb
index f195035..d2cb537 100644
--- a/notebooks/basic_function_nodes.ipynb
+++ b/notebooks/basic_function_interface.ipynb
@@ -4,34 +4,81 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Function Node\n",
+ "# Function Interface\n",
"\n",
"Satra once called the `Function` module, the \"do anything you want card\". Which is a perfect description. Because it allows you to put any code you want into an empty node, which you than can put in your workflow exactly where it needs to be.\n",
"\n",
+ "## A Simple Function Interface\n",
+ "\n",
"You might have already seen the `Function` module in the [example section in the Node tutorial](basic_nodes.ipynb#Example-of-a-simple-node). Let's take a closer look at it again."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The most important component of a working `Function` interface is a Python function. There are several ways to associate a function with a `Function` interface, but the most common way will involve functions you code yourself as part of your Nipype scripts. Consider the following function:"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
- "# Import Node and Function module\n",
- "from nipype import Node, Function\n",
- "\n",
"# Create a small example function\n",
"def add_two(x_input):\n",
- " return x_input + 2\n",
+ " return x_input + 2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This simple function takes a value, adds 2 to it, and returns that new value.\n",
+ "\n",
+ "Just as Nipype interfaces have inputs and outputs, Python functions have inputs, in the form of parameters or arguments, and outputs, in the form of their return values. When you define a Function interface object with an existing function, as in the case of ``add_two()`` above, you must pass the constructor information about the function's inputs, its outputs, and the function itself. For example,"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import Node and Function module\n",
+ "from nipype import Node, Function\n",
"\n",
"# Create Node\n",
"addtwo = Node(Function(input_names=[\"x_input\"],\n",
" output_names=[\"val_output\"],\n",
" function=add_two),\n",
- " name='add_node')\n",
- "\n",
- "addtwo.inputs.x_input =4\n",
- "addtwo.run()\n",
+ " name='add_node')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Then you can set the inputs and run just as you would with any other interface:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "addtwo.inputs.x_input = 4\n",
+ "addtwo.run()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
"addtwo.result.outputs"
]
},
@@ -49,11 +96,78 @@
"outputs": [],
"source": [
"addtwo = Node(Function(function=add_two), name='add_node')\n",
- "addtwo.inputs.x_input =4\n",
- "addtwo.run()\n",
+ "addtwo.inputs.x_input = 8\n",
+ "addtwo.run()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
"addtwo.result.outputs"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using External Packages\n",
+ "\n",
+ "Chances are, you will want to write functions that do more complicated processing, particularly using the growing stack of Python packages geared towards neuroimaging, such as [Nibabel](http://nipy.org/nibabel/), [Nipy](http://nipy.org/), or [PyMVPA](http://www.pymvpa.org/).\n",
+ "\n",
+ "While this is completely possible (and, indeed, an intended use of the Function interface), it does come with one important constraint. The function code you write is executed in a standalone environment, which means that any external functions or classes you use have to be imported within the function itself:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_n_trs(in_file):\n",
+ " import nibabel\n",
+ " f = nibabel.load(in_file)\n",
+ " return f.shape[-1]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Without explicitly importing Nibabel in the body of the function, this would fail.\n",
+ "\n",
+ "Alternatively, it is possible to provide a list of strings corresponding to the imports needed to execute a function as a parameter of the `Function` constructor. This allows for the use of external functions that do not import all external definitions inside the function body."
+ ]
+ },
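+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For example, a variant of the function above that relies on this `imports` parameter instead of an in-function import could be sketched as follows (the function and node names are arbitrary):\n",
+ "\n",
+ "```python\n",
+ "from nipype import Node, Function\n",
+ "\n",
+ "def get_n_trs_noimport(in_file):\n",
+ "    # nibabel is not imported here; it is made available via `imports` below\n",
+ "    f = nibabel.load(in_file)\n",
+ "    return f.shape[-1]\n",
+ "\n",
+ "get_trs = Node(Function(input_names=['in_file'],\n",
+ "                        output_names=['n_trs'],\n",
+ "                        function=get_n_trs_noimport,\n",
+ "                        imports=['import nibabel']),\n",
+ "               name='get_trs')\n",
+ "```"
+ ]
+ },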
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Advanced Use\n",
+ "\n",
+ "To use an existing function object (as we have been doing so far) with a Function interface, it must be passed to the constructor. However, it is also possible to dynamically set how a Function interface will process its inputs using the special ``function_str`` input.\n",
+ "\n",
+ "This input takes not a function object, but actually a single string that can be parsed to define a function. In the equivalent case to our example above, the string would be"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "add_two_str = \"def add_two(val):\\n return val + 2\\n\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Unlike when using a function object, this input can be set like any other, meaning that you could write a function that outputs different function strings depending on some run-time contingencies, and connect that output the the ``function_str`` input of a downstream Function interface."
+ ]
+ },
{
"cell_type": "markdown",
"metadata": {},
diff --git a/notebooks/basic_graph_visualization.ipynb b/notebooks/basic_graph_visualization.ipynb
index a574002..d5074d2 100644
--- a/notebooks/basic_graph_visualization.ipynb
+++ b/notebooks/basic_graph_visualization.ipynb
@@ -288,7 +288,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_interfaces.ipynb b/notebooks/basic_interfaces.ipynb
index aecd3ac..9ae7eb9 100644
--- a/notebooks/basic_interfaces.ipynb
+++ b/notebooks/basic_interfaces.ipynb
@@ -35,7 +35,7 @@
"
Do not have inputs/outputs, but expose them from the interfaces wrapped inside
\n",
" \n",
"
\n",
- "
Do not cache results (unless you use [interface caching](http://nipype.readthedocs.io/en/latest/users/caching_tutorial.html))
\n",
+ "
Do not cache results (unless you use [interface caching](advanced_interfaces_caching.ipynb))
\n",
"
Cache results
\n",
"
\n",
"
\n",
@@ -317,7 +317,7 @@
"\n",
"***Second***, a list of all possible input parameters.\n",
"\n",
- " Inputs::\n",
+ " Inputs:\n",
"\n",
" [Mandatory]\n",
" in_file: (an existing file name)\n",
@@ -417,7 +417,7 @@
"\n",
"And ***third***, a list of all possible output parameters.\n",
"\n",
- " Outputs::\n",
+ " Outputs:\n",
"\n",
" inskull_mask_file: (a file name)\n",
" path/name of inskull mask (if generated)\n",
diff --git a/notebooks/basic_iteration.ipynb b/notebooks/basic_iteration.ipynb
index 44c1471..dcf9fb8 100644
--- a/notebooks/basic_iteration.ipynb
+++ b/notebooks/basic_iteration.ipynb
@@ -4,15 +4,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
- "\n",
"# Iterables\n",
"\n",
"Some steps in a neuroimaging analysis are repetitive. Running the same preprocessing on multiple subjects or doing statistical inference on multiple files. To prevent the creation of multiple individual scripts, Nipype has as execution plugin for ``Workflow``, called **``iterables``**. \n",
"\n",
- "The main homepage has a [nice section](http://nipype.readthedocs.io/en/latest/users/mapnode_and_iterables.html) about ``MapNode`` and ``iterables`` if you want to learn more. Also, if you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out [synchronize and intersource](http://nipype.readthedocs.io/en/latest/users/joinnode_and_itersource.html#synchronize).\n",
+ "\n",
+ "\n",
+ "If you are interested in more advanced procedures, such as synchronizing multiple iterables or using conditional iterables, check out the `synchronize `and `intersource` section in the [`JoinNode`](basic_joinnodes.ipynb) notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Realistic example\n",
"\n",
- "For example, let's assume we have a workflow with two nodes, node (A) does simple skull stripping, and is followed by a node (B) that does isometric smoothing. Now, let's say, that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm and 16mm."
+ "Let's assume we have a workflow with two nodes, node (A) does simple skull stripping, and is followed by a node (B) that does isometric smoothing. Now, let's say, that we are curious about the effect of different smoothing kernels. Therefore, we want to run the smoothing node with FWHM set to 2mm, 8mm and 16mm."
]
},
{
@@ -383,7 +390,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_joinnodes.ipynb b/notebooks/basic_joinnodes.ipynb
index 90c4f6d..02d6627 100644
--- a/notebooks/basic_joinnodes.ipynb
+++ b/notebooks/basic_joinnodes.ipynb
@@ -4,11 +4,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
+ "# JoinNode, synchronize and itersource\n",
"\n",
- "# JoinNode\n",
+ "JoinNode have the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a `JoinNode` merges them back into on node. A `JoinNode` generalizes `MapNode` to operate in conjunction with an upstream `iterable` node to reassemble downstream results, e.g.:\n",
"\n",
- "JoinNode have the opposite effect of [iterables](basic_iteration.ipynb). Where `iterables` split up the execution workflow into many different branches, a JoinNode merges them back into on node. For a more detailed explanation, check out [JoinNode, synchronize and itersource](http://nipype.readthedocs.io/en/latest/users/joinnode_and_itersource.html) from the main homepage."
+ ""
]
},
{
@@ -56,14 +56,190 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "As you can see, setting up a ``JoinNode`` is rather simple. The only difference to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the JoinNode where the information to join will be entering the node."
+ "As you can see, setting up a ``JoinNode`` is rather simple. The only difference to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies from which node the information to join is coming and the ``joinfield`` specifies the input field of the `JoinNode` where the information to join will be entering the node."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## More realistic example\n",
+ "This example assumes that interface `A` has one output *subject*, interface `B` has two inputs *subject* and *in_file* and one output *out_file*, interface `C` has one input *in_file* and one output *out_file*, and interface `D` has one list input *in_files*. The *images* variable is a list of three input image file names.\n",
+ "\n",
+ "As with *iterables* and the `MapNode` *iterfield*, the *joinfield* can be a list of fields. Thus, the declaration in the previous example is equivalent to the following:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "d = JoinNode(interface=D(),\n",
+ " joinsource=\"b\",\n",
+ " joinfield=[\"in_files\"],\n",
+ " name=\"d\")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The *joinfield* defaults to all of the JoinNode input fields, so the declaration is also equivalent to the following:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "d = JoinNode(interface=D(),\n",
+ " joinsource=\"b\",\n",
+ " name=\"d\")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this example, the node `C` *out_file* outputs are collected into the `JoinNode` `D` *in_files* input list. The *in_files* order is the same as the upstream `B` node iterables order.\n",
+ "\n",
+ "The `JoinNode` input can be filtered for unique values by specifying the *unique* flag, e.g.:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "d = JoinNode(interface=D(),\n",
+ " joinsource=\"b\",\n",
+ " unique=True,\n",
+ " name=\"d\")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## `synchronize`\n",
+ "\n",
+ "The `Node` `iterables` parameter can be be a single field or a list of fields. If it is a list, then execution is performed over all permutations of the list items. For example:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "b.iterables = [(\"m\", [1, 2]), (\"n\", [3, 4])]\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "results in the execution graph:\n",
+ "\n",
+ "\n",
+ "\n",
+ "where `B13` has inputs *m* = 1, *n* = 3, `B14` has inputs *m* = 1, *n* = 4, etc.\n",
+ "\n",
+ "The `synchronize` parameter synchronizes the iterables lists, e.g.:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "b.iterables = [(\"m\", [1, 2]), (\"n\", [3, 4])]\n",
+ "b.synchronize = True\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "results in the execution graph:\n",
+ "\n",
+ "\n",
+ "\n",
+ "where the iterable inputs are selected in lock-step by index, i.e.:\n",
+ "\n",
+ " (*m*, *n*) = (1, 3) and (2, 4)\n",
+ "\n",
+ "for `B13` and `B24`, resp."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## `itersource`\n",
+ "\n",
+ "The `itersource` feature allows you to expand a downstream `iterable` based on a mapping of an upstream `iterable`. For example:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "```python\n",
+ "a = Node(interface=A(), name=\"a\")\n",
+ "b = Node(interface=B(), name=\"b\")\n",
+ "b.iterables = (\"m\", [1, 2])\n",
+ "c = Node(interface=C(), name=\"c\")\n",
+ "d = Node(interface=D(), name=\"d\")\n",
+ "d.itersource = (\"b\", \"m\")\n",
+ "d.iterables = [(\"n\", {1:[3,4], 2:[5,6]})]\n",
+ "my_workflow = Workflow(name=\"my_workflow\")\n",
+ "my_workflow.connect([(a,b,[('out_file','in_file')]),\n",
+ " (b,c,[('out_file','in_file')])\n",
+ " (c,d,[('out_file','in_file')])\n",
+ " ])\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "results in the execution graph:\n",
+ "\n",
+ "\n",
+ "\n",
+ "In this example, all interfaces have input `in_file` and output `out_file`. In addition, interface `B` has input *m* and interface `D` has input *n*. A Python dictionary associates the `B` node input value with the downstream `D` node *n* iterable values.\n",
+ "\n",
+ "This example can be extended with a summary `JoinNode`:\n",
+ "\n",
+ "```python\n",
+ "e = JoinNode(interface=E(), joinsource=\"d\",\n",
+ " joinfield=\"in_files\", name=\"e\")\n",
+ "my_workflow.connect(d, 'out_file',\n",
+ " e, 'in_files')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "resulting in the graph:\n",
+ "\n",
+ "\n",
+ "\n",
+ "The combination of `iterables`, `MapNode`, `JoinNode`, `synchronize` and `itersource` enables the creation of arbitrarily complex workflow graphs. The astute workflow builder will recognize that this flexibility is both a blessing and a curse. These advanced features are handy additions to the Nipype toolkit when used judiciously."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## More realistic `JoinNode` example\n",
"\n",
"Let's consider another example where we have one node that iterates over 3 different numbers and generates randome numbers. Another node joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again."
]
@@ -239,7 +415,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden",
+ "solution2": "shown",
"solution2_first": true
},
"outputs": [],
@@ -251,7 +427,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -263,7 +439,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -286,7 +462,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -331,7 +507,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -353,7 +529,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -369,7 +545,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -384,7 +560,7 @@
"execution_count": null,
"metadata": {
"scrolled": false,
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -396,7 +572,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -408,7 +584,7 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
- "solution2": "hidden"
+ "solution2": "shown"
},
"outputs": [],
"source": [
@@ -434,7 +610,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_mapnodes.ipynb b/notebooks/basic_mapnodes.ipynb
index b7a552a..452abb1 100644
--- a/notebooks/basic_mapnodes.ipynb
+++ b/notebooks/basic_mapnodes.ipynb
@@ -4,11 +4,27 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
- "\n",
"# MapNode\n",
"\n",
- "If you want to iterate over a list of inputs, but need to feed all iterated outputs afterwards as one input (an array) to the next node, you need to use a **``MapNode``**. A ``MapNode`` is quite similar to a normal ``Node``, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. (The main homepage has a [nice section](http://nipype.readthedocs.io/en/latest/users/mapnode_and_iterables.html) about ``MapNode`` and ``iterables`` if you want to learn more).\n",
+ "If you want to iterate over a list of inputs, but need to feed all iterated outputs afterwards as one input (an array) to the next node, you need to use a **``MapNode``**. A ``MapNode`` is quite similar to a normal ``Node``, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs.\n",
+ "\n",
+ "Imagine that you have a list of items (lets say files) and you want to execute the same node on them (for example some smoothing or masking). Some nodes accept multiple files and do exactly the same thing on them, but some don't (they expect only one file). `MapNode` can solve this problem. Imagine you have the following workflow:\n",
+ "\n",
+ "\n",
+ "\n",
+ "Node `A` outputs a list of files, but node `B` accepts only one file. Additionally `C` expects a list of files. What you would like is to run `B` for every file in the output of `A` and collect the results as a list and feed it to `C`. Something like this:\n",
+ "\n",
+ "```python\n",
+ "from nipype import Node, MapNode, Workflow\n",
+ "a = Node(interface=A(), name=\"a\")\n",
+ "b = MapNode(interface=B(), name=\"b\", iterfield=['in_file'])\n",
+ "c = Node(interface=C(), name=\"c\")\n",
+ "\n",
+ "my_workflow = Workflow(name=\"my_workflow\")\n",
+ "my_workflow.connect([(a,b,[('out_files','in_file')]),\n",
+ " (b,c,[('out_file','in_files')])\n",
+ " ])\n",
+ "```\n",
"\n",
"Let's demonstrate this with a simple function interface:"
]
@@ -45,7 +61,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a `MapNode`.\n",
+ "What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow in multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a `MapNode`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## `iterfield`\n",
"\n",
"The `MapNode` constructor has a field called `iterfield`, which tells it what inputs should be expecting a list."
]
@@ -67,7 +90,16 @@
"outputs": [],
"source": [
"square_node.inputs.x = [0, 1, 2, 3]\n",
- "square_node.run().outputs.f_x"
+ "res = square_node.run()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "res.outputs.f_x"
]
},
{
@@ -97,7 +129,16 @@
"power_node = MapNode(power, name=\"power\", iterfield=[\"x\", \"y\"])\n",
"power_node.inputs.x = [0, 1, 2, 3]\n",
"power_node.inputs.y = [0, 1, 2, 3]\n",
- "print(power_node.run().outputs.f_xy)"
+ "res = power_node.run()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(res.outputs.f_xy)"
]
},
{
@@ -116,7 +157,16 @@
"power_node = MapNode(power, name=\"power\", iterfield=[\"x\"])\n",
"power_node.inputs.x = [0, 1, 2, 3]\n",
"power_node.inputs.y = 3\n",
- "print(power_node.run().outputs.f_xy)"
+ "res = power_node.run()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(res.outputs.f_xy)"
]
},
{
@@ -126,6 +176,13 @@
"As in the case of `iterables`, each underlying `MapNode` execution can happen in **parallel**. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you processes large amounts of data efficiently and reproducibly."
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In more advanced applications it is useful to be able to iterate over items of nested lists (for example ``[[1,2],[3,4]]``). MapNode allows you to do this with the \"nested=True\" parameter. Outputs will preserve the same nested structure as the inputs."
+ ]
+ },
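+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A small sketch of what this might look like, using a squaring function like the one above (values chosen arbitrarily):\n",
+ "\n",
+ "```python\n",
+ "from nipype import MapNode, Function\n",
+ "\n",
+ "def square_func(x):\n",
+ "    return x ** 2\n",
+ "\n",
+ "square = MapNode(Function(input_names=['x'],\n",
+ "                          output_names=['f_x'],\n",
+ "                          function=square_func),\n",
+ "                 name='square',\n",
+ "                 iterfield=['x'],\n",
+ "                 nested=True)\n",
+ "square.inputs.x = [[1, 2], [3, 4]]\n",
+ "res = square.run()\n",
+ "res.outputs.f_x  # [[1, 4], [9, 16]]\n",
+ "```"
+ ]
+ },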
{
"cell_type": "markdown",
"metadata": {},
@@ -149,7 +206,7 @@
"source": [
"from nipype.algorithms.misc import Gunzip\n",
"from nipype.interfaces.spm import Realign\n",
- "from nipype.pipeline.engine import Node, MapNode, Workflow\n",
+ "from nipype import Node, MapNode, Workflow\n",
"\n",
"# Here we specify a list of files (for this tutorial, we just add the same file twice)\n",
"files = ['/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz',\n",
@@ -390,7 +447,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_model_specification_fmri.ipynb b/notebooks/basic_model_specification_fmri.ipynb
index 0581446..ff88062 100644
--- a/notebooks/basic_model_specification_fmri.ipynb
+++ b/notebooks/basic_model_specification_fmri.ipynb
@@ -6,26 +6,82 @@
"source": [
"# Model Specification for 1st-Level fMRI Analysis\n",
"\n",
- "Nipype provides also an interfaces to create a first level Model for an fMRI analysis. Such a model is needed to specify the study specific information, such as **condition**, their **onsets** and **durations**. For more information, make sure to check out [Model Specificaton](http://nipype.readthedocs.io/en/latest/users/model_specification.html) and [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html)"
+ "Nipype provides also an interfaces to create a first level Model for an fMRI analysis. Such a model is needed to specify the study specific information, such as **condition**, their **onsets** and **durations**. For more information, make sure to check out [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Simple Example\n",
+ "## General purpose model specification\n",
"\n",
- "Let's consider a simple experiment, where we have three different stimuli such as ``'faces'``, ``'houses'`` and ``'scrambled pix'``. Now each of those three conditions has different stimuli onsets, but all of them have a stimuli presentation duration of 3 seconds.\n",
+ "The `SpecifyModel` provides a generic mechanism for model specification. A mandatory input called `subject_info` provides paradigm specification for each run corresponding to a subject. This has to be in the form of a `Bunch` or a list of `Bunch` objects (one for each run). Each `Bunch` object contains the following attributes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Required for most designs\n",
"\n",
- "So to summarize:\n",
+ "- **`conditions`** : list of names\n",
"\n",
- " conditions = ['faces', 'houses', 'scrambled pix']\n",
- " onsets = [[0, 30, 60, 90],\n",
- " [10, 40, 70, 100],\n",
- " [20, 50, 80, 110]]\n",
- " durations = [[3], [3], [3]]\n",
- " \n",
- "The way we would create this model with Nipype is almsot as simple as that. The only step that is missing is to put this all into a ``Bunch`` object. This can be done as follows:"
+ "\n",
+ "- **`onsets`** : lists of onsets corresponding to each condition\n",
+ "\n",
+ "\n",
+ "- **`durations`** : lists of durations corresponding to each condition. Should be left to a single 0 if all events are being modelled as impulses."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Optional\n",
+ "\n",
+ "- **`regressor_names`**: list of names corresponding to each column. Should be None if automatically assigned.\n",
+ "\n",
+ "\n",
+ "- **`regressors`**: list of lists. values for each regressor - must correspond to the number of volumes in the functional run\n",
+ "\n",
+ "\n",
+ "- **`amplitudes`**: lists of amplitudes for each event. This will be ignored by SPM's Level1Design.\n",
+ "\n",
+ "\n",
+ "The following two (`tmod`, `pmod`) will be ignored by any `Level1Design` class other than `SPM`:\n",
+ "\n",
+ "- **`tmod`**: lists of conditions that should be temporally modulated. Should default to None if not being used.\n",
+ "\n",
+ "- **`pmod`**: list of Bunch corresponding to conditions\n",
+ " - `name`: name of parametric modulator\n",
+ " - `param`: values of the modulator\n",
+ " - `poly`: degree of modulation"
+ ]
+ },
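+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a rough sketch, a parametric modulator for the first of three conditions could be encoded like this (the modulator name and values are invented for illustration; each condition gets either a `Bunch` or `None`):\n",
+ "\n",
+ "```python\n",
+ "from nipype.interfaces.base import Bunch\n",
+ "\n",
+ "pmod = [Bunch(name=['reaction_time'], param=[[0.8, 1.2, 0.5]], poly=[1]),\n",
+ "        None,\n",
+ "        None]\n",
+ "```"
+ ]
+ },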
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Together with this information, one needs to specify:\n",
+ "\n",
+ "- whether the durations and event onsets are specified in terms of scan volumes or secs.\n",
+ "\n",
+ "- the high-pass filter cutoff,\n",
+ "\n",
+ "- the repetition time per scan\n",
+ "\n",
+ "- functional data files corresponding to each run.\n",
+ "\n",
+ "Optionally you can specify realignment parameters, outlier indices. Outlier files should contain a list of numbers, one per row indicating which scans should not be included in the analysis. The numbers are 0-based"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Example\n",
+ "\n",
+ "An example Bunch definition:"
]
},
{
@@ -35,26 +91,51 @@
"outputs": [],
"source": [
"from nipype.interfaces.base import Bunch\n",
+ "condnames = ['Tapping', 'Speaking', 'Yawning']\n",
+ "event_onsets = [[0, 10, 50],\n",
+ " [20, 60, 80],\n",
+ " [30, 40, 70]]\n",
+ "durations = [[0],[0],[0]]\n",
"\n",
- "conditions = ['faces', 'houses', 'scrambled pix']\n",
- "onsets = [[0, 30, 60, 90],\n",
- " [10, 40, 70, 100],\n",
- " [20, 50, 80, 110]]\n",
- "durations = [[3], [3], [3]]\n",
- "\n",
- "subject_info = Bunch(conditions=conditions,\n",
- " onsets=onsets,\n",
- " durations=durations)"
+ "subject_info = Bunch(conditions=condnames,\n",
+ " onsets = event_onsets,\n",
+ " durations = durations)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "subject_info"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "It's also possible to specify additional regressors. For this you need to additionally specify:\n",
+ "## Input via textfile\n",
+ "\n",
+ "Alternatively, you can provide condition, onset, duration and amplitude\n",
+ "information through event files. The event files have to be in 1, 2 or 3\n",
+ "column format with the columns corresponding to Onsets, Durations and\n",
+ "Amplitudes and they have to have the name event_name.run\n",
+ "e.g.: `Words.run001.txt`.\n",
+ " \n",
+ "The event_name part will be used to create the condition names. `Words.run001.txt` may look like:\n",
+ "\n",
+ " # Word Onsets Durations\n",
+ " 0 10\n",
+ " 20 10\n",
+ " ...\n",
+ "\n",
+ "or with amplitudes:\n",
"\n",
- "- **``regressors``**: list of regressors that you want to include in the model (must correspond to the number of volumes in the functional run)\n",
- "- **``regressor_names``**: name of the regressors that you want to include"
+ " # Word Onsets Durations Amplitudes\n",
+ " 0 10 1\n",
+ " 20 10 1\n",
+ " ..."
]
},
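+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In that case, instead of building a ``Bunch``, you point ``SpecifyModel`` at the event files, one list of files per functional run (sketch only; `Words.run001.txt` is the hypothetical file from above):\n",
+ "\n",
+ "```python\n",
+ "modelspec.inputs.event_files = [['Words.run001.txt']]\n",
+ "```"
+ ]
+ },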
{
@@ -139,6 +220,17 @@
" durations=durations)\n",
"subject_info.items()"
]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Sparse model specification\n",
+ "\n",
+ "In addition to standard models, `SpecifySparseModel` allows model generation for sparse and sparse-clustered acquisition experiments. Details of the model generation and utility are provided in [Ghosh et al. (2009) OHBM 2009](http://dl.dropbox.com/u/363467/OHBM2009_HRF.pdf)\n",
+ "\n",
+ "**!! Link is broken !!**"
+ ]
}
],
"metadata": {
diff --git a/notebooks/basic_nodes.ipynb b/notebooks/basic_nodes.ipynb
index 8625283..2cdbe76 100644
--- a/notebooks/basic_nodes.ipynb
+++ b/notebooks/basic_nodes.ipynb
@@ -300,7 +300,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_plugins.ipynb b/notebooks/basic_plugins.ipynb
index ffabed8..0dd3b24 100644
--- a/notebooks/basic_plugins.ipynb
+++ b/notebooks/basic_plugins.ipynb
@@ -4,67 +4,339 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Execution Plugins\n",
+ "# Using Nipype Plugins\n",
"\n",
- "As you learned in the [Workflow](basic_workflow.ipynb) tutorial, a workflow is executed with the ``run`` method. For example:\n",
+ "The workflow engine supports a plugin architecture for workflow execution. The available plugins allow local and distributed execution of workflows and debugging. Each available plugin is described below.\n",
"\n",
- " workflow.run()\n",
+ "Current plugins are available for Linear, Multiprocessing, [IPython](https://ipython.org/) distributed processing platforms and for direct processing on [SGE](http://www.oracle.com/us/products/tools/oracle-grid-engine-075549.html), [PBS](http://www.clusterresources.com/products/torque-resource-manager.php), [HTCondor](http://www.cs.wisc.edu/htcondor/), [LSF](http://www.platform.com/Products/platform-lsf), `OAR`, and [SLURM](http://slurm.schedmd.com/). We anticipate future plugins for the [Soma](http://brainvisa.info/soma/soma-workflow/) workflow.\n",
"\n",
- "Whenever you execute a workflow like this, it will be executed in serial order. This means that no node will be executed in parallel, even if they are completely independent of each other. Now, while this might be preferable under certain circumstances, we usually want to executed workflows in parallel. For this, Nipype provides many different plugins."
+ "
\n",
+ "**Note**: \n",
+ "The current distributed processing plugins rely on the availability of a shared filesystem across computational nodes. \n",
+ "A variety of config options can control how execution behaves in this distributed context. These are listed later on in this page.\n",
+ "
\n",
+ "\n",
+ "All plugins can be executed with:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin=PLUGIN_NAME, plugin_args=ARGS_DICT)\n",
+ "```\n",
+ "\n",
+ "Optional arguments:\n",
+ "\n",
+ " status_callback : a function handle\n",
+ " max_jobs : maximum number of concurrent jobs\n",
+ " max_tries : number of times to try submitting a job\n",
+ " retry_timeout : amount of time to wait between tries\n",
+ "\n",
+ "
\n",
+ "**Note**: Except for the status_callback, the remaining arguments only apply to the distributed plugins: MultiProc / IPython(X) / SGE / PBS / HTCondor / HTCondorDAGMan / LSF\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Debug\n",
+ "\n",
+ "This plugin provides a simple mechanism to debug certain components of a workflow without executing any node.\n",
+ "\n",
+ "Mandatory arguments:\n",
+ "\n",
+ " callable : A function handle that receives as arguments a node and a graph\n",
+ "\n",
+ "The function callable will called for every node from a topological sort of the execution graph."
+ ]
+ },
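+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of such a callable (assuming an already defined `workflow` object; the function name is arbitrary):\n",
+ "\n",
+ "```python\n",
+ "def inspect_node(node, graph):\n",
+ "    # report each node that would be executed, without actually running it\n",
+ "    print('Would run: %s' % node.fullname)\n",
+ "\n",
+ "workflow.run(plugin='Debug', plugin_args={'callable': inspect_node})\n",
+ "```"
+ ]
+ },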
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Linear\n",
+ "\n",
+ "This plugin runs the workflow one node at a time in a single process locally. The order of the nodes is determined by a topological sort of the workflow:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='Linear')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## MultiProc\n",
+ "\n",
+ "Uses the [Python](http://www.python.org/) multiprocessing library to distribute jobs as new processes on a local system.\n",
+ "\n",
+ "Optional arguments:\n",
+ "\n",
+ "- `n_procs`: Number of processes to launch in parallel, if not set number of processors/threads will be automatically detected\n",
+ "\n",
+ "- `memory_gb`: Total memory available to be shared by all simultaneous tasks currently running, if not set it will be automatically set to 90% of system RAM.\n",
+ "\n",
+ "- `raise_insufficient`: Raise exception when the estimated resources of a node exceed the total amount of resources available (memory and threads), when ``False`` (default), only a warning will be issued.\n",
+ "\n",
+ "- `maxtasksperchild`: number of nodes to run on each process before refreshing the worker (default: 10).\n",
+ " \n",
+ "\n",
+ "To distribute processing on a multicore machine, simply call:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='MultiProc')\n",
+ "```\n",
+ "\n",
+ "This will use all available CPUs. If on the other hand you would like to restrict the number of used resources (to say 2 CPUs), you can call:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='MultiProc', plugin_args={'n_procs' : 2}\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## IPython\n",
+ "\n",
+ "This plugin provide access to distributed computing using [IPython](https://ipython.org/) parallel machinery.\n",
+ "\n",
+ "
\n",
+ "**Note**: \n",
+ "Please read the [IPython](https://ipython.org/) documentation to determine how to setup your cluster for distributed processing. This typically involves calling ipcluster.\n",
+ "
\n",
+ "\n",
+ "Once the clients have been started, any pipeline executed with:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='IPython')\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## SGE/PBS\n",
+ "\n",
+ "In order to use nipype with [SGE](http://www.oracle.com/us/products/tools/oracle-grid-engine-075549.html) or [PBS](http://www.clusterresources.com/products/torque-resource-manager.php) you simply need to call:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='SGE')\n",
+ "workflow.run(plugin='PBS')\n",
+ "```\n",
+ "\n",
+ "Optional arguments:\n",
+ "\n",
+ " template: custom template file to use\n",
+ " qsub_args: any other command line args to be passed to qsub.\n",
+ " max_jobname_len: (PBS only) maximum length of the job name. Default 15.\n",
+ "\n",
+ "For example, the following snippet executes the workflow on myqueue with a custom template:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='SGE',\n",
+ " plugin_args=dict(template='mytemplate.sh',\n",
+ " qsub_args='-q myqueue')\n",
+ "```\n",
+ "\n",
+ "In addition to overall workflow configuration, you can use node level\n",
+ "configuration for PBS/SGE:\n",
+ "\n",
+ "```python\n",
+ "node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3'}\n",
+ "```\n",
+ "\n",
+ "this would apply only to the node and is useful in situations, where a particular node might use more resources than other nodes in a workflow.\n",
+ "\n",
+ "
\n",
+ "**Note**: Setting the keyword `overwrite` would overwrite any global configuration with this local configuration: \n",
+ "```node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}```\n",
+ "
"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Local execution\n",
+ "### SGEGraph\n",
+ "\n",
+ "SGEGraph is an execution plugin working with Sun Grid Engine that allows for submitting entire graph of dependent jobs at once. This way Nipype does not need to run a monitoring process - SGE takes care of this. The use of SGEGraph is preferred over SGE since the latter adds unnecessary load on the submit machine.\n",
+ "\n",
+ "
\n",
+ "**Note**: When rerunning unfinished workflows using SGEGraph you may decide not to submit jobs for Nodes that previously finished running. This can speed up execution, but new or modified inputs that would previously trigger a Node to rerun will be ignored. The following option turns on this functionality: \n",
+ "```workflow.run(plugin='SGEGraph', plugin_args = {'dont_resubmit_completed_jobs': True})```\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## LSF\n",
+ "\n",
+ "Submitting via LSF is almost identical to SGE above above except for the optional arguments field:\n",
"\n",
- "### ``Linear`` Plugin\n",
+ "```python\n",
+ "workflow.run(plugin='LSF')\n",
+ "```\n",
"\n",
- "If you want to run your workflow in a linear fashion, just use the following code:\n",
+ "Optional arguments:\n",
"\n",
- " workflow.run(plugin='Linear')"
+ " template: custom template file to use\n",
+ " bsub_args: any other command line args to be passed to bsub."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "### ``MultiProc`` Plugin\n",
+ "## SLURM\n",
"\n",
- "The easiest way to executed a workflow locally in parallel is the ``MultiProc`` plugin:\n",
+ "Submitting via SLURM is almost identical to SGE above except for the optional arguments field:\n",
"\n",
- " workflow.run(plugin='MultiProc', plugin_args={'n_procs': 4})\n",
+ "```python\n",
+ "workflow.run(plugin='SLURM')\n",
+ "```\n",
"\n",
- "The additional plugin argument ``n_procs``, specifies how many cores should be used for the parallel execution. In this case, it's 4.\n",
+ "Optional arguments:\n",
"\n",
- "The `MultiProc` plugin uses the [multiprocessing](http://docs.python.org/library/multiprocessing.html) package in the standard library, and is the only parallel plugin that is guaranteed to work right out of the box."
+ " template: custom template file to use\n",
+ " sbatch_args: any other command line args to be passed to bsub.\n",
+ " jobid_re: regular expression for custom job submission id search"
]
},
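+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For example, extra ``sbatch`` arguments can be passed the same way (the partition name `mypartition` below is only an illustration):\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='SLURM',\n",
+ "             plugin_args=dict(sbatch_args='--partition=mypartition'))\n",
+ "```"
+ ]
+ },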
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Cluster execution\n",
+ "### SLURMGraph\n",
+ "\n",
+ "SLURMGraph is an execution plugin working with SLURM that allows for submitting an entire graph of dependent jobs at once. This way Nipype does not need to run a monitoring process - SLURM takes care of this. The use of the SLURMGraph plugin is preferred over the vanilla SLURM plugin since the latter adds unnecessary load on the submit machine.\n",
+ "\n",
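+ "Usage follows the same pattern as the SLURM plugin:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='SLURMGraph')\n",
+ "```\n",
+ "\n",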
+ "**Note**: When rerunning unfinished workflows using SLURMGraph you may decide not to submit jobs for Nodes that previously finished running. This can speed up execution, but new or modified inputs that would previously trigger a Node to rerun will be ignored. The following option turns on this functionality: \n",
+ "```workflow.run(plugin='SLURMGraph', plugin_args = {'dont_resubmit_completed_jobs': True})```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## HTCondor\n",
+ "\n",
+ "### DAGMan\n",
+ "\n",
+ "With its [DAGMan](http://research.cs.wisc.edu/htcondor/dagman/dagman.html) component [HTCondor](http://www.cs.wisc.edu/htcondor/) (previously Condor) allows for submitting entire graphs of dependent jobs at once (similar to SGEGraph and SLURMGraph). With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to submit complete workflows directly and in a single step. Consequently, and in contrast to other plug-ins, workflow execution returns almost instantaneously -- Nipype is only used to generate the workflow graph, while job scheduling and dependency resolution are entirely managed by [HTCondor](http://www.cs.wisc.edu/htcondor/).\n",
+ "\n",
+ "Please note that although [DAGMan](http://research.cs.wisc.edu/htcondor/dagman/dagman.html) supports specification of data dependencies as well as data provisioning on compute nodes this functionality is currently not supported by this plug-in. As with all other batch systems supported by Nipype, only HTCondor pools with a shared file system can be used to process Nipype workflows.\n",
+ "\n",
+ "Workflow execution with HTCondor DAGMan is done by calling:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='CondorDAGMan')\n",
+ "```\n",
+ "\n",
+ "Job execution behavior can be tweaked with the following optional plug-in arguments. The value of most arguments can be a literal string or a filename, where in the latter case the content of the file will be used as the argument value:\n",
+ "\n",
+ "- `submit_template` : submit spec template for individual jobs in a DAG (see CondorDAGManPlugin.default_submit_template for the default).\n",
+ "- `initial_specs` : additional submit specs that are prepended to any job's submit file\n",
+ "- `override_specs` : additional submit specs that are appended to any job's submit file\n",
+ "- `wrapper_cmd` : path to an executable that will be started instead of a node script. This is useful for wrapper scripts that execute certain functionality prior to or after a node runs. If this option is given the wrapper command is called with the respective Python executable and the path to the node script as final arguments\n",
+ "- `wrapper_args` : optional additional arguments to a wrapper command\n",
+ "- `dagman_args` : arguments to be prepended to the job execution script in the dagman call\n",
+ "- `block` : if True the plugin call will block until Condor has finished processing the entire workflow (default: False)\n",
+ "\n",
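+ "For instance, one might cap the number of jobs DAGMan submits at once and block until the workflow finishes (the ``-maxjobs`` value below is purely illustrative):\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='CondorDAGMan',\n",
+ "             plugin_args=dict(dagman_args='-maxjobs 10', block=True))\n",
+ "```\n",
+ "\n",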
+ "Please see the [HTCondor documentation](http://research.cs.wisc.edu/htcondor/manual) for details on possible configuration options and command line arguments.\n",
+ "\n",
+ "Using the ``wrapper_cmd`` argument it is possible to combine Nipype workflow execution with checkpoint/migration functionality offered by, for example, [DMTCP](http://dmtcp.sourceforge.net/). This is especially useful in the case of workflows with long-running nodes, such as Freesurfer's recon-all pipeline, where Condor's job prioritization algorithm could lead to jobs being evicted from compute nodes in order to maximize overall throughput. With checkpoint/migration enabled such a job would be checkpointed prior to eviction and resume work from the checkpointed state after being rescheduled -- instead of restarting from scratch.\n",
+ "\n",
+ "On a Debian system, executing a workflow with support for checkpoint/migration for all nodes could look like this:\n",
+ "\n",
+ "```python\n",
+ "# define common parameters\n",
+ "dmtcp_hdr = \"\"\"\n",
+ "should_transfer_files = YES\n",
+ "when_to_transfer_output = ON_EXIT_OR_EVICT\n",
+ "kill_sig = 2\n",
+ "environment = DMTCP_TMPDIR=./;JALIB_STDERR_PATH=/dev/null;DMTCP_PREFIX_ID=$(CLUSTER)_$(PROCESS)\n",
+ "\"\"\"\n",
+ "shim_args = \"--log %(basename)s.shimlog --stdout %(basename)s.shimout --stderr %(basename)s.shimerr\"\n",
+ "# run workflow\n",
+ "workflow.run(\n",
+ " plugin='CondorDAGMan',\n",
+ " plugin_args=dict(initial_specs=dmtcp_hdr,\n",
+ " wrapper_cmd='/usr/lib/condor/shim_dmtcp',\n",
+ " wrapper_args=shim_args)\n",
+ " )\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## OAR\n",
+ "\n",
+ "In order to use nipype with OAR you simply need to call:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='OAR')\n",
+ "```\n",
+ "\n",
+ "Optional arguments:\n",
+ "\n",
+ " template: custom template file to use\n",
+ "    oarsub_args: any other command line args to be passed to oarsub.\n",
+ "    max_jobname_len: maximum length of the job name. Default 15.\n",
+ "\n",
+ "For example, the following snippet executes the workflow on myqueue with\n",
+ "a custom template:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='OAR',\n",
+ " plugin_args=dict(template='mytemplate.sh',\n",
+ "                              oarsub_args='-q myqueue'))\n",
+ "```\n",
+ "\n",
+ "In addition to overall workflow configuration, you can use node level configuration for OAR:\n",
+ "\n",
+ "```python\n",
+ "node.plugin_args = {'overwrite': True, 'oarsub_args': '-l \"nodes=1/cores=3\"'}\n",
+ "```\n",
+ "\n",
+ "This would apply only to the node and is useful in situations where a particular node might use more resources than other nodes in a workflow. You need to set the 'overwrite' flag to bypass the general settings template you defined for the other nodes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### ``qsub`` emulation\n",
+ "\n",
+ "**Note**: This plug-in is deprecated and users should migrate to the more robust and more versatile ``CondorDAGMan`` plug-in.\n",
+ "\n",
+ "Despite the differences between HTCondor and SGE-like batch systems the plugin usage (incl. supported arguments) is almost identical. The HTCondor plugin relies on a ``qsub`` emulation script for HTCondor, called ``condor_qsub`` that can be obtained from a [Git repository on git.debian.org](http://anonscm.debian.org/gitweb/?p=pkg-exppsy/condor.git;a=blob_plain;f=debian/condor_qsub;hb=HEAD). This script is currently not shipped with a standard HTCondor distribution, but is included in the HTCondor package from http://neuro.debian.net. It is sufficient to download this script and install it in any location on a system that is included in the ``PATH`` configuration.\n",
+ "\n",
+ "Running a workflow in an HTCondor pool is done by calling:\n",
+ "\n",
+ "```python\n",
+ "workflow.run(plugin='Condor')\n",
+ "```\n",
"\n",
- "There are many different plugins to run Nipype on a cluster, such as: ``PBS``, ``SGE``, ``LSF``, ``Condor`` and ``IPython``. Implementing them is as easy as ``'MultiProc'``.\n",
+ "The plugin supports a limited set of qsub arguments (``qsub_args``) that cover the most common use cases. The ``condor_qsub`` emulation script translates qsub arguments into the corresponding HTCondor terminology and handles the actual job submission. For details on supported options see the manpage of ``condor_qsub``.\n",
"\n",
- " workflow.run('PBS', plugin_args={'qsub_args': '-q many'})\n",
- " workflow.run('SGE', plugin_args={'qsub_args': '-q many'})\n",
- " workflow.run('LSF', plugin_args={'qsub_args': '-q many'})\n",
- " workflow.run('Condor')\n",
- " workflow.run('IPython')\n",
- " \n",
- " workflow.run('PBSGraph', plugin_args={'qsub_args': '-q many'})\n",
- " workflow.run('SGEGraph', plugin_args={'qsub_args': '-q many'})\n",
- " workflow.run('CondorDAGMan')\n",
+ "Optional arguments:\n",
"\n",
- "For a complete list and explanation of all supported plugins, see: http://nipype.readthedocs.io/en/latest/users/plugins.html"
+ " qsub_args: any other command line args to be passed to condor_qsub."
]
}
],
"metadata": {
- "anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",
@@ -80,7 +352,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/basic_workflow.ipynb b/notebooks/basic_workflow.ipynb
index 314937b..8f9a518 100644
--- a/notebooks/basic_workflow.ipynb
+++ b/notebooks/basic_workflow.ipynb
@@ -39,7 +39,7 @@
"Do not have inputs/outputs, but expose them from the interfaces wrapped inside\n",
"\n",
"\n",
- "Do not cache results (unless you use [interface caching](http://nipype.readthedocs.io/en/latest/users/caching_tutorial.html))\n",
+ "Do not cache results (unless you use [interface caching](advanced_interfaces_caching.ipynb))\n",
"Cache results\n",
"\n",
"\n",
@@ -772,7 +772,7 @@
"outputs": [],
"source": [
"# importing Node and Workflow\n",
- "from nipype.pipeline.engine import Workflow, Node\n",
+ "from nipype import Workflow, Node\n",
"# importing all interfaces\n",
"from nipype.interfaces.fsl import ExtractROI, MCFLIRT, SliceTimer"
]
diff --git a/notebooks/example_1stlevel.ipynb b/notebooks/example_1stlevel.ipynb
index 3857003..c111068 100644
--- a/notebooks/example_1stlevel.ipynb
+++ b/notebooks/example_1stlevel.ipynb
@@ -40,7 +40,7 @@
"from nipype.algorithms.modelgen import SpecifySPMModel\n",
"from nipype.interfaces.utility import Function, IdentityInterface\n",
"from nipype.interfaces.io import SelectFiles, DataSink\n",
- "from nipype.pipeline.engine import Workflow, Node"
+ "from nipype import Workflow, Node"
]
},
{
diff --git a/notebooks/example_2ndlevel.ipynb b/notebooks/example_2ndlevel.ipynb
index 5621b3f..6e43fb8 100644
--- a/notebooks/example_2ndlevel.ipynb
+++ b/notebooks/example_2ndlevel.ipynb
@@ -38,7 +38,7 @@
"from nipype.interfaces.spm import (OneSampleTTestDesign, EstimateModel,\n",
" EstimateContrast, Threshold)\n",
"from nipype.interfaces.utility import IdentityInterface\n",
- "from nipype.pipeline.engine import Workflow, Node\n",
+ "from nipype import Workflow, Node\n",
"from nipype.interfaces.fsl import Info\n",
"from nipype.algorithms.misc import Gunzip"
]
@@ -524,7 +524,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/example_normalize.ipynb b/notebooks/example_normalize.ipynb
index c869298..adabed7 100644
--- a/notebooks/example_normalize.ipynb
+++ b/notebooks/example_normalize.ipynb
@@ -88,10 +88,10 @@
"outputs": [],
"source": [
"from os.path import join as opj\n",
+ "from nipype import Workflow, Node, MapNode\n",
"from nipype.interfaces.ants import ApplyTransforms\n",
"from nipype.interfaces.utility import IdentityInterface\n",
"from nipype.interfaces.io import SelectFiles, DataSink\n",
- "from nipype.pipeline.engine import Workflow, Node, MapNode\n",
"from nipype.interfaces.fsl import Info"
]
},
@@ -320,7 +320,7 @@
"from nipype.interfaces.utility import IdentityInterface\n",
"from nipype.interfaces.io import SelectFiles, DataSink\n",
"from nipype.algorithms.misc import Gunzip\n",
- "from nipype.pipeline.engine import Workflow, Node"
+ "from nipype import Workflow, Node"
]
},
{
diff --git a/notebooks/example_preprocessing.ipynb b/notebooks/example_preprocessing.ipynb
index a38c3c2..98f1a12 100644
--- a/notebooks/example_preprocessing.ipynb
+++ b/notebooks/example_preprocessing.ipynb
@@ -88,7 +88,7 @@
"from nipype.interfaces.utility import IdentityInterface\n",
"from nipype.interfaces.io import SelectFiles, DataSink\n",
"from nipype.algorithms.rapidart import ArtifactDetect\n",
- "from nipype.pipeline.engine import Workflow, Node"
+ "from nipype import Workflow, Node"
]
},
{
diff --git a/notebooks/handson_analysis.ipynb b/notebooks/handson_analysis.ipynb
index f992798..9912504 100644
--- a/notebooks/handson_analysis.ipynb
+++ b/notebooks/handson_analysis.ipynb
@@ -1659,7 +1659,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/introduction_dataset.ipynb b/notebooks/introduction_dataset.ipynb
index ab1b00e..01ad189 100644
--- a/notebooks/introduction_dataset.ipynb
+++ b/notebooks/introduction_dataset.ipynb
@@ -150,7 +150,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/introduction_docker.ipynb b/notebooks/introduction_docker.ipynb
index 91cca1a..9aa3c68 100644
--- a/notebooks/introduction_docker.ipynb
+++ b/notebooks/introduction_docker.ipynb
@@ -10,7 +10,7 @@
"\n",
"[Docker](https://www.docker.com) is an open-source project that automates the deployment of applications inside software containers. Those containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, system tools, software libraries, such as Python, FSL, AFNI, SPM, FreeSurfer, ANTs, etc. This guarantees that it will always run the same, regardless of the environment it is running in.\n",
"\n",
- "Important: **You don't need Docker to run Nipype on your system**. For Mac and Linux users, it probably is much simpler to install Nipype directly on your system. For more information on how to do this see the [Nipype website](http://nipype.readthedocs.io/en/latest/users/install.html). But for Windows user, or users that don't want to setup all the dependencies themselves, Docker is the way to go."
+ "Important: **You don't need Docker to run Nipype on your system**. For Mac and Linux users, it probably is much simpler to install Nipype directly on your system. For more information on how to do this see the [Nipype website](resources_installation.ipynb). But for Windows users, or users that don't want to set up all the dependencies themselves, Docker is the way to go."
]
},
{
@@ -215,7 +215,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/introduction_neurodocker.ipynb b/notebooks/introduction_neurodocker.ipynb
new file mode 100644
index 0000000..26866a4
--- /dev/null
+++ b/notebooks/introduction_neurodocker.ipynb
@@ -0,0 +1,149 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Neurodocker tutorial\n",
+ "\n",
+ "[Neurodocker](https://github.com/kaczmarj/neurodocker) is a brilliant tool to create your own neuroimaging Docker containers. It is a command-line program that enables users to generate [Docker](http://www.docker.io/) containers that include neuroimaging software. These containers can be\n",
+ "converted to [Singularity](http://singularity.lbl.gov/) containers for use in high-performance computing\n",
+ "centers.\n",
+ "\n",
+ "Requirements:\n",
+ "\n",
+ "* [Docker](http://www.docker.io/)\n",
+ "* Internet connection"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Usage\n",
+ "\n",
+ "To view the Neurodocker help message\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate --help\n",
+ "\n",
+ "1. Users must specify a base Docker image and the package manager. Any Docker\n",
+ " image on DockerHub can be used as your base image. Common base images\n",
+ " include ``debian:stretch``, ``ubuntu:16.04``, ``centos:7``, and the various\n",
+ " ``neurodebian`` images. If users would like to install software from the\n",
+ " NeuroDebian repositories, it is recommended to use a ``neurodebian`` base\n",
+ " image. The package manager is ``apt`` or ``yum``, depending on the base\n",
+ " image.\n",
+ "2. Next, users should configure the container to fit their needs. This includes\n",
+ " installing neuroimaging software, installing packages from the chosen package\n",
+ " manager, installing Python and Python packages, copying files from the local\n",
+ " machine into the container, and other operations. The list of supported\n",
+ " neuroimaging software packages is available in the ``neurodocker`` help\n",
+ " message.\n",
+ "3. The ``neurodocker`` command will generate a Dockerfile. This Dockerfile can\n",
+ " be used to build a Docker image with the ``docker build`` command."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Create a Dockerfile with FSL, Python 3.6, and Nipype\n",
+ "\n",
+ "This command prints a Dockerfile (the specification for a Docker image) to the\n",
+ "terminal.\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate \\\n",
+ " --base debian:stretch --pkg-manager apt \\\n",
+ " --fsl version=5.0.10 \\\n",
+ " --miniconda env_name=neuro \\\n",
+ " conda_install=\"python=3.6 traits\" \\\n",
+ " pip_install=\"nipype\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Build the Docker image\n",
+ "\n",
+ "The Dockerfile can be saved and used to build the Docker image\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate \\\n",
+ " --base debian:stretch --pkg-manager apt \\\n",
+ " --fsl version=5.0.10 \\\n",
+ " --miniconda env_name=neuro \\\n",
+ " conda_install=\"python=3.6 traits\" \\\n",
+ " pip_install=\"nipype\" > Dockerfile\n",
+ "\n",
+ " docker build --tag my_image .\n",
+ " # or\n",
+ " docker build --tag my_image - < Dockerfile"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Use NeuroDebian\n",
+ "\n",
+ "This example installs AFNI and ANTs from the NeuroDebian repositories. It also\n",
+ "installs ``git`` and ``vim``.\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate \\\n",
+ " --base neurodebian:stretch --pkg-manager apt \\\n",
+ " --install afni ants git vim\n",
+ "\n",
+ "**Note**: the ``--install`` option will install software using the package manager.\n",
+ "Because the NeuroDebian repositories are enabled in the chosen base image, AFNI\n",
+ "and ANTs may be installed using the package manager. ``git`` and ``vim`` are\n",
+ "available in the default repositories."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Other examples\n",
+ "\n",
+ "Create a container with ``dcm2niix``, Nipype, and jupyter notebook. Install\n",
+ "Miniconda as a non-root user, and activate the Miniconda environment upon\n",
+ "running the container.\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate \\\n",
+ " --base centos:7 --pkg-manager yum \\\n",
+ " --dcm2niix version=master \\\n",
+ " --user neuro \\\n",
+ " --miniconda env_name=neuro conda_install=\"jupyter traits nipype\" \\\n",
+ " > Dockerfile\n",
+ " docker build --tag my_nipype - < Dockerfile\n",
+ "\n",
+ "Copy local files into a container.\n",
+ "\n",
+ " docker run --rm kaczmarj/neurodocker:v0.3.2 generate \\\n",
+ " --base ubuntu:16.04 --pkg-manager apt \\\n",
+ " --copy relative/path/to/source.txt /absolute/path/to/destination.txt"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/introduction_nipype.ipynb b/notebooks/introduction_nipype.ipynb
index 0d77b85..4ed0fb8 100644
--- a/notebooks/introduction_nipype.ipynb
+++ b/notebooks/introduction_nipype.ipynb
@@ -275,7 +275,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/introduction_python.ipynb b/notebooks/introduction_python.ipynb
index 7f0e9fa..ba641ee 100644
--- a/notebooks/introduction_python.ipynb
+++ b/notebooks/introduction_python.ipynb
@@ -2513,7 +2513,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/introduction_quickstart.ipynb b/notebooks/introduction_quickstart.ipynb
index b950d4a..6d0efda 100644
--- a/notebooks/introduction_quickstart.ipynb
+++ b/notebooks/introduction_quickstart.ipynb
@@ -877,7 +877,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
},
"nbpresent": {
"slides": {
diff --git a/notebooks/introduction_showcase.ipynb b/notebooks/introduction_showcase.ipynb
index 59626f2..f226dbe 100644
--- a/notebooks/introduction_showcase.ipynb
+++ b/notebooks/introduction_showcase.ipynb
@@ -418,7 +418,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/resources_help.ipynb b/notebooks/resources_help.ipynb
index b1f3bb4..6255da4 100644
--- a/notebooks/resources_help.ipynb
+++ b/notebooks/resources_help.ipynb
@@ -54,7 +54,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/resources_installation.ipynb b/notebooks/resources_installation.ipynb
index 67e9503..bff7c71 100644
--- a/notebooks/resources_installation.ipynb
+++ b/notebooks/resources_installation.ipynb
@@ -4,49 +4,94 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Install Nipype\n",
+ "# Download and install\n",
"\n",
- "The best and most complete instruction on how to download and install Nipype can be found on the [official homepage](http://nipype.readthedocs.io/en/latest/users/install.html). Nonetheless, here's a short summary of some (but not all) approaches."
+ "This page covers the necessary steps to install Nipype.\n",
+ "\n",
+ "# 1. Install Nipype\n",
+ "\n",
+ "Getting Nipype to run on your system is rather straightforward, and there are multiple ways to do the installation:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 1. Install Nipype\n",
+ "## Using docker\n",
"\n",
- "Getting Nipype to run on your system is rather straight forward. And there are multiple ways to do the installation:\n",
+ "- You can follow the [Nipype tutorial](https://miykael.github.io/nipype_tutorial)\n",
"\n",
"\n",
- "### Using conda\n",
+ "- You can pull the `nipype/nipype` image from Docker hub:\n",
"\n",
- "If you have [conda](http://conda.pydata.org/docs/index.html), [miniconda](https://conda.io/miniconda.html) or [anaconda](https://www.continuum.io/why-anaconda) on your system, than installing Nipype is just the following command:\n",
+ " docker pull nipype/nipype\n",
+ "\n",
+ "- You may also build custom Docker containers with specific versions of software using [Neurodocker](https://github.com/kaczmarj/neurodocker) (see the [Neurodocker Tutorial](introduction_neurodocker.ipynb))."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using conda\n",
"\n",
- " conda config --add channels conda-forge\n",
- " conda install nipype\n",
+ "If you have [conda](http://conda.pydata.org/docs/index.html), [miniconda](https://conda.io/miniconda.html) or [anaconda](https://www.continuum.io/why-anaconda) on your system, then installing Nipype can be done with just the following command:\n",
"\n",
+ " conda install --channel conda-forge nipype\n",
"\n",
- "### Using ``pip`` or ``easy_install``\n",
+ "It is possible to list all of the versions of nipype available on your platform with:\n",
"\n",
- "Installing Nipype via ``pip`` or ``easy_install`` is as simple as you would imagine.\n",
+ " conda search nipype --channel conda-forge\n",
+ "\n",
+ "For more information, please see https://github.com/conda-forge/nipype-feedstock."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using PyPI\n",
+ "\n",
+ "The installation process is similar to other Python packages.\n",
+ "\n",
+ "If you already have a Python environment set up, you can do:\n",
"\n",
" pip install nipype\n",
- " \n",
- "or\n",
- " \n",
- " easy_install nipype\n",
"\n",
+ "If you want to install all the optional features of ``nipype``, use the following command:\n",
"\n",
- "### Using Debian or Ubuntu\n",
+ " pip install nipype[all]\n",
"\n",
- "Installing Nipype on a Debian or Ubuntu system can also be done via ``apt-get``. For this use the following command:\n",
+ "While `all` installs everything, one can also install select components as listed below:\n",
"\n",
- " apt-get install python-nipype\n",
+ "```python\n",
+ "'doc': ['Sphinx>=1.4', 'matplotlib', 'pydotplus', 'pydot>=1.2.3'],\n",
+ "'tests': ['pytest-cov', 'codecov'],\n",
+ "'nipy': ['nitime', 'nilearn', 'dipy', 'nipy', 'matplotlib'],\n",
+ "'profiler': ['psutil'],\n",
+ "'duecredit': ['duecredit'],\n",
+ "'xvfbwrapper': ['xvfbwrapper'],\n",
+ "```"
+ ]
+ },
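+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For example, to install only a subset of these optional dependencies (the chosen extras below are just an illustration):\n",
+ "\n",
+ "    pip install nipype[nipy,profiler]"
+ ]
+ },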
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Debian and Ubuntu\n",
"\n",
+ "Add the [NeuroDebian](http://neuro.debian.net) repository and install the ``python-nipype`` package using ``apt-get`` or your favorite package manager:\n",
"\n",
- "### Using Github\n",
+ " apt-get install python-nipype"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Using Github\n",
"\n",
- "To make sure that you really have the newest version of Nipype on your system, you can run the pip command with a flag that points to the github repo:\n",
+ "To make sure that you really have the newest version of Nipype on your system, you can run the `pip` command with a flag that points to the github repo:\n",
"\n",
" pip install git+https://github.com/nipy/nipype#egg=nipype"
]
@@ -55,37 +100,74 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 2. Install Dependencies\n",
+ "## Mac OS X\n",
"\n",
- "For more information about the installation in general and to get a list of recommended software, go to the main page, under: http://nipype.readthedocs.io/en/latest/users/install.html\n",
+ "The easiest way to get nipype running on Mac OS X is to install [Miniconda](https://conda.io/miniconda.html) and follow the instructions above. If you have a non-conda environment you can install nipype by typing:\n",
"\n",
- "For a more step by step installation guide for additional software dependencies like SPM, FSL, FreeSurfer and ANTs, go to the [Beginner's Guide](http://miykael.github.io/nipype-beginner-s-guide/installation.html).\n"
+ " pip install nipype\n",
+ "\n",
+ "Note that the above procedure may require availability of gcc on your system path to compile the traits package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "## 3. Test Nipype"
+ "## From source\n",
+ "\n",
+ "- The most recent release is found here: https://github.com/nipy/nipype/releases/latest\n",
+ "\n",
+ "\n",
+ "- The development version: [[zip](http://github.com/nipy/nipype/zipball/master), [tar.gz](http://github.com/nipy/nipype/tarball/master)]\n",
+ "\n",
+ "\n",
+ "- For previous versions: [prior downloads](http://github.com/nipy/nipype/tags)\n",
+ "\n",
+ "\n",
+ "- If you downloaded the source distribution named something\n",
+ "like ``nipype-x.y.tar.gz``, then unpack the tarball, change into the\n",
+ "``nipype-x.y`` directory and install nipype using:\n",
+ "\n",
+ " pip install .\n",
+ "\n",
+ "**Note:** Depending on permissions you may need to use ``sudo``."
]
},
{
- "cell_type": "code",
- "execution_count": null,
+ "cell_type": "markdown",
"metadata": {},
- "outputs": [],
"source": [
- "# Import the nipype module\n",
- "import nipype\n",
+ "## Installation for developers\n",
+ "\n",
+ "Developers should start [here](http://nipype.readthedocs.io/en/latest/devel/testing_nipype.html).\n",
"\n",
- "# Run the test\n",
- "nipype.test(doctests=False)"
+ "Developers can also use this docker container:\n",
+ "\n",
+ " docker pull nipype/nipype:master"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
+ "# 2. Interface Dependencies\n",
+ "\n",
+ "Nipype provides wrappers around many neuroimaging tools and contains some algorithms. These tools will need to be installed for the corresponding Nipype interfaces to run. You can create containers with different versions of these tools installed using [Neurodocker](https://github.com/kaczmarj/neurodocker) (see the [Neurodocker Tutorial](introduction_neurodocker.ipynb))."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 3. Testing the install\n",
+ "\n",
+ "The best way to test the install is checking nipype's version and then running the tests:\n",
+ "\n",
+ "```python\n",
+ "python -c \"import nipype; print(nipype.__version__)\"\n",
+ "python -c \"import nipype; nipype.test(doctests=False)\"\n",
+ "```\n",
+ "\n",
"The test will create a lot of output, but if all goes well you will see at the end something like this:\n",
"\n",
" ----------------------------------------------------------------------\n",
@@ -98,7 +180,6 @@
}
],
"metadata": {
- "anaconda-cloud": {},
"kernelspec": {
"display_name": "Python [default]",
"language": "python",
@@ -114,9 +195,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
- "nbformat_minor": 1
+ "nbformat_minor": 2
}
diff --git a/notebooks/resources_python_cheat_sheet.ipynb b/notebooks/resources_python_cheat_sheet.ipynb
index da3a4cf..8930fbd 100644
--- a/notebooks/resources_python_cheat_sheet.ipynb
+++ b/notebooks/resources_python_cheat_sheet.ipynb
@@ -679,7 +679,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/resources_resources.ipynb b/notebooks/resources_resources.ipynb
index 30ba16a..a6a91b1 100644
--- a/notebooks/resources_resources.ipynb
+++ b/notebooks/resources_resources.ipynb
@@ -61,7 +61,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.6.4"
+ "version": "3.6.5"
}
},
"nbformat": 4,
diff --git a/notebooks/scripts/ANTS_registration.py b/notebooks/scripts/ANTS_registration.py
index b9263ce..f7d8ae9 100644
--- a/notebooks/scripts/ANTS_registration.py
+++ b/notebooks/scripts/ANTS_registration.py
@@ -3,7 +3,7 @@
from nipype.interfaces.ants import Registration
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
-from nipype.pipeline.engine import Workflow, Node
+from nipype import Workflow, Node
from nipype.interfaces.fsl import Info
# Specify variables
diff --git a/notebooks/wip_nipype_cmd.ipynb b/notebooks/wip_nipype_cmd.ipynb
new file mode 100644
index 0000000..0e79536
--- /dev/null
+++ b/notebooks/wip_nipype_cmd.ipynb
@@ -0,0 +1,119 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Running Nipype Interfaces from the command line (nipype_cmd)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The primary use of [Nipype](http://nipy.org/nipype/) is to build automated non-interactive pipelines.\n",
+ "However, sometimes there is a need to run some interfaces quickly from the command line.\n",
+ "This is especially useful when running Interfaces wrapping code that does not have\n",
+ "command line equivalents (nipy or SPM). Being able to run Nipype interfaces opens new\n",
+ "possibilities such as inclusion of SPM processing steps in bash scripts.\n",
+ "\n",
+ "To run Nipype Interfaces you need to use the nipype_cmd tool that should already be installed.\n",
+ "The tool allows you to list Interfaces available in a certain package:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ " $nipype_cmd nipype.interfaces.nipy\n",
+ "\n",
+ " Available Interfaces:\n",
+ " SpaceTimeRealigner\n",
+ " Similarity\n",
+ " ComputeMask\n",
+ " FitGLM\n",
+ " EstimateContrast"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After selecting a particular Interface you can learn what inputs it requires:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ " $nipype_cmd nipype.interfaces.nipy ComputeMask --help\n",
+ "\n",
+ "\tusage:nipype_cmd nipype.interfaces.nipy ComputeMask [-h] [--M M] [--cc CC]\n",
+ "\t [--ignore_exception IGNORE_EXCEPTION]\n",
+ "\t [--m M]\n",
+ "\t [--reference_volume REFERENCE_VOLUME]\n",
+ "\t mean_volume\n",
+ "\n",
+ "\tRun ComputeMask\n",
+ "\n",
+ "\tpositional arguments:\n",
+ "\t mean_volume mean EPI image, used to compute the threshold for the\n",
+ "\t mask\n",
+ "\n",
+ "\toptional arguments:\n",
+ "\t -h, --help show this help message and exit\n",
+ "\t --M M upper fraction of the histogram to be discarded\n",
+ "\t --cc CC Keep only the largest connected component\n",
+ "\t --ignore_exception IGNORE_EXCEPTION\n",
+ "\t Print an error message instead of throwing an\n",
+ "\t exception in case the interface fails to run\n",
+ "\t --m M lower fraction of the histogram to be discarded\n",
+ "\t --reference_volume REFERENCE_VOLUME\n",
+ "\t reference volume used to compute the mask. If none is\n",
+ "\t give, the mean volume is used."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finally you can run the Interface:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "\t$nipype_cmd nipype.interfaces.nipy ComputeMask mean.nii.gz"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "All that from the command line, without having to start a Python interpreter manually."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/wip_resource_sched_profiler.ipynb b/notebooks/wip_resource_sched_profiler.ipynb
new file mode 100644
index 0000000..f852c72
--- /dev/null
+++ b/notebooks/wip_resource_sched_profiler.ipynb
@@ -0,0 +1,235 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Resource Scheduling and Profiling with Nipype\n",
+ "\n",
+ "The latest version of Nipype supports system resource scheduling and profiling. These features allow users to ensure high throughput of their data processing while also controlling the amount of computing resources a given workflow will use."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Specifying Resources in the Node Interface\n",
+ "\n",
+ "Each ``Node`` instance interface has two parameters that specify its expected thread and memory usage: ``num_threads`` and ``estimated_memory_gb``. If a particular node is expected to use 8 threads and 2 GB of memory:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import Node\n",
+ "from nipype.interfaces.fsl import Smooth\n",
+ "node = Node(Smooth(), name='smooth')\n",
+ "node.interface.num_threads = 8\n",
+ "node.interface.estimated_memory_gb = 2"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If the resource parameters are never set, they default to being 1 thread and 1 GB of RAM."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Resource Scheduler\n",
+ "\n",
+ "The ``MultiProc`` workflow plugin schedules node execution based on the resources used by the current running nodes and the total resources available to the workflow. The plugin utilizes the plugin arguments ``n_procs`` and ``memory_gb`` to set the maximum resources a workflow can utilize. To limit a workflow to using 8 cores and 10 GB of RAM:\n",
+ "\n",
+ "```python\n",
+ "args_dict = {'n_procs' : 8, 'memory_gb' : 10}\n",
+ "workflow.run(plugin='MultiProc', plugin_args=args_dict)\n",
+ "```\n",
+ "\n",
+ "If these values are not specifically set then the plugin will assume it can use all of the processors and memory on the system. For example, if the machine has 16 cores and 12 GB of RAM, the workflow will internally assume those values for ``n_procs`` and ``memory_gb``, respectively.\n",
+ "\n",
+ "The plugin will then queue eligible nodes for execution based on their expected usage via the ``num_threads`` and ``estimated_memory_gb`` interface parameters. If the plugin sees that only 3 of its 8 processors and 4 GB of its 10 GB of RAM are being used by running nodes, it will attempt to execute the next available node as long as its ``num_threads <= 5`` and ``estimated_memory_gb <= 6``. If this is not the case, it will continue to check every available node in the queue until it sees a node that meets these conditions, or it waits for an executing node to finish to earn back the necessary resources. The priority of the queue is highest for nodes with the most ``estimated_memory_gb`` followed by nodes with the most expected ``num_threads``."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Runtime Profiler and using the Callback Log\n",
+ "\n",
+ "It is not always easy to estimate the amount of resources a particular function or command uses. To help with this, Nipype provides some feedback about the system resources used by every node during workflow execution via the built-in runtime profiler. The runtime profiler is automatically enabled if the [psutil](https://pythonhosted.org/psutil/) Python package is installed and found on the system.\n",
+ "\n",
+ "If the package is not found, the workflow will run normally without the runtime profiler.\n",
+ "\n",
+ "The runtime profiler records the number of threads and the amount of memory (GB) used as ``runtime_threads`` and ``runtime_memory_gb`` in the Node's ``result.runtime`` attribute. Since the node object is pickled and written to disk in its working directory, these values are available for analysis after node or workflow execution by manually parsing the pickle file contents.\n",
+ "\n",
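+ "A minimal sketch of such parsing might look like this (the node name and working directory below are assumptions about your own setup):\n",
+ "\n",
+ "```python\n",
+ "from nipype.utils.filemanip import loadpkl\n",
+ "# load the pickled InterfaceResult written to the node's working directory\n",
+ "res = loadpkl('/output/workingdir/preproc/smooth/result_smooth.pklz')\n",
+ "print(res.runtime.runtime_memory_gb, res.runtime.runtime_threads)\n",
+ "```\n",
+ "\n",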
+ "Nipype also provides a logging mechanism for saving node runtime statistics to a JSON-style log file via the ``log_nodes_cb`` logger function. This is enabled by setting the ``status_callback`` parameter to point to this function in the ``plugin_args`` when using the ``MultiProc`` plugin."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.utils.profiler import log_nodes_cb\n",
+ "args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To set the filepath for the callback log, the ``'callback'`` logger must be configured."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set path to log file\n",
+ "import logging\n",
+ "callback_log_path = '/home/neuro/run_stats.log'\n",
+ "logger = logging.getLogger('callback')\n",
+ "logger.setLevel(logging.DEBUG)\n",
+ "handler = logging.FileHandler(callback_log_path)\n",
+ "logger.addHandler(handler)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finally, the workflow can be run. For this, let's first create a simple workflow:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.workflows.fmri.fsl import create_featreg_preproc"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import and initiate the workflow\n",
+ "from nipype.workflows.fmri.fsl import create_featreg_preproc\n",
+ "workflow = create_featreg_preproc()\n",
+ "\n",
+ "# Specify input values\n",
+ "workflow.inputs.inputspec.func = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'\n",
+ "workflow.inputs.inputspec.fwhm = 10\n",
+ "workflow.inputs.inputspec.highpass = 50"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [],
+ "source": [
+ "workflow.run(plugin='MultiProc', plugin_args=args_dict)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "node.result.runtime\n",
+ "[Bunch(cmdline='fslmaths /data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz /tmp/tmp9102ji29/featpreproc/img2float/mapflow/_img2float0/sub-01_ses-test_task-fingerfootlips_bold_dtype.nii.gz -odt float', command_path='/usr/lib/fsl/5.0/fslmaths', cwd='/tmp/tmp9102ji29/featpreproc/img2float/mapflow/_img2float0', dependencies=b'\\tlinux-vdso.so.1 (0x00007ffc53ffb000)\\n\\tlibnewimage.so => /usr/lib/fsl/5.0/libnewimage.so (0x00007f1064ef7000)\\n\\tlibmiscmaths.so => /usr/lib/fsl/5.0/libmiscmaths.so (0x00007f1064c6a000)\\n\\tlibprob.so => /usr/lib/fsl/5.0/libprob.so (0x00007f1064a62000)\\n\\tlibfslio.so => /usr/lib/fsl/5.0/libfslio.so (0x00007f1064855000)\\n\\tlibnewmat.so.10 => /usr/lib/libnewmat.so.10 (0x00007f10645ff000)\\n\\tlibutils.so => /usr/lib/fsl/5.0/libutils.so (0x00007f10643f2000)\\n\\tlibniftiio.so.2 => /usr/lib/libniftiio.so.2 (0x00007f10641d0000)\\n\\tlibznz.so.2 => /usr/lib/libznz.so.2 (0x00007f1063fcc000)\\n\\tlibz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f1063db2000)\\n\\tlibstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f1063a30000)\\n\\tlibm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f106372c000)\\n\\tlibgcc_s.so.1 => /opt/mcr/v92/sys/os/glnxa64/libgcc_s.so.1 (0x00007f1063516000)\\n\\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1063177000)\\n\\t/lib64/ld-linux-x86-64.so.2 (0x00007f1065513000)', duration=8.307612, endTime='2018-04-30T14:45:51.031657', environ={'CLICOLOR': 1, 'CONDA_DEFAULT_ENV': neuro, 'CONDA_DIR': /opt/conda, 'CONDA_PATH_BACKUP': /usr/lib/fsl/5.0:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin, 'CONDA_PREFIX': /opt/conda/envs/neuro, 'CONDA_PS1_BACKUP': , 'FORCE_SPMMCR': 1, 'FSLBROWSER': /etc/alternatives/x-www-browser, 'FSLDIR': /usr/share/fsl/5.0, 'FSLLOCKDIR': , 'FSLMACHINELIST': , 'FSLMULTIFILEQUIT': TRUE, 'FSLOUTPUTTYPE': NIFTI_GZ, 'FSLREMOTECALL': , 'FSLTCLSH': /usr/bin/tclsh, 'FSLWISH': /usr/bin/wish, 'GIT_PAGER': cat, 'HOME': /home/neuro, 'HOSTNAME': bb97daa6f4d9, 'JPY_PARENT_PID': 50, 'LANG': en_US.UTF-8, 'LC_ALL': C.UTF-8, 'LD_LIBRARY_PATH': /usr/lib/fsl/5.0:/usr/lib/x86_64-linux-gnu:/opt/mcr/v92/runtime/glnxa64:/opt/mcr/v92/bin/glnxa64:/opt/mcr/v92/sys/os/glnxa64, 'MATLABCMD': /opt/mcr/v92/toolbox/matlab, 'MPLBACKEND': module://ipykernel.pylab.backend_inline, 'ND_ENTRYPOINT': /neurodocker/startup.sh, 'PAGER': cat, 'PATH': /opt/conda/envs/neuro/bin:/usr/lib/fsl/5.0:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin, 'POSSUMDIR': /usr/share/fsl/5.0, 'PS1': (neuro) , 'PWD': /home/neuro/nipype_tutorial, 'SHLVL': 1, 'SPMMCRCMD': /opt/spm12/run_spm12.sh /opt/mcr/v92/ script, 'TERM': xterm-color, '_': /opt/conda/envs/neuro/bin/jupyter-notebook}, hostname='bb97daa6f4d9', merged='', platform='Linux-4.13.0-39-generic-x86_64-with-debian-9.4', prevcwd='/home/neuro/nipype_tutorial/notebooks', returncode=0, startTime='2018-04-30T14:45:42.724045', stderr='', stdout='', version='5.0.9')]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "After the workflow finishes executing, the log file at `/home/neuro/run_stats.log` can be parsed for the runtime statistics. Here is an example of what the contents would look like:\n",
+ "\n",
+ "```python\n",
+ "{\"name\":\"resample_node\",\"id\":\"resample_node\",\n",
+ " \"start\":\"2016-03-11 21:43:41.682258\",\n",
+ " \"estimated_memory_gb\":2,\"num_threads\":1}\n",
+ "{\"name\":\"resample_node\",\"id\":\"resample_node\",\n",
+ "\"finish\":\"2016-03-11 21:44:28.357519\",\n",
+ "\"estimated_memory_gb\":\"2\",\"num_threads\":\"1\",\n",
+ "\"runtime_threads\":\"3\",\"runtime_memory_gb\":\"1.118469238281\"}\n",
+ "```\n",
+ "\n",
+ "Here it can be seen that the number of threads was underestimated while the amount of memory needed was overestimated. The next time this workflow is run the user can change the node interface ``num_threads`` and ``estimated_memory_gb`` parameters to reflect this for a higher pipeline throughput. Note, sometimes the \"runtime_threads\" value is higher than expected, particularly for multi-threaded applications. Tools can implement multi-threading in different ways under-the-hood; the profiler merely traverses the process tree to return all running threads associated with that process, some of which may include active thread-monitoring daemons or transient processes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Visualizing Pipeline Resources\n",
+ "\n",
+ "Nipype provides the ability to visualize the workflow execution based on the runtimes and system resources each node takes. It does this using the log file generated from the callback logger after workflow execution - as shown above. The [pandas](http://pandas.pydata.org/) Python package is required to use this feature."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.utils.profiler import log_nodes_cb\n",
+ "args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}\n",
+ "workflow.run(plugin='MultiProc', plugin_args=args_dict)\n",
+ "\n",
+ "# ...workflow finishes and writes callback log to '/home/user/run_stats.log'\n",
+ "\n",
+ "from nipype.utils.draw_gantt_chart import generate_gantt_chart\n",
+ "generate_gantt_chart('/home/neuro/run_stats.log', cores=8)\n",
+ "# ...creates gantt chart in '/home/user/run_stats.log.html'"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The ``generate_gantt_chart`` function will create an html file that can be viewed in a browser. Below is an example of the gantt chart displayed in a web browser. Note that when the cursor is hovered over any particular node bubble or resource bubble, some additional information is shown in a pop-up.\n",
+ "\n",
+ ""
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/wip_saving_workflows.ipynb b/notebooks/wip_saving_workflows.ipynb
new file mode 100644
index 0000000..2b1b7cd
--- /dev/null
+++ b/notebooks/wip_saving_workflows.ipynb
@@ -0,0 +1,166 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Saving Workflows and Nodes to a file (experimental)\n",
+ "\n",
+ "On top of the standard way of saving (i.e. serializing) objects in Python\n",
+ "(see [pickle](http://docs.python.org/2/library/pickle.html)) Nipype\n",
+ "provides methods to turn Workflows and nodes into human readable code.\n",
+ "This is useful if you want to save a Workflow that you have generated\n",
+ "on the fly for future use.\n",
+ "\n",
+ "# Example 1\n",
+ "\n",
+ "Let's first create a workflow:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype.interfaces.fsl import BET, ImageMaths\n",
+ "from nipype import Workflow, Node, MapNode\n",
+ "from nipype.interfaces.utility import Function, IdentityInterface\n",
+ "\n",
+ "bet = Node(BET(), name='bet')\n",
+ "bet.iterables = ('frac', [0.3, 0.4])\n",
+ "\n",
+ "bet2 = MapNode(BET(), name='bet2', iterfield=['infile'])\n",
+ "bet2.iterables = ('frac', [0.4, 0.5])\n",
+ "\n",
+ "maths = Node(ImageMaths(), name='maths')\n",
+ "\n",
+ "def testfunc(in1):\n",
+ " \"\"\"dummy func\n",
+ " \"\"\"\n",
+ " out = in1 + 'foo' + \"out1\"\n",
+ " return out\n",
+ "\n",
+ "funcnode = Node(Function(input_names=['a'], output_names=['output'], function=testfunc),\n",
+ " name='testfunc')\n",
+ "funcnode.inputs.in1 = '-sub'\n",
+ "func = lambda x: x\n",
+ "\n",
+ "inode = Node(IdentityInterface(fields=['a']), name='inode')\n",
+ "\n",
+ "wf = Workflow('testsave')\n",
+ "wf.add_nodes([bet2])\n",
+ "wf.connect(bet, 'mask_file', maths, 'in_file')\n",
+ "wf.connect(bet2, ('mask_file', func), maths, 'in_file2')\n",
+ "wf.connect(inode, 'a', funcnode, 'in1')\n",
+ "wf.connect(funcnode, 'output', maths, 'op_string')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To generate and export the Python code of this Workflow, we can use the `export` method:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "wf.export('special_workflow.py')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This will create a file `special_workflow.py` with the following content:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from nipype import Workflow, Node, MapNode\n",
+ "from nipype.interfaces.utility import IdentityInterface\n",
+ "from nipype.interfaces.utility import Function\n",
+ "from nipype.utils.functions import getsource\n",
+ "from nipype.interfaces.fsl.preprocess import BET\n",
+ "from nipype.interfaces.fsl.utils import ImageMaths\n",
+ "# Functions\n",
+ "func = lambda x: x\n",
+ "# Workflow\n",
+ "testsave = Workflow(\"testsave\")\n",
+ "# Node: testsave.inode\n",
+ "inode = Node(IdentityInterface(fields=['a'], mandatory_inputs=True), name=\"inode\")\n",
+ "# Node: testsave.testfunc\n",
+ "testfunc = Node(Function(input_names=['a'], output_names=['output']), name=\"testfunc\")\n",
+ "testfunc.interface.ignore_exception = False\n",
+ "def testfunc_1(in1):\n",
+ " \"\"\"dummy func\n",
+ " \"\"\"\n",
+ " out = in1 + 'foo' + \"out1\"\n",
+ " return out\n",
+ "\n",
+ "testfunc.inputs.function_str = getsource(testfunc_1)\n",
+ "testfunc.inputs.in1 = '-sub'\n",
+ "testsave.connect(inode, \"a\", testfunc, \"in1\")\n",
+ "# Node: testsave.bet2\n",
+ "bet2 = MapNode(BET(), iterfield=['infile'], name=\"bet2\")\n",
+ "bet2.interface.ignore_exception = False\n",
+ "bet2.iterables = ('frac', [0.4, 0.5])\n",
+ "bet2.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}\n",
+ "bet2.inputs.output_type = 'NIFTI_GZ'\n",
+ "bet2.terminal_output = 'stream'\n",
+ "# Node: testsave.bet\n",
+ "bet = Node(BET(), name=\"bet\")\n",
+ "bet.interface.ignore_exception = False\n",
+ "bet.iterables = ('frac', [0.3, 0.4])\n",
+ "bet.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}\n",
+ "bet.inputs.output_type = 'NIFTI_GZ'\n",
+ "bet.terminal_output = 'stream'\n",
+ "# Node: testsave.maths\n",
+ "maths = Node(ImageMaths(), name=\"maths\")\n",
+ "maths.interface.ignore_exception = False\n",
+ "maths.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}\n",
+ "maths.inputs.output_type = 'NIFTI_GZ'\n",
+ "maths.terminal_output = 'stream'\n",
+ "testsave.connect(bet2, ('mask_file', func), maths, \"in_file2\")\n",
+ "testsave.connect(bet, \"mask_file\", maths, \"in_file\")\n",
+ "testsave.connect(testfunc, \"output\", maths, \"op_string\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The file is ready to use and includes all the necessary imports."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [default]",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/notebooks/z_advanced_caching.ipynb b/notebooks/z_advanced_caching.ipynb
deleted file mode 100644
index 57ce627..0000000
--- a/notebooks/z_advanced_caching.ipynb
+++ /dev/null
@@ -1,139 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "http://nipype.readthedocs.io/en/latest/users/caching_tutorial.html"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Nipype caching"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype.caching import Memory\n",
- "mem = Memory('.')"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Create `cacheable` objects"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype.interfaces.spm import Realign\n",
- "from nipype.interfaces.fsl import MCFLIRT\n",
- "\n",
- "spm_realign = mem.cache(Realign)\n",
- "fsl_realign = mem.cache(MCFLIRT)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Execute interfaces"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "spm_results = spm_realign(in_files='ds107.nii', register_to_mean=False)\n",
- "fsl_results = fsl_realign(in_file='ds107.nii', ref_vol=0, save_plots=True)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "subplot(211);plot(genfromtxt(fsl_results.outputs.par_file)[:, 3:])\n",
- "subplot(212);plot(genfromtxt(spm_results.outputs.realignment_parameters)[:,:3])"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "spm_results = spm_realign(in_files='ds107.nii', register_to_mean=False)\n",
- "fsl_results = fsl_realign(in_file='ds107.nii', ref_vol=0, save_plots=True)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### More caching"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from os.path import abspath as opap\n",
- "files = [opap('../ds107/sub001/BOLD/task001_run001/bold.nii.gz'),\n",
- " opap('../ds107/sub001/BOLD/task001_run002/bold.nii.gz')]\n",
- "converter = mem.cache(MRIConvert)\n",
- "newfiles = []\n",
- "for idx, fname in enumerate(files):\n",
- " newfiles.append(converter(in_file=fname,\n",
- " out_type='nii').outputs.out_file)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "os.chdir(tutorial_dir)"
- ]
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/z_advanced_commandline.ipynb b/notebooks/z_advanced_commandline.ipynb
deleted file mode 100644
index 05012ba..0000000
--- a/notebooks/z_advanced_commandline.ipynb
+++ /dev/null
@@ -1,47 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "http://nipype.readthedocs.io/en/latest/users/cli.html"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "http://nipype.readthedocs.io/en/latest/users/nipypecmd.html"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/z_advanced_databases.ipynb b/notebooks/z_advanced_databases.ipynb
deleted file mode 100644
index 4bdd3d2..0000000
--- a/notebooks/z_advanced_databases.ipynb
+++ /dev/null
@@ -1,95 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "https://github.com/nipy/nipype/blob/master/examples/fmri_ants_openfmri.py"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Step 9: Connecting to Databases"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from os.path import abspath as opap\n",
- "\n",
- "from nipype.interfaces.io import XNATSource\n",
- "from nipype.pipeline.engine import Node, Workflow\n",
- "from nipype.interfaces.fsl import BET\n",
- "\n",
- "subject_id = 'xnat_S00001'\n",
- "\n",
- "dg = Node(XNATSource(infields=['subject_id'],\n",
- " outfields=['struct'],\n",
- " config='/Users/satra/xnat_configs/nitrc_ir_config'),\n",
- " name='xnatsource')\n",
- "dg.inputs.query_template = ('/projects/fcon_1000/subjects/%s/experiments/xnat_E00001'\n",
- " '/scans/%s/resources/NIfTI/files')\n",
- "dg.inputs.query_template_args['struct'] = [['subject_id', 'anat_mprage_anonymized']]\n",
- "dg.inputs.subject_id = subject_id\n",
- "\n",
- "bet = Node(BET(), name='skull_stripper')\n",
- "\n",
- "wf = Workflow(name='testxnat')\n",
- "wf.base_dir = opap('xnattest')\n",
- "wf.connect(dg, 'struct', bet, 'in_file')"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from nipype.interfaces.io import XNATSink\n",
- "\n",
- "ds = Node(XNATSink(config='/Users/satra/xnat_configs/central_config'),\n",
- " name='xnatsink')\n",
- "ds.inputs.project_id = 'NPTEST'\n",
- "ds.inputs.subject_id = 'NPTEST_xnat_S00001'\n",
- "ds.inputs.experiment_id = 'test_xnat'\n",
- "ds.inputs.reconstruction_id = 'bet'\n",
- "ds.inputs.share = True\n",
- "wf.connect(bet, 'out_file', ds, 'brain')"
- ]
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/z_advanced_debug.ipynb b/notebooks/z_advanced_debug.ipynb
deleted file mode 100644
index 6787b4d..0000000
--- a/notebooks/z_advanced_debug.ipynb
+++ /dev/null
@@ -1,39 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "http://nipype.readthedocs.io/en/latest/users/debug.html"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/notebooks/z_advanced_export_workflow.ipynb b/notebooks/z_advanced_export_workflow.ipynb
deleted file mode 100644
index 5513a35..0000000
--- a/notebooks/z_advanced_export_workflow.ipynb
+++ /dev/null
@@ -1,39 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "http://nipype.readthedocs.io/en/latest/users/saving_workflows.html"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
diff --git a/notebooks/z_advanced_resources_and_profiling.ipynb b/notebooks/z_advanced_resources_and_profiling.ipynb
deleted file mode 100644
index b2d8a98..0000000
--- a/notebooks/z_advanced_resources_and_profiling.ipynb
+++ /dev/null
@@ -1,40 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Look into: http://nipype.readthedocs.io/en/latest/users/resource_sched_profiler.html"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/z_development_github.ipynb b/notebooks/z_development_github.ipynb
deleted file mode 100644
index 1a6d915..0000000
--- a/notebooks/z_development_github.ipynb
+++ /dev/null
@@ -1,35 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Github\n",
- "\n",
- "step by step guide on how to submit PR's etc."
- ]
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/notebooks/z_development_report_issue.ipynb b/notebooks/z_development_report_issue.ipynb
deleted file mode 100644
index b8b1e45..0000000
--- a/notebooks/z_development_report_issue.ipynb
+++ /dev/null
@@ -1,35 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Report an issue\n",
- "\n",
- "step by step guide how to open an issue on github..."
- ]
- }
- ],
- "metadata": {
- "anaconda-cloud": {},
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.2"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 1
-}
diff --git a/static/images/datasink_flow.png b/static/images/datasink_flow.png
new file mode 100644
index 0000000..78b0d87
Binary files /dev/null and b/static/images/datasink_flow.png differ
diff --git a/static/images/gantt_chart.png b/static/images/gantt_chart.png
new file mode 100644
index 0000000..e457aa8
Binary files /dev/null and b/static/images/gantt_chart.png differ
diff --git a/static/images/itersource_1.png b/static/images/itersource_1.png
new file mode 100644
index 0000000..d1ca34c
Binary files /dev/null and b/static/images/itersource_1.png differ
diff --git a/static/images/itersource_2.png b/static/images/itersource_2.png
new file mode 100644
index 0000000..cc29142
Binary files /dev/null and b/static/images/itersource_2.png differ
diff --git a/static/images/logoNipype.png b/static/images/logoNipype.png
deleted file mode 100644
index 91b6fbb..0000000
Binary files a/static/images/logoNipype.png and /dev/null differ
diff --git a/static/images/plot.sub-01_ses-test_task-fingerfootlips_bold.png b/static/images/plot.sub-01_ses-test_task-fingerfootlips_bold.png
deleted file mode 100644
index 7a3160a..0000000
Binary files a/static/images/plot.sub-01_ses-test_task-fingerfootlips_bold.png and /dev/null differ
diff --git a/static/images/sphinx_ext.svg b/static/images/sphinx_ext.svg
new file mode 100644
index 0000000..dfa79e0
--- /dev/null
+++ b/static/images/sphinx_ext.svg
@@ -0,0 +1,1554 @@
diff --git a/static/images/synchronize_1.png b/static/images/synchronize_1.png
new file mode 100644
index 0000000..67a4aa0
Binary files /dev/null and b/static/images/synchronize_1.png differ
diff --git a/static/images/synchronize_2.png b/static/images/synchronize_2.png
new file mode 100644
index 0000000..ba5331b
Binary files /dev/null and b/static/images/synchronize_2.png differ