diff --git a/doc/_static/nipy-logo-bg-138x120.png b/doc/_static/nipy-logo-bg-138x120.png
deleted file mode 100644
index 73c4ffc124..0000000000
Binary files a/doc/_static/nipy-logo-bg-138x120.png and /dev/null differ
diff --git a/doc/_static/reggie2.png b/doc/_static/reggie2.png
deleted file mode 100644
index 1febedb3db..0000000000
Binary files a/doc/_static/reggie2.png and /dev/null differ
diff --git a/doc/documentation.rst b/doc/documentation.rst
index a9b741f46d..4a16c4a1cc 100644
--- a/doc/documentation.rst
+++ b/doc/documentation.rst
@@ -18,32 +18,6 @@ Previous versions: `1.1.3 `_ `1.1.2 `_.
- .. admonition:: Guides
-
- .. hlist::
- :columns: 2
-
- * User
-
- .. toctree::
- :maxdepth: 2
-
- users/index
-
- .. toctree::
- :maxdepth: 1
-
- changes
-
- * Developer
-
- .. toctree::
- :maxdepth: 2
-
- api/index
- devel/index
-
-
.. admonition:: Interfaces, Workflows and Examples
.. hlist::
@@ -72,4 +46,23 @@ Previous versions: `1.1.3 `_ `1.1.2 `_ and
-`botocore `_ Python packages to
-interact with AWS. To configure the DataSink to write data to S3, the user must
-set the ``base_directory`` property to an S3-style filepath. For example:
-
-::
-
- import nipype.interfaces.io as nio
- ds = nio.DataSink()
- ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'
-
-With the "s3://" prefix in the path, the DataSink knows that the output
-directory to which it sends files is on S3, in the bucket "mybucket".
-"path/to/output/dir" is the relative directory path within the bucket
-"mybucket" to which output data will be uploaded (NOTE: if the relative path
-specified contains folders that don't exist in the bucket, the DataSink will
-create them). The DataSink treats the S3 base directory exactly as it would a
-local directory, maintaining support for containers, substitutions, subfolders,
-"." notation, etc. to route output data appropriately.
-
-There are four new attributes introduced with S3-compatibility: ``creds_path``,
-``encrypt_bucket_keys``, ``local_copy``, and ``bucket``.
-
-::
-
-    ds.inputs.creds_path = '/home/user/aws_creds/credentials.csv'
-    ds.inputs.encrypt_bucket_keys = True
-    ds.inputs.local_copy = '/home/user/workflow_outputs/local_backup'
-
-``creds_path`` is a file path where the user's AWS credentials file (typically
-a CSV) is stored. This credentials file should contain the AWS access key ID and
-secret access key, and should be formatted as one of the following (these formats
-are how Amazon provides the credentials file by default when first downloaded).
-
-Root-account user:
-
-::
-
- AWSAccessKeyID=ABCDEFGHIJKLMNOP
- AWSSecretKey=zyx123wvu456/ABC890+gHiJk
-
-IAM-user:
-
-::
-
- User Name,Access Key Id,Secret Access Key
- "username",ABCDEFGHIJKLMNOP,zyx123wvu456/ABC890+gHiJk
-
-The ``creds_path`` is necessary when writing files to a bucket that has
-restricted access (almost no buckets are publicly writable). If ``creds_path``
-is not specified, the DataSink will check the ``AWS_ACCESS_KEY_ID`` and
-``AWS_SECRET_ACCESS_KEY`` environment variables and use those values for bucket
-access.
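-
-For instance, a minimal sketch of relying on the environment variables rather
-than ``creds_path`` (the key values below are the placeholder examples from the
-credentials formats above, not real credentials)::
-
-    import os
-    import nipype.interfaces.io as nio
-
-    # illustrative placeholder keys -- substitute your own AWS credentials
-    os.environ['AWS_ACCESS_KEY_ID'] = 'ABCDEFGHIJKLMNOP'
-    os.environ['AWS_SECRET_ACCESS_KEY'] = 'zyx123wvu456/ABC890+gHiJk'
-
-    ds = nio.DataSink()
-    ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'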
-
-``encrypt_bucket_keys`` is a boolean flag that indicates whether to encrypt the
-output data on S3, using server-side AES-256 encryption. This is useful if the
-data being output is sensitive and one desires an extra layer of security on the
-data. By default, this is turned off.
-
-``local_copy`` is a string of the filepath where local copies of the output data
-are stored in addition to those sent to S3. This is useful if one wants to keep
-a backup version of the data stored on their local computer. By default, this is
-turned off.
-
-``bucket`` is a boto3 Bucket object that the user can use to overwrite the
-bucket specified in their ``base_directory``. This can be useful if one has to
-manually create a bucket instance using special credentials (or using a mock
-server like `fakes3 `_). It is typically used by developers unit-testing the
-DataSink class. Most users do not need to use this attribute for actual
-workflows. This is an optional argument.
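-
-A minimal sketch of overriding the bucket this way (assuming the ``boto3``
-package is installed and configured with valid credentials; ``'mybucket'``
-reuses the bucket name from the example above)::
-
-    import boto3
-    import nipype.interfaces.io as nio
-
-    # build the Bucket object manually, e.g. with special credentials or a
-    # mock S3 endpoint, and hand it to the DataSink
-    s3_bucket = boto3.resource('s3').Bucket('mybucket')
-
-    ds = nio.DataSink()
-    ds.inputs.base_directory = 's3://mybucket/path/to/output/dir'
-    ds.inputs.bucket = s3_bucket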
-
-Finally, the user needs only to specify the input attributes for any incoming
-data to the node, and the outputs will be written to their S3 bucket.
-
-::
-
- workflow.connect(inputnode, 'subject_id', ds, 'container')
- workflow.connect(realigner, 'realigned_files', ds, 'motion')
-
-So, for example, outputs for sub001’s realigned_file1.nii.gz will be in:
-s3://mybucket/path/to/output/dir/sub001/motion/realigned_file1.nii.gz
-
-
-Using S3DataGrabber
-======================
-Coming soon...
diff --git a/doc/users/caching_tutorial.rst b/doc/users/caching_tutorial.rst
deleted file mode 100644
index 4d648277bd..0000000000
--- a/doc/users/caching_tutorial.rst
+++ /dev/null
@@ -1,173 +0,0 @@
-.. _caching:
-
-===========================
-Interface caching
-===========================
-
-This section details the interface-caching mechanism, exposed in the
-:mod:`nipype.caching` module.
-
-.. currentmodule:: nipype.caching
-
-Interface caching: why and how
-===============================
-
-* :ref:`Pipelines ` (also called `workflows`) specify
-  processing by an execution graph. This is useful because it opens the
-  door to dependency checking and enables `i)` minimizing recomputations
-  and `ii)` having the execution engine transparently deal with
-  intermediate file manipulations.
-
-  However, they do not blend in well with arbitrary Python code, as they
-  must rely on their own execution engine.
-
-* :ref:`Interfaces ` give fine control of the
-  execution of each step with a thin wrapper on the underlying software.
-  As a result, they can easily be inserted in Python code.
-
-  However, they force the user to specify explicit input and output file
-  names and cannot do any caching.
-
-This is why nipype exposes an intermediate mechanism, `caching`, that
-provides transparent output file management and caching within imperative
-Python code rather than a workflow.
-
-A big picture view: using the :class:`Memory` object
-=======================================================
-
-nipype caching relies on the :class:`Memory` class: it creates an
-execution context that is bound to a disk cache::
-
- >>> from nipype.caching import Memory
- >>> mem = Memory(base_dir='.')
-
-Note that the caching directory is a subdirectory called `nipype_mem` of
-the given `base_dir`. This is done to avoid polluting the base directory.
-
-In the corresponding execution context, nipype interfaces can be turned
-into callables that can be used as functions using the
-:meth:`Memory.cache` method. For instance, if we want to run the fslmerge
-command on a set of files::
-
-    >>> from nipype.interfaces import fsl
-    >>> fsl_merge = mem.cache(fsl.Merge)
-
-Note that the :meth:`Memory.cache` method takes interface **classes**,
-and not instances.
-
-The resulting `fsl_merge` object can be applied as a function to
-parameters that will form the inputs of the fsl `Merge` interface. Those
-inputs are given as keyword arguments, bearing the same names as in the
-input specs of the interface. In IPython, you can also get the argument
-list by using the `fsl_merge?` syntax to inspect the docs::
-
- In [10]: fsl_merge?
- String Form:PipeFunc(nipype.interfaces.fsl.utils.Merge, base_dir=/home/varoquau/dev/nipype/nipype/caching/nipype_mem)
- Namespace: Interactive
- File: /home/varoquau/dev/nipype/nipype/caching/memory.py
- Definition: fsl_merge(self, **kwargs)
- Docstring:
- Use fslmerge to concatenate images
-
- Inputs
- ------
-
- Mandatory:
- dimension: dimension along which the file will be merged
- in_files: None
-
- Optional:
- args: Additional parameters to the command
- environ: Environment variables (default={})
- ignore_exception: Print an error message instead of throwing an exception in case the interface fails to run (default=False)
- merged_file: None
- output_type: FSL output type
-
- Outputs
- -------
- merged_file: None
- Class Docstring:
- ...
-
-Thus `fsl_merge` is applied to parameters as such::
-
- >>> results = fsl_merge(dimension='t', in_files=['a.nii.gz', 'b.nii.gz'])
- INFO:workflow:Executing node faa7888f5955c961e5c6aa70cbd5c807 in dir: /home/varoquau/dev/nipype/nipype/caching/nipype_mem/nipype-interfaces-fsl-utils-Merge/faa7888f5955c961e5c6aa70cbd5c807
- INFO:workflow:Running: fslmerge -t /home/varoquau/dev/nipype/nipype/caching/nipype_mem/nipype-interfaces-fsl-utils-Merge/faa7888f5955c961e5c6aa70cbd5c807/a_merged.nii /home/varoquau/dev/nipype/nipype/caching/a.nii.gz /home/varoquau/dev/nipype/nipype/caching/b.nii.gz
-
-The results are standard nipype node results. In particular, they expose
-an `outputs` attribute that carries all the outputs of the process, as
-specified by the docs::
-
- >>> results.outputs.merged_file
- '/home/varoquau/dev/nipype/nipype/caching/nipype_mem/nipype-interfaces-fsl-utils-Merge/faa7888f5955c961e5c6aa70cbd5c807/a_merged.nii'
-
-Finally, and most importantly, if the node is applied to the same input
-parameters, it is not recomputed, and the results are reloaded from
-disk::
-
- >>> results = fsl_merge(dimension='t', in_files=['a.nii.gz', 'b.nii.gz'])
- INFO:workflow:Executing node faa7888f5955c961e5c6aa70cbd5c807 in dir: /home/varoquau/dev/nipype/nipype/caching/nipype_mem/nipype-interfaces-fsl-utils-Merge/faa7888f5955c961e5c6aa70cbd5c807
- INFO:workflow:Collecting precomputed outputs
-
-Once the :class:`Memory` is set up and you are applying it to data, an
-important thing to keep in mind is that you are using up disk cache. It
-might be useful to clean it using the methods that :class:`Memory`
-provides for this: :meth:`Memory.clear_previous_runs`,
-:meth:`Memory.clear_runs_since`.
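-
-A minimal sketch of flushing the cache (assuming, as a sketch, that
-:meth:`Memory.clear_runs_since` accepts ``day``/``month``/``year`` keyword
-arguments)::
-
-    # drop cached results that were not used by the current Memory session
-    mem.clear_previous_runs()
-    # drop cached results that have not been used since the given date
-    mem.clear_runs_since(year=2018, month=1, day=1)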
-
-.. topic:: Example
-
- A full-blown example showing how to stage multiple operations can be
- found in the :download:`caching_example.py <../../examples/howto_caching_example.py>` file.
-
-Usage patterns: working efficiently with caching
-===================================================
-
-The goal of the `caching` module is to enable writing plain Python code
-rather than workflows. Use it: instead of data grabber nodes, use for
-instance the `glob` module. To vary parameters, use `for` loops. To make
-reusable code, write Python functions.
-
-One good rule of thumb is to avoid using explicit filenames apart from
-the outermost inputs and outputs of your processing. The reason is that
-the caching mechanism of :mod:`nipype.caching` takes care of generating
-unique hashes, ensuring that, when you vary parameters, files are not
-overwritten by the output of different computations.
-
-.. topic:: Debugging
-
- If you need to inspect the running environment of the nodes, it may
- be useful to know where they were executed. With `nipype.caching`,
- you do not control this location as it is encoded by hashes.
-
-   To find out where an operation has been persisted, simply look in
-   its output variable::
-
- out.runtime.cwd
-
-Finally, the more you explore different parameters, the more you risk
-creating cached results that will never be reused. Keep in mind that it
-may be useful to flush the cache using :meth:`Memory.clear_previous_runs`
-or :meth:`Memory.clear_runs_since`.
-
-API reference
-===============
-
-The main class of the :mod:`nipype.caching` module is the :class:`Memory`
-class:
-
-.. autoclass:: Memory
- :members: __init__, cache, clear_previous_runs, clear_runs_since
-
-____
-
-Also used are :class:`PipeFunc` objects, the callables returned by the
-:meth:`Memory.cache` decorator:
-
-.. currentmodule:: nipype.caching.memory
-
-.. autoclass:: PipeFunc
- :members: __init__
-
diff --git a/doc/users/cli.rst b/doc/users/cli.rst
deleted file mode 100644
index 04dddd3fee..0000000000
--- a/doc/users/cli.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. _cli:
-
-=============================
-Nipype Command Line Interface
-=============================
-
-The Nipype Command Line Interface allows a variety of operations::
-
- $ nipypecli
- Usage: nipypecli [OPTIONS] COMMAND [ARGS]...
-
- Options:
- -h, --help Show this message and exit.
-
- Commands:
- convert Export nipype interfaces to other formats.
- crash Display Nipype crash files.
- run Run a Nipype Interface.
- search Search for tracebacks content.
- show Print the content of Nipype node .pklz file.
-
-These have replaced previous nipype command line tools such as
-`nipype_display_crash`, `nipype_crash_search`, `nipype2boutiques`,
-`nipype_cmd` and `nipype_display_pklz`.
diff --git a/doc/users/config_file.rst b/doc/users/config_file.rst
deleted file mode 100644
index 8d296556cb..0000000000
--- a/doc/users/config_file.rst
+++ /dev/null
@@ -1,259 +0,0 @@
-.. _config_file:
-
-=======================
- Configuration File
-=======================
-
-Some of the system-wide options of Nipype can be configured using a
-configuration file. Nipype looks for the file in the local folder under the name
-``nipype.cfg`` and in ``~/.nipype/nipype.cfg`` (in this order). If an option
-is not specified, a default value is assumed. The file is divided into the
-following sections:
-
-Logging
-~~~~~~~
-
-*workflow_level*
- How detailed the logs regarding workflow should be (possible values:
- ``INFO`` and ``DEBUG``; default value: ``INFO``)
-*utils_level*
- How detailed the logs regarding nipype utils, like file operations
- (for example overwriting warning) or the resource profiler, should be
- (possible values: ``INFO`` and ``DEBUG``; default value:
- ``INFO``)
-*interface_level*
- How detailed the logs regarding interface execution should be (possible
- values: ``INFO`` and ``DEBUG``; default value: ``INFO``)
-*filemanip_level* (deprecated as of 1.0)
- How detailed the logs regarding file operations (for example overwriting
- warning) should be (possible values: ``INFO`` and ``DEBUG``)
-*log_to_file*
- Indicates whether logging should also send the output to a file (possible
- values: ``true`` and ``false``; default value: ``false``)
-*log_directory*
- Where to store logs. (string, default value: home directory)
-*log_size*
- Size of a single log file. (integer, default value: 254000)
-*log_rotate*
-  How many rotations the log file should make. (integer, default value: 4)
-
-Execution
-~~~~~~~~~
-
-*plugin*
- This defines which execution plugin to use. (possible values: ``Linear``,
- ``MultiProc``, ``SGE``, ``IPython``; default value: ``Linear``)
-
-*stop_on_first_crash*
- Should the workflow stop upon first node crashing or try to execute as many
- nodes as possible? (possible values: ``true`` and ``false``; default value:
- ``false``)
-
-*stop_on_first_rerun*
-  Should the workflow stop upon the first node trying to recompute (by that we
-  mean rerunning a node that has been run before - this can happen due to
-  changed inputs and/or hash_method since the last run)? (possible values:
-  ``true`` and ``false``; default value: ``false``)
-
-*hash_method*
- Should the input files be checked for changes using their content (slow, but
- 100% accurate) or just their size and modification date (fast, but
- potentially prone to errors)? (possible values: ``content`` and
- ``timestamp``; default value: ``timestamp``)
-
-*keep_inputs*
-  Ensures that all inputs that are created in the node's working directory are
-  kept after node execution (possible values: ``true`` and ``false``; default
-  value: ``false``)
-
-*single_thread_matlab*
- Should all of the Matlab interfaces (including SPM) use only one thread?
- This is useful if you are parallelizing your workflow using MultiProc or
- IPython on a single multicore machine. (possible values: ``true`` and
- ``false``; default value: ``true``)
-
-*display_variable*
- Override the ``$DISPLAY`` environment variable for interfaces that require
- an X server. This option is useful if there is a running X server, but
- ``$DISPLAY`` was not defined in nipype's environment. For example, if an X
- server is listening on the default port of 6000, set ``display_variable = :0``
- to enable nipype interfaces to use it. It may also point to displays provided
- by VNC, `xnest `_
- or `Xvfb `_.
- If neither ``display_variable`` nor the ``$DISPLAY`` environment variable are
- set, nipype will try to configure a new virtual server using Xvfb.
- (possible values: any X server address; default value: not set)
-
-*remove_unnecessary_outputs*
-  This will remove any interface outputs not needed by the workflow. If the
-  required outputs from a node change, rerunning the workflow will rerun the
- node. Outputs of leaf nodes (nodes whose outputs are not connected to any
- other nodes) will never be deleted independent of this parameter. (possible
- values: ``true`` and ``false``; default value: ``true``)
-
-*try_hard_link_datasink*
-  When the DataSink is used to produce an organized output file outside
-  of nipype's internal cache structure, a file system hard link will be
-  attempted first. A hard link allows multiple file paths to point to the
-  same physical storage location on disk if the conditions allow. By
-  referring to the same physical file on disk (instead of copying files
-  byte-by-byte) we can avoid unnecessary data duplication. If hard links
-  are not supported for the source or destination paths specified, then
-  a standard byte-by-byte copy is used. (possible values: ``true`` and
-  ``false``; default value: ``true``)
-
-*use_relative_paths*
- Should the paths stored in results (and used to look for inputs)
- be relative or absolute. Relative paths allow moving the whole
- working directory around but may cause problems with
- symlinks. (possible values: ``true`` and ``false``; default
- value: ``false``)
-
-*local_hash_check*
- Perform the hash check on the job submission machine. This option minimizes
- the number of jobs submitted to a cluster engine or a multiprocessing pool
- to only those that need to be rerun. (possible values: ``true`` and
- ``false``; default value: ``true``)
-
-*job_finished_timeout*
-  When batch jobs are submitted through SGE/PBS/Condor, they could be killed
-  externally. Nipype checks to see if a results file exists to determine if
-  the node has completed. This timeout determines for how long this check is
-  done after a job finish is detected. (float in seconds; default value: 5)
-
-*remove_node_directories (EXPERIMENTAL)*
-  Removes directories whose outputs have already been used up. Doesn't work
-  with IdentityInterface or any node that passes data through (without
-  copying). (possible values: ``true`` and ``false``; default value:
-  ``false``)
-
-*stop_on_unknown_version*
-  If this is set to True, an underlying interface will raise an error when no
-  version information is available. Please notify developers or submit a
-  patch.
-
-*parameterize_dirs*
- If this is set to True, the node's output directory will contain full
- parameterization of any iterable, otherwise parameterizations over 32
- characters will be replaced by their hash. (possible values: ``true`` and
- ``false``; default value: ``true``)
-
-*poll_sleep_duration*
- This controls how long the job submission loop will sleep between submitting
- all pending jobs and checking for job completion. To be nice to cluster
- schedulers the default is set to 2 seconds.
-
-*xvfb_max_wait*
- Maximum time (in seconds) to wait for Xvfb to start, if the _redirect_x
- parameter of an Interface is True.
-
-*crashfile_format*
- This option controls the file type of any crashfile generated. Pklz
- crashfiles allow interactive debugging and rerunning of nodes, while text
- crashfiles allow portability across machines and shorter load time.
- (possible values: ``pklz`` and ``txt``; default value: ``pklz``)
-
-
-Resource Monitor
-~~~~~~~~~~~~~~~~
-
-*enabled*
-  Enables monitoring of resource occupation (possible values: ``true`` and
-  ``false``; default value: ``false``). All the following options will be
-  ignored if the resource monitor is not enabled.
-
-*sample_frequency*
- Sampling period (in seconds) between measurements of resources (memory, cpus)
- being used by an interface (default value: ``1``)
-
-*summary_file*
- Indicates where the summary file collecting all profiling information from the
- resource monitor should be stored after execution of a workflow.
- The ``summary_file`` does not apply to interfaces run independently.
- (unset by default, in which case the summary file will be written out to
- ``/resource_monitor.json`` of the top-level workflow).
-
-*summary_append*
- Append to an existing summary file (only applies to workflows).
- (default value: ``true``, possible values: ``true`` or ``false``).
-
-Example
-~~~~~~~
-
-::
-
- [logging]
- workflow_level = DEBUG
-
- [execution]
- stop_on_first_crash = true
- hash_method = timestamp
- display_variable = :1
-
- [monitoring]
- enabled = false
-
-
-The ``Workflow.config`` property has the form of a nested dictionary reflecting
-the structure of the .cfg file.
-
-::
-
- myworkflow = pe.Workflow()
- myworkflow.config['execution'] = {'stop_on_first_rerun': 'True',
- 'hash_method': 'timestamp'}
-
-You can also directly set global config options in your workflow script. An
-example is shown below. This needs to be called before you import the
-pipeline or the logger. Otherwise the logging level will not be reset.
-
-::
-
- from nipype import config
- cfg = dict(logging=dict(workflow_level = 'DEBUG'),
- execution={'stop_on_first_crash': False,
- 'hash_method': 'content'})
- config.update_config(cfg)
-
-Enabling logging to file
-~~~~~~~~~~~~~~~~~~~~~~~~
-
-By default, logging to file is disabled. One can enable and write the file to
-a location of choice as in the example below.
-
-::
-
- import os
- from nipype import config, logging
- config.update_config({'logging': {'log_directory': os.getcwd(),
- 'log_to_file': True}})
- logging.update_logging(config)
-
-The logging update line is necessary to change the behavior of logging, such as
-the output directory, logging level, etc.
-
-Debug configuration
-~~~~~~~~~~~~~~~~~~~
-
-To enable debug mode, one can insert the following lines::
-
- from nipype import config
- config.enable_debug_mode()
-
-In this mode the following variables are set::
-
- config.set('execution', 'stop_on_first_crash', 'true')
- config.set('execution', 'remove_unnecessary_outputs', 'false')
- config.set('execution', 'keep_inputs', 'true')
- config.set('logging', 'workflow_level', 'DEBUG')
- config.set('logging', 'interface_level', 'DEBUG')
- config.set('logging', 'utils_level', 'DEBUG')
-
-The primary loggers (``workflow``, ``interface`` and ``utils``) are also reset
-to level ``DEBUG``.
-You may wish to adjust these manually using::
-
-    from nipype import logging
-    # adjust a specific logger explicitly; the logger name and level below
-    # are illustrative
-    logging.getLogger('workflow').setLevel('INFO')
-
-.. include:: ../links_names.txt
diff --git a/doc/users/debug.rst b/doc/users/debug.rst
deleted file mode 100644
index fcaa79ea4e..0000000000
--- a/doc/users/debug.rst
+++ /dev/null
@@ -1,76 +0,0 @@
-.. _debug:
-
-==========================
-Debugging Nipype Workflows
-==========================
-
-Throughout Nipype_ we try to provide meaningful error messages. If you run into
-an error that does not have a meaningful error message please let us know so
-that we can improve error reporting.
-
-Here are some notes that may help debugging workflows or understanding
-performance issues.
-
-#. Always run your workflow first on a single iterable (e.g. subject) and
- gradually increase the execution distribution complexity (Linear->MultiProc->
- SGE).
-
-#. Use the debug config mode. This can be done by setting::
-
- from nipype import config
- config.enable_debug_mode()
-
- as the first import of your nipype script.
-
- .. note::
-
-      Turning on debug mode will rerun your workflows, and they will be rerun
-      again after debug mode is turned off.
-
-      Turning on debug mode will also override log levels specified elsewhere,
-      such as in the nipype configuration.
-      The ``workflow``, ``interface`` and ``utils`` loggers will all be set to
-      level ``DEBUG``.
-
-#. There are several configuration options that can help with debugging. See
- :ref:`config_file` for more details::
-
- keep_inputs
- remove_unnecessary_outputs
- stop_on_first_crash
- stop_on_first_rerun
-
-#. When running in distributed mode on cluster engines, it is possible for a
- node to fail without generating a crash file in the crashdump directory. In
- such cases, it will store a crash file in the `batch` directory.
-
-#. All Nipype crashfiles can be inspected with the `nipypecli crash`
- utility.
-
-#. The `nipypecli search` command allows you to search for regular expressions
- in the tracebacks of the Nipype crashfiles within a log folder.
-
-#. Nipype determines the hash of the input state of a node. If any input
- contains strings that represent files on the system path, the hash evaluation
- mechanism will determine the timestamp or content hash of each of those
- files. Thus any node with an input containing huge dictionaries (or lists) of
- file names can cause serious performance penalties.
-
-#. For HUGE data processing, ``'stop_on_first_crash': 'False'`` is needed to get
-   the bulk of processing done, and then ``'stop_on_first_crash': 'True'`` is
-   needed for debugging and finding failing cases. Setting
-   ``'stop_on_first_crash': 'False'`` is a reasonable option when you would
-   expect 90% of the data to execute properly.
-
-#. Sometimes nipype will hang as if nothing is going on and if you hit Ctrl+C
- you will get a `ConcurrentLogHandler` error. Simply remove the pypeline.lock
- file in your home directory and continue.
-
-#. On many clusters with shared NFS mounts, synchronization of files across
-   cluster nodes may not happen before the typical NFS cache timeouts. When
-   using PBS/LSF/SGE/Condor plugins in such cases, the workflow may crash
-   because it cannot retrieve the node result. Setting the
-   `job_finished_timeout` can help::
-
- workflow.config['execution']['job_finished_timeout'] = 65
-
-.. include:: ../links_names.txt
diff --git a/doc/users/function_interface.rst b/doc/users/function_interface.rst
deleted file mode 100644
index 7466469a42..0000000000
--- a/doc/users/function_interface.rst
+++ /dev/null
@@ -1,151 +0,0 @@
-.. _function_interface:
-
-======================
-The Function Interface
-======================
-
-Most Nipype interfaces provide access to external programs, such as FSL
-binaries or SPM routines. However, a special interface,
-:class:`nipype.interfaces.utility.Function`,
-allows you to wrap arbitrary Python code in the Interface framework and
-seamlessly integrate it into your workflows.
-
-A Simple Function Interface
----------------------------
-
-The most important component of a working Function interface is a Python
-function. There are several ways to associate a function with a Function
-interface, but the most common way will involve functions you code
-yourself as part of your Nipype scripts. Consider the following function::
-
- def add_two(val):
- return val + 2
-
-This simple function takes a value, adds 2 to it, and returns that new value.
-
-Just as Nipype interfaces have inputs and outputs, Python functions have
-inputs, in the form of parameters or arguments, and outputs, in the form
-of their return values. When you define a Function interface object with
-an existing function, as in the case of ``add_two()`` above, you must pass the
-constructor information about the function's inputs, its outputs, and the
-function itself. For example,
-
-::
-
- from nipype.interfaces.utility import Function
- add_two_interface = Function(input_names=["val"],
- output_names=["out_val"],
- function=add_two)
-
-Then you can set the inputs and run just as you would with any other
-interface::
-
-    add_two_interface.inputs.val = 2
-    res = add_two_interface.run()
-    print(res.outputs.out_val)
-
-Which would print ``4``.
-
-Note that, if you are working interactively, the Function interface is
-unable to use functions that are defined within your interpreter session.
-(Specifically, it can't use functions that live in the ``__main__`` namespace).
-
-Using External Packages
------------------------
-
-Chances are, you will want to write functions that do more complicated
-processing, particularly using the growing stack of Python packages
-geared towards neuroimaging, such as Nibabel_, Nipy_, or PyMVPA_.
-
-While this is completely possible (and, indeed, an intended use of the
-Function interface), it does come with one important constraint. The
-function code you write is executed in a standalone environment,
-which means that any external functions or classes you use have to
-be imported within the function itself::
-
- def get_n_trs(in_file):
- import nibabel
- f = nibabel.load(in_file)
- return f.shape[-1]
-
-Without explicitly importing Nibabel in the body of the function, this
-would fail.
-
-Alternatively, it is possible to provide a list of strings corresponding
-to the imports needed to execute a function as a parameter of the `Function`
-constructor. This allows for the use of external functions that do not
-import all external definitions inside the function body.
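-
-A hedged sketch of this alternative, assuming the ``Function`` constructor
-accepts an ``imports`` list of import statements that are prepended to the
-function body::
-
-    from nipype.interfaces.utility import Function
-
-    def get_n_trs(in_file):
-        # nibabel is available thanks to the ``imports`` argument below
-        f = nibabel.load(in_file)
-        return f.shape[-1]
-
-    get_n_trs_interface = Function(input_names=["in_file"],
-                                   output_names=["n_trs"],
-                                   function=get_n_trs,
-                                   imports=["import nibabel"])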
-
-Hello World - Function interface in a workflow
-----------------------------------------------
-
-Contributed by: Hänel Nikolaus Valentin
-
-The following snippet of code demonstrates the use of the function interface in
-the context of a workflow. Note the use of ``import os`` within the function as
-well as returning the absolute path from the Hello function. The `import` inside
-is necessary because functions are coded as strings and do not have to be on the
-PYTHONPATH. However any function called by this function has to be available on
-the PYTHONPATH. The `absolute path` is necessary because all workflow nodes are
-executed in their own directory and therefore there is no way of determining
-that the input file came from a different directory::
-
- import nipype.pipeline.engine as pe
- from nipype.interfaces.utility import Function
-
- def Hello():
- import os
- from nipype import logging
- iflogger = logging.getLogger('interface')
- message = "Hello "
- file_name = 'hello.txt'
- iflogger.info(message)
- with open(file_name, 'w') as fp:
- fp.write(message)
- return os.path.abspath(file_name)
-
- def World(in_file):
- from nipype import logging
- iflogger = logging.getLogger('interface')
- message = "World!"
- iflogger.info(message)
- with open(in_file, 'a') as fp:
- fp.write(message)
-
- hello = pe.Node(name='hello',
- interface=Function(input_names=[],
- output_names=['out_file'],
- function=Hello))
- world = pe.Node(name='world',
- interface=Function(input_names=['in_file'],
- output_names=[],
- function=World))
-
- pipeline = pe.Workflow(name='nipype_demo')
- pipeline.connect([(hello, world, [('out_file', 'in_file')])])
- pipeline.run()
- pipeline.write_graph(graph2use='flat')
-
-
-Advanced Use
-------------
-
-To use an existing function object (as we have been doing so far) with a Function
-interface, it must be passed to the constructor. However, it is also possible
-to dynamically set how a Function interface will process its inputs using the
-special ``function_str`` input.
-
-This input takes not a function object, but actually a single string that can
-be parsed to define a function. In the equivalent case to our example above,
-the string would be
-
-::
-
- add_two_str = "def add_two(val):\n return val + 2\n"
-
-Unlike when using a function object, this input can be set like any other,
-meaning that you could write a function that outputs different function
-strings depending on some run-time contingencies, and connect that output to
-the ``function_str`` input of a downstream Function interface.
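-
-A minimal sketch of setting this input directly (assuming ``function_str`` can
-simply be assigned like any other input, as described above)::
-
-    from nipype.interfaces.utility import Function
-
-    add_two_str = "def add_two(val):\n    return val + 2\n"
-    add_two_from_str = Function(input_names=["val"],
-                                output_names=["out_val"])
-    add_two_from_str.inputs.function_str = add_two_str
-    add_two_from_str.inputs.val = 2
-    res = add_two_from_str.run()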
-
-.. include:: ../links_names.txt
diff --git a/doc/users/grabbing_and_sinking.rst b/doc/users/grabbing_and_sinking.rst
deleted file mode 100644
index ae6193b550..0000000000
--- a/doc/users/grabbing_and_sinking.rst
+++ /dev/null
@@ -1,267 +0,0 @@
-.. _grabbing_and_sinking:
-
-==================================
-DataGrabber and DataSink explained
-==================================
-
-In this chapter we will try to explain the concepts behind DataGrabber and
-:ref:`DataSink `.
-
-Why do we need these interfaces?
-================================
-
-A typical workflow takes data as input and produces data as the result of one or
-more operations. One can set the data required by a workflow directly as
-illustrated below.
-
-::
-
-    import os
-
-    from fsl_tutorial2 import preproc
-    preproc.base_dir = os.path.abspath('.')
-    preproc.inputs.inputspec.func = os.path.abspath('data/s1/f3.nii')
-    preproc.inputs.inputspec.struct = os.path.abspath('data/s1/struct.nii')
-    preproc.run()
-
-Typical neuroimaging studies require running workflows on multiple subjects or
-different parameterizations of algorithms. One simple approach to that would be
-to simply iterate over subjects.
-
-::
-
-    import os
-
-    from fsl_tutorial2 import preproc
-    # `subjects` is a list of subject directory names, e.g. ['s1', 's2']
-    for name in subjects:
-        preproc.base_dir = os.path.abspath('.')
-        preproc.inputs.inputspec.func = os.path.abspath('data/%s/f3.nii' % name)
-        preproc.inputs.inputspec.struct = os.path.abspath('data/%s/struct.nii' % name)
-        preproc.run()
-
-However, in the context of complex workflows and given that users typically
-arrange their imaging and other data in a semantically hierarchical data store,
-an alternative mechanism for reading and writing the data generated by a workflow
-is often necessary. As the names suggest DataGrabber is used to get at data
-stored in a shared file system while :ref:`DataSink ` is used to store the data
-generated by a workflow into a hierarchical structure on disk.
-
-
-DataGrabber
-===========
-
-DataGrabber is an interface for collecting files from the hard drive. It is very
-flexible and supports almost any file organization of your data that you can
-imagine.
-
-The most trivial use case is getting a fixed file. By default, DataGrabber
-stores its outputs in a field called outfiles.
-
-::
-
- import nipype.interfaces.io as nio
- datasource1 = nio.DataGrabber()
- datasource1.inputs.base_directory = os.getcwd()
- datasource1.inputs.template = 'data/s1/f3.nii'
- datasource1.inputs.sort_filelist = True
- results = datasource1.run()
-
-Or you can get at all uncompressed NIfTI files starting with the letter 'f' in
-all directories starting with the letter 's'.
-
-::
-
-    datasource2 = nio.DataGrabber()
-    datasource2.inputs.base_directory = '/mass'
-    datasource2.inputs.template = 'data/s*/f*.nii'
-    datasource2.inputs.sort_filelist = True
-
-Two special inputs were used in these previous cases. The input `base_directory`
-indicates in which directory to search, while the input `template` indicates the
-string template to match. So in the previous case datagrabber is looking for
-path matches of the form `/mass/data/s*/f*`.
-
-.. note::
-
-   When used with wildcards (e.g., s* and f* above) DataGrabber does not return
-   data in sorted order. In order to force it to return data in sorted order,
-   one needs to set the input `sort_filelist = True`. However, when explicitly
-   specifying an order as we will see below, `sort_filelist` should be set to
-   `False`.
-
-More useful cases arise when the template can be filled by other inputs. In the
-example below, we define an input field for `datagrabber` called `run`. This is
-then used to set the template (see %d in the template).
-
-::
-
-    datasource3 = nio.DataGrabber(infields=['run'])
-    datasource3.inputs.base_directory = os.getcwd()
-    datasource3.inputs.template = 'data/s1/f%d.nii'
-    datasource3.inputs.sort_filelist = True
-    datasource3.inputs.run = [3, 7]
-
-This will return files `basedir/data/s1/f3.nii` and `basedir/data/s1/f7.nii`. We
-can take this a step further and pair subjects with runs.
-
-::
-
-    datasource4 = nio.DataGrabber(infields=['subject_id', 'run'])
-    datasource4.inputs.template = 'data/%s/f%d.nii'
-    datasource4.inputs.sort_filelist = True
-    datasource4.inputs.run = [3, 7]
-    datasource4.inputs.subject_id = ['s1', 's3']
-
-This will return files `basedir/data/s1/f3.nii` and `basedir/data/s3/f7.nii`.
-
-A more realistic use-case
--------------------------
-
-In a typical study one often wants to grab different files for a given subject
-and store them in semantically meaningful outputs. In the following example, we
-wish to retrieve all the functional runs and the structural image for the subject 's1'.
-
-::
-
-    datasource = nio.DataGrabber(infields=['subject_id'], outfields=['func', 'struct'])
-    datasource.inputs.base_directory = 'data'
-    datasource.inputs.template = '*'
-    datasource.inputs.sort_filelist = False
-    datasource.inputs.field_template = dict(func='%s/f%d.nii',
-                                            struct='%s/struct.nii')
-    datasource.inputs.template_args = dict(func=[['subject_id', [3,5,7,10]]],
-                                           struct=[['subject_id']])
-    datasource.inputs.subject_id = 's1'
-
-Two more fields are introduced: `field_template` and `template_args`. These
-fields are both dictionaries whose keys correspond to the `outfields`
-keyword. The `field_template` reflects the search path for each output field,
-while the `template_args` reflect the inputs that satisfy the template. The
-inputs can either be one of the named inputs specified by the `infields` keyword
-arg, or they can be raw strings or integers corresponding to the template. For
-the `func` output, the **%s** in the `field_template` is satisfied by
-`subject_id` and the **%d** is filled in by the list of numbers.
-
-.. note::
-
-   We have not set `sort_filelist` to `True` as we want the DataGrabber to
-   return the functional files in the order they were specified rather than in
-   alphabetically sorted order.
-
-DataSink
-========
-
-A workflow working directory is like a **cache**. It contains not only the
-outputs of various processing stages, but also various extraneous information
-such as execution reports and hashfiles determining the input state of
-processes. All of this is embedded in a hierarchical structure that reflects the
-iterables that have been used in the workflow. This makes navigating the working
-directory a not so pleasant experience. And typically the user is interested in
-preserving only a small percentage of these outputs. The :ref:`DataSink ` interface can
-be used to extract components from this `cache` and store them at a different
-location. For XNAT-based storage, see :ref:`XNATSink ` .
-
-.. note::
-
- Unlike other interfaces, a :ref:`DataSink `'s inputs are defined and created by using
- the workflow connect statement. Currently disconnecting an input from the
- :ref:`DataSink ` does not remove that connection port.
-
-Let's assume we have the following workflow.
-
-.. digraph:: simple_workflow
-
- "InputNode" -> "Realign" -> "DataSink";
- "InputNode" -> "DataSink";
-
-The following code segment defines the :ref:`DataSink ` node and sets the `base_directory`
-in which all outputs will be stored. The `container` input creates a
-subdirectory within the `base_directory`. If you are iterating a workflow over
-subjects, it may be useful to save it within a folder with the subject id.
-
-::
-
- datasink = pe.Node(nio.DataSink(), name='sinker')
- datasink.inputs.base_directory = '/path/to/output'
- workflow.connect(inputnode, 'subject_id', datasink, 'container')
-
-If we wanted to save the realigned files and the realignment parameters to the
-same place the most intuitive option would be:
-
-::
-
- workflow.connect(realigner, 'realigned_files', datasink, 'motion')
- workflow.connect(realigner, 'realignment_parameters', datasink, 'motion')
-
-However, this will not work as only one connection is allowed per input port. So
-we need to create a second port. We can store the files in a separate folder.
-
-::
-
- workflow.connect(realigner, 'realigned_files', datasink, 'motion')
- workflow.connect(realigner, 'realignment_parameters', datasink, 'motion.par')
-
-The period (.) indicates that a subfolder called par should be created. But if
-we wanted to store it in the same folder as the realigned files, we would use
-the `.@` syntax. The @ tells the :ref:`DataSink ` interface to not create the
-subfolder. This will allow us to create different named input ports for :ref:`DataSink `
-and allow the user to store the files in the same folder.
-
-::
-
- workflow.connect(realigner, 'realigned_files', datasink, 'motion')
- workflow.connect(realigner, 'realignment_parameters', datasink, 'motion.@par')
-
-The syntax for the input port of :ref:`DataSink ` takes the following form:
-
-::
-
- string[[.[@]]string[[.[@]]string] ...]
- where parts between paired [] are optional.
-
-MapNode
--------
-
-In order to use :ref:`DataSink ` inside a MapNode, its
-inputs have to be defined inside the constructor using the `infields` keyword
-arg, as sketched below.
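-
-A minimal sketch (assuming, as above, the ``pe`` and ``nio`` aliases, and that
-``DataSink`` accepts an ``infields`` list in its constructor)::
-
-    import nipype.pipeline.engine as pe
-    import nipype.interfaces.io as nio
-
-    # one DataSink instance will be created per element of 'motion'
-    sinker = pe.MapNode(nio.DataSink(infields=['motion']),
-                        iterfield=['motion'],
-                        name='sinker')
-    sinker.inputs.base_directory = '/path/to/output'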
-
-Parameterization
-----------------
-
-As discussed in :doc:`mapnode_and_iterables`, one can run a workflow iterating
-over various inputs using the iterables attribute of nodes. This means that a
-given workflow can have multiple outputs depending on how many iterables there
-are. Iterables create working directory subfolders such as
-`_iterable_name_value`. The `parameterization` input parameter controls whether
-the data stored using :ref:`DataSink ` is in a folder structure that contains this
-iterable information or not. It is generally recommended to set this to `True`
-when using multiple nested iterables.
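-
-For instance, an illustrative sketch that drops the ``_iterable_name_value``
-subfolders from the sinked paths (``datasink`` is the node defined above)::
-
-    datasink.inputs.parameterization = False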
-
-
-Substitutions
--------------
-
-The ``substitutions`` and ``regexp_substitutions`` inputs allow users to modify the
-output destination path and name of a file. Substitutions are a list of 2-tuples
-and are carried out in the order in which they were entered. Assuming that the
-output path of a file is:
-
-::
-
- /root/container/_variable_1/file_subject_realigned.nii
-
-we can use substitutions to clean up the output path.
-
-::
-
- datasink.inputs.substitutions = [('_variable', 'variable'),
- ('file_subject_', '')]
-
-This will rewrite the file as:
-
-::
-
- /root/container/variable_1/realigned.nii
-
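-The ``regexp_substitutions`` input follows the same list-of-2-tuples convention,
-but the first element of each tuple is interpreted as a regular expression; a
-minimal, purely illustrative sketch::
-
-    datasink.inputs.regexp_substitutions = [(r'_variable_(\d+)', r'variable-\1')]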
-
-.. note::
-
- In order to figure out which substitutions are needed it is often useful to
- run the workflow on a limited set of iterables and then determine the
- substitutions.
-
-.. include:: ../links_names.txt
diff --git a/doc/users/images/componentarchitecture.png b/doc/users/images/componentarchitecture.png
deleted file mode 100644
index 95c117c1a3..0000000000
Binary files a/doc/users/images/componentarchitecture.png and /dev/null differ
diff --git a/doc/users/images/gantt_chart.png b/doc/users/images/gantt_chart.png
deleted file mode 100644
index e457aa8799..0000000000
Binary files a/doc/users/images/gantt_chart.png and /dev/null differ
diff --git a/doc/users/images/proc2subj.png b/doc/users/images/proc2subj.png
deleted file mode 100644
index 3ca530422d..0000000000
Binary files a/doc/users/images/proc2subj.png and /dev/null differ
diff --git a/doc/users/images/proc2subj2fwhm.png b/doc/users/images/proc2subj2fwhm.png
deleted file mode 100644
index 5d71800cbb..0000000000
Binary files a/doc/users/images/proc2subj2fwhm.png and /dev/null differ
diff --git a/doc/users/images/smoothrealignconnected.png b/doc/users/images/smoothrealignconnected.png
deleted file mode 100644
index c2ca2d53c3..0000000000
Binary files a/doc/users/images/smoothrealignconnected.png and /dev/null differ
diff --git a/doc/users/images/smoothrealignunconnected.png b/doc/users/images/smoothrealignunconnected.png
deleted file mode 100644
index c032e50603..0000000000
Binary files a/doc/users/images/smoothrealignunconnected.png and /dev/null differ
diff --git a/doc/users/images/threecomponentpipe.png b/doc/users/images/threecomponentpipe.png
deleted file mode 100644
index 6d564c3146..0000000000
Binary files a/doc/users/images/threecomponentpipe.png and /dev/null differ
diff --git a/doc/users/index.rst b/doc/users/index.rst
deleted file mode 100644
index a66d8d7a14..0000000000
--- a/doc/users/index.rst
+++ /dev/null
@@ -1,49 +0,0 @@
-.. _users-guide-index:
-
-============
- User Guide
-============
-
-:Release: |version|
-:Date: |today|
-
-.. toctree::
- :maxdepth: 2
-
- install
- neurodocker
- caching_tutorial
-
-.. toctree::
- Nipype tutorials
- Porcupine graphical interface
-
-.. toctree::
- :maxdepth: 1
-
- plugins
- config_file
- debug
- cli
-
-.. toctree::
- :maxdepth: 1
-
- grabbing_and_sinking
- select_files
- function_interface
- mapnode_and_iterables
- joinnode_and_itersource
- model_specification
- saving_workflows
- spmmcr
- mipav
- nipypecmd
- aws
- resource_sched_profiler
- sphinx_ext
-
-
-
-
-
diff --git a/doc/users/install.rst b/doc/users/install.rst
index 6c90d7f294..3a710088e9 100644
--- a/doc/users/install.rst
+++ b/doc/users/install.rst
@@ -16,7 +16,8 @@ image from Docker hub::
docker pull nipype/nipype
You may also build custom docker containers with specific versions of software
-using Neurodocker_ (see the :doc:`neurodocker`).
+using Neurodocker_ (see the `Neurodocker tutorial
+`_).
Using conda
~~~~~~~~~~~
diff --git a/doc/users/joinnode_and_itersource.rst b/doc/users/joinnode_and_itersource.rst
deleted file mode 100644
index 235ef8c445..0000000000
--- a/doc/users/joinnode_and_itersource.rst
+++ /dev/null
@@ -1,175 +0,0 @@
-.. _joinnode_and_itersource:
-
-====================================
-JoinNode, synchronize and itersource
-====================================
-The previous :doc:`mapnode_and_iterables` chapter described how to
-fork and join nodes using MapNode and iterables. In this chapter, we
-introduce features which build on these concepts to add workflow
-flexibility.
-
-JoinNode, joinsource and joinfield
-==================================
-
-A :class:`nipype.pipeline.engine.JoinNode` generalizes MapNode to
-operate in conjunction with an upstream iterable node to reassemble
-downstream results, e.g.:
-
-.. digraph:: joinnode_ex
-
- "A" -> "B1" -> "C1" -> "D";
- "A" -> "B2" -> "C2" -> "D";
- "A" -> "B3" -> "C3" -> "D";
-
-The code to achieve this is as follows:
-
-::
-
- import nipype.pipeline.engine as pe
- a = pe.Node(interface=A(), name="a")
- b = pe.Node(interface=B(), name="b")
- b.iterables = ("in_file", images)
- c = pe.Node(interface=C(), name="c")
- d = pe.JoinNode(interface=D(), joinsource="b",
- joinfield="in_files", name="d")
-
- my_workflow = pe.Workflow(name="my_workflow")
-    my_workflow.connect([(a, b, [('subject', 'subject')]),
-                         (b, c, [('out_file', 'in_file')]),
-                         (c, d, [('out_file', 'in_files')]),
-                         ])
-
-This example assumes that interface "A" has one output *subject*,
-interface "B" has two inputs *subject* and *in_file* and one output
-*out_file*, interface "C" has one input *in_file* and one output
-*out_file*, and interface D has one list input *in_files*. The
-*images* variable is a list of three input image file names.
-
-As with *iterables* and the MapNode *iterfield*, the *joinfield*
-can be a list of fields. Thus, the declaration in the previous example
-is equivalent to the following:
-
-::
-
- d = pe.JoinNode(interface=D(), joinsource="b",
- joinfield=["in_files"], name="d")
-
-The *joinfield* defaults to all of the JoinNode input fields, so the
-declaration is also equivalent to the following:
-
-::
-
- d = pe.JoinNode(interface=D(), joinsource="b", name="d")
-
-In this example, the node "c" *out_file* outputs are collected into
-the JoinNode "d" *in_files* input list. The *in_files* order is the
-same as the upstream "b" node iterables order.
-
-The JoinNode input can be filtered for unique values by specifying
-the *unique* flag, e.g.:
-
-::
-
- d = pe.JoinNode(interface=D(), joinsource="b", unique=True, name="d")
-
-synchronize
-===========
-
-The :class:`nipype.pipeline.engine.Node` *iterables* parameter can
-be a single field or a list of fields. If it is a list, then execution
-is performed over all permutations of the list items. For example:
-
-::
-
- b.iterables = [("m", [1, 2]), ("n", [3, 4])]
-
-results in the execution graph:
-
-.. digraph:: multiple_iterables_ex
-
- "A" -> "B13" -> "C";
- "A" -> "B14" -> "C";
- "A" -> "B23" -> "C";
- "A" -> "B24" -> "C";
-
-where "B13" has inputs *m* = 1, *n* = 3, "B14" has inputs *m* = 1,
-*n* = 4, etc.
-
-The *synchronize* parameter synchronizes the iterables lists, e.g.:
-
-::
-
- b.iterables = [("m", [1, 2]), ("n", [3, 4])]
- b.synchronize = True
-
-results in the execution graph:
-
-.. digraph:: synchronize_ex
-
- "A" -> "B13" -> "C";
- "A" -> "B24" -> "C";
-
-where the iterable inputs are selected in lock-step by index, i.e.:
-
-(*m*, *n*) = (1, 3) and (2, 4)
-
-for "B13" and "B24", resp.
-
-itersource
-==========
-
-The *itersource* feature allows you to expand a downstream iterable
-based on a mapping of an upstream iterable. For example:
-
-::
-
- a = pe.Node(interface=A(), name="a")
- b = pe.Node(interface=B(), name="b")
- b.iterables = ("m", [1, 2])
- c = pe.Node(interface=C(), name="c")
- d = pe.Node(interface=D(), name="d")
- d.itersource = ("b", "m")
- d.iterables = [("n", {1:[3,4], 2:[5,6]})]
- my_workflow = pe.Workflow(name="my_workflow")
-    my_workflow.connect([(a, b, [('out_file', 'in_file')]),
-                         (b, c, [('out_file', 'in_file')]),
-                         (c, d, [('out_file', 'in_file')]),
-                         ])
-
-results in the execution graph:
-
-.. digraph:: itersource_ex
-
- "A" -> "B1" -> "C1" -> "D13";
- "C1" -> "D14";
- "A" -> "B2" -> "C2" -> "D25";
- "C2" -> "D26";
-
-In this example, all interfaces have input *in_file* and output
-*out_file*. In addition, interface "B" has input *m* and interface "D"
-has input *n*. A Python dictionary associates the "b" node input
-value with the downstream "d" node *n* iterable values.
-
-This example can be extended with a summary JoinNode:
-
-::
-
- e = pe.JoinNode(interface=E(), joinsource="d",
- joinfield="in_files", name="e")
- my_workflow.connect(d, 'out_file',
- e, 'in_files')
-
-resulting in the graph:
-
-.. digraph:: itersource_with_join_ex
-
- "A" -> "B1" -> "C1" -> "D13" -> "E";
- "C1" -> "D14" -> "E";
- "A" -> "B2" -> "C2" -> "D25" -> "E";
- "C2" -> "D26" -> "E";
-
-The combination of iterables, MapNode, JoinNode, synchronize and
-itersource enables the creation of arbitrarily complex workflow graphs.
-The astute workflow builder will recognize that this flexibility is
-both a blessing and a curse. These advanced features are handy additions
-to the Nipype toolkit when used judiciously.
diff --git a/doc/users/mapnode_and_iterables.rst b/doc/users/mapnode_and_iterables.rst
deleted file mode 100644
index 30c8efe79c..0000000000
--- a/doc/users/mapnode_and_iterables.rst
+++ /dev/null
@@ -1,152 +0,0 @@
-.. _mapnode_and_iterables:
-
-============================================
-MapNode, iterfield, and iterables explained
-============================================
-In this chapter we will try to explain the concepts behind MapNode, iterfield,
-and iterables.
-
-
-MapNode and iterfield
-======================
-
-Imagine that you have a list of items (let's say files) and you want to execute
-the same node on them (for example some smoothing or masking). Some nodes accept
-multiple files and do exactly the same thing on them, but some don't (they expect
-only one file). MapNode can solve this problem. Imagine you have the following
-workflow:
-
-.. digraph:: mapnode_before
-
- "A" -> "B" -> "C";
-
-Node "A" outputs a list of files, but node "B" accepts only one file. Additionally
-"C" expects a list of files. What you would like is to run "B" for every file in
-the output of "A" and collect the results as a list and feed it to "C". Something
-like this:
-
-.. digraph:: mapnode_after
-
- "A" -> "B1" -> "C";
- "A" -> "B2" -> "C";
- "A" -> "B3" -> "C";
- "A" -> "Bn" -> "C";
-
-The code to achieve this is quite simple
-
-::
-
- import nipype.pipeline.engine as pe
- a = pe.Node(interface=A(), name="a")
- b = pe.MapNode(interface=B(), name="b", iterfield=['in_file'])
- c = pe.Node(interface=C(), name="c")
-
- my_workflow = pe.Workflow(name="my_workflow")
- my_workflow.connect([(a,b,[('out_files','in_file')]),
- (b,c,[('out_file','in_files')])
- ])
-
-assuming that interfaces "A" and "C" have one input "in_files" and one output
-"out_files" (both lists of files). Interface "B" has single file input "in_file"
-and single file output "out_file".
-
-You probably noticed that you connect nodes as if "B" could accept and output
-lists of files. This is because it is wrapped using MapNode instead of Node. This
-special version of node will (under the bonnet) create an instance of "B" for
-every item in the input list. The compulsory argument "iterfield" defines which
-input it should iterate over (for example, in a single-file smoothing interface
-you would like to iterate over input files, not the smoothing width). At the end,
-outputs are collected into a list again. In other words, this is a map-and-reduce
-scenario.
-
-You might have also noticed that the iterfield argument expects a list of input
-names instead of just one name. This suggests that there can be more than one!
-Even though it is a bit confusing, this is true. You can specify more than one
-input to iterate over, but the lists that you provide (for all the inputs
-specified in iterfield) have to have the same length. MapNode will then pair the
-parameters up and run the first instance with the first set of parameters and
-the second with the second set of parameters. For example, this code:
-
-::
-
- b = pe.MapNode(interface=B(), name="b", iterfield=['in_file', 'n'])
- b.inputs.in_file = ['file', 'another_file', 'different_file']
- b.inputs.n = [1,2,3]
- b.run()
-
-is almost the same as running
-
-::
-
- b1 = pe.Node(interface=B(), name="b1")
- b1.inputs.in_file = 'file'
- b1.inputs.n = 1
-
- b2 = pe.Node(interface=B(), name="b2")
- b2.inputs.in_file = 'another_file'
- b2.inputs.n = 2
-
- b3 = pe.Node(interface=B(), name="b3")
- b3.inputs.in_file = 'different_file'
- b3.inputs.n = 3
-
-It is a rarely used feature, but you can sometimes find it useful.
-
-In more advanced applications it is useful to be able to iterate over items
-of nested lists (for example [[1,2],[3,4]]). MapNode allows you to do this
-with the "nested=True" parameter. Outputs will preserve the same nested
-structure as the inputs.
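-
-A minimal sketch, reusing the placeholder interface "B" from above and assuming
-the ``nested`` keyword is accepted by the MapNode constructor::
-
-    b = pe.MapNode(interface=B(), name="b",
-                   iterfield=['in_file'], nested=True)
-    b.inputs.in_file = [['file1', 'file2'], ['file3', 'file4']]
-    b.run()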
-
-Iterables
-=========
-
-Now imagine a different scenario. You have your workflow as before
-
-.. digraph:: iterables_before
-
- "A" -> "B" -> "C";
-
-and there are three possible values of one of the inputs node "B" you would like
-to investigate (for example width of 2,4, and 6 pixels of a smoothing node). You
-would like to see how different parameters in node "B" would influence everything
-that depends on its outputs (node "C" in our example). Therefore the new graph
-should look like this:
-
-.. digraph:: foo
-
- "A" -> "B1" -> "C1";
- "A" -> "B2" -> "C2";
- "A" -> "B3" -> "C3";
-
-Of course you can do it manually by creating copies of all the nodes for
-different parameter sets, but this can be very time consuming, especially when
-there is more than one node taking inputs from "B". Luckily nipype supports this
-scenario! It's called iterables and you use it this way:
-
-::
-
- import nipype.pipeline.engine as pe
- a = pe.Node(interface=A(), name="a")
- b = pe.Node(interface=B(), name="b")
- b.iterables = ("n", [1, 2, 3])
- c = pe.Node(interface=C(), name="c")
-
- my_workflow = pe.Workflow(name="my_workflow")
- my_workflow.connect([(a,b,[('out_file','in_file')]),
- (b,c,[('out_file','in_file')])
- ])
-
-This assumes that you want to try out values 1, 2, and 3 of input "n" of the
-node "B". It will also create three different versions of node "C" - each
-taking its inputs from an instance of node "B" with a different value of "n".
-
-Additionally, you can set multiple iterables for a node with a list of tuples
-in the above format.
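-
-For example (an illustrative sketch, reusing node "b" from above)::
-
-    b.iterables = [("n", [1, 2, 3]), ("m", [4, 5])]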
-
-Iterables are commonly used to execute the same workflow for many subjects.
-Usually one parameterizes the DataGrabber node with the subject ID. This is
-achieved by connecting an IdentityInterface in front of the DataGrabber. When
-you set the iterables of the IdentityInterface to the list of subject IDs, the
-same workflow will be executed for every subject, as sketched below. See
-:doc:`examples/fmri_spm` to see this pattern in action.
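-
-A minimal sketch of that pattern (the subject IDs and field names are
-illustrative; ``my_workflow`` and ``datasource`` stand for the workflow and
-DataGrabber node defined elsewhere)::
-
-    import nipype.pipeline.engine as pe
-    from nipype.interfaces.utility import IdentityInterface
-
-    infosource = pe.Node(IdentityInterface(fields=['subject_id']),
-                         name='infosource')
-    infosource.iterables = ('subject_id', ['s1', 's2', 's3'])
-
-    # the same workflow is then executed once per subject
-    my_workflow.connect(infosource, 'subject_id', datasource, 'subject_id')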
-
-.. include:: ../links_names.txt
diff --git a/doc/users/mipav.rst b/doc/users/mipav.rst
deleted file mode 100644
index 02df0a78e4..0000000000
--- a/doc/users/mipav.rst
+++ /dev/null
@@ -1,27 +0,0 @@
-.. _mipav:
-
-================================
-Using MIPAV, JIST, and CBS Tools
-================================
-
-If you are trying to use the MIPAV, JIST or CBS Tools interfaces you need
-to configure the CLASSPATH environment variable correctly. It needs to
-include extensions shipped with MIPAV, MIPAV itself and MIPAV plugins.
-
-For example, you need to ensure that the following commands are executed
-at the beginning of your script:
-
-.. testcode::
-
-
- # location of additional JAVA libraries to use
- JAVALIB=/Applications/mipav/jre/Contents/Home/lib/ext/
-
- # location of the MIPAV installation to use
- MIPAV=/Applications/mipav
- # location of the plugin installation to use
- # please replace 'ThisUser' by your user name
- PLUGINS=/Users/ThisUser/mipav/plugins
-
- export CLASSPATH=$JAVALIB/*:$MIPAV:$MIPAV/lib/*:$PLUGINS
diff --git a/doc/users/model_specification.rst b/doc/users/model_specification.rst
deleted file mode 100644
index 7b2216fc98..0000000000
--- a/doc/users/model_specification.rst
+++ /dev/null
@@ -1,128 +0,0 @@
-.. _model_spec:
-
-===================================================
- Model Specification for First Level fMRI Analysis
-===================================================
-
-Nipype provides a general purpose model specification mechanism with
-specialized subclasses for package specific extensions.
-
-
-General purpose model specification
-===================================
-
-The :class:`SpecifyModel` provides a generic mechanism for model
-specification. A mandatory input called subject_info provides paradigm
-specification for each run corresponding to a subject. This has to be in
-the form of a :class:`Bunch` or a list of Bunch objects (one for each
-run). Each Bunch object contains the following attributes.
-
-Required for most designs
--------------------------
-
-- conditions : list of names
-
-- onsets : lists of onsets corresponding to each condition
-
-- durations : lists of durations corresponding to each condition. Should be
- left to a single 0 if all events are being modelled as impulses.
-
-Optional
---------
-
-- regressor_names : list of names corresponding to each column. Should be None if automatically assigned.
-
-- regressors : list of lists. Values for each regressor; these must correspond to the number of volumes in the functional run
-
-- amplitudes : lists of amplitudes for each event. This will be ignored by
- SPM's Level1Design.
-
-The following two (tmod, pmod) will be ignored by any
-Level1Design class other than SPM:
-
-- tmod : lists of conditions that should be temporally modulated. Should
- default to None if not being used.
-
-- pmod : list of Bunch corresponding to conditions
-
-  - name : name of parametric modulator
-  - param : values of the modulator
-  - poly : degree of modulation
-
-
-An example Bunch definition::
-
- from nipype.interfaces.base import Bunch
- condnames = ['Tapping', 'Speaking', 'Yawning']
- event_onsets = [[0, 10, 50], [20, 60, 80], [30, 40, 70]]
- durations = [[0],[0],[0]]
-
- subject_info = Bunch(conditions=condnames,
- onsets = event_onsets,
- durations = durations)
-
-Alternatively, you can provide condition, onset, duration and amplitude
-information through event files. The event files have to be in 1-, 2- or
-3-column format, with the columns corresponding to Onsets, Durations and
-Amplitudes, and they have to be named following the pattern event_name.run,
-e.g. Words.run001.txt. The event_name part will be used to create the
-condition names. Words.run001.txt may look like::
-
- # Word Onsets Durations
- 0 10
- 20 10
- ...
-
-or with amplitudes::
-
- # Word Onsets Durations Amplitudes
- 0 10 1
- 20 10 1
- ...
-
-Together with this information, one needs to specify:
-
-- whether the durations and event onsets are specified in terms of scan volumes
- or secs.
-
-- the high-pass filter cutoff,
-
-- the repetition time per scan
-
-- functional data files corresponding to each run.
-
-Optionally, you can specify realignment parameters and outlier indices.
-Outlier files should contain a list of numbers, one per row, indicating
-which scans should not be included in the analysis. The numbers are
-0-based.
-
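-A minimal sketch tying these inputs together (file names are hypothetical;
-trait names are those of :class:`SpecifyModel`)::
-
-    from nipype.algorithms.modelgen import SpecifyModel
-
-    spec = SpecifyModel()
-    spec.inputs.subject_info = subject_info        # Bunch from the example above
-    spec.inputs.input_units = 'secs'               # onsets/durations in seconds
-    spec.inputs.functional_runs = ['run001.nii']   # one file per run
-    spec.inputs.time_repetition = 2.0              # TR in seconds
-    spec.inputs.high_pass_filter_cutoff = 128.
-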
-SPM specific attributes
-=======================
-
-In addition to the generic specification options, several SPM-specific
-options can be provided. In particular, the subject_info input can
-provide temporal and parametric modulators in the Bunch attributes tmod
-and pmod. The following example adds a linear parametric modulator for
-speaking rate for the events specified earlier::
-
- pmod = [None, Bunch(name=['Rate'], param=[[.300, .500, .600]],
- poly=[1]), None]
- subject_info = Bunch(conditions=condnames,
- onsets = event_onsets,
- durations = durations,
- pmod = pmod)
-
-:class:`SpecifySPMModel` also allows specifying additional components.
-If you have a study with multiple runs, you can choose to concatenate
-conditions from different runs by setting the input
-option **concatenate_runs** to True. You can also choose to set the
-output units for this class to be in terms of 'scans'.
-
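-A minimal sketch of these options (trait names are those of
-:class:`SpecifySPMModel`)::
-
-    from nipype.algorithms.modelgen import SpecifySPMModel
-
-    spec = SpecifySPMModel()
-    spec.inputs.concatenate_runs = True
-    spec.inputs.output_units = 'scans'
-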
-Sparse model specification
-==========================
-
-In addition to standard models, :class:`SpecifySparseModel` allows model
-generation for sparse and sparse-clustered acquisition experiments.
-Details of the model generation and utility are provided in `Ghosh et
-al. (2009) OHBM 2009. `_
-
-.. include:: ../links_names.txt
diff --git a/doc/users/neurodocker.rst b/doc/users/neurodocker.rst
deleted file mode 100644
index 025c2bead2..0000000000
--- a/doc/users/neurodocker.rst
+++ /dev/null
@@ -1,131 +0,0 @@
-.. _neurodocker_tutorial:
-
-====================
-Neurodocker tutorial
-====================
-
-This page covers the steps to create containers with Neurodocker_.
-
-Neurodocker_ is a command-line program that enables users to generate Docker_
-containers and Singularity_ images that include neuroimaging software.
-
-Requirements:
-
-* Docker_ or Singularity_
-* Internet connection
-
-
-Usage
------
-
-To view the Neurodocker help message
-::
-
- docker run --rm kaczmarj/neurodocker:0.4.0 generate [docker|singularity] --help
-
-Note: choose between ``docker`` and ``singularity`` in ``[docker|singularity]``.
-
-1. Users must specify a base Docker image and the package manager. Any Docker
- image on DockerHub can be used as your base image. Common base images
- include ``debian:stretch``, ``ubuntu:16.04``, ``centos:7``, and the various
- ``neurodebian`` images. If users would like to install software from the
- NeuroDebian repositories, it is recommended to use a ``neurodebian`` base
- image. The package manager is ``apt`` or ``yum``, depending on the base
- image.
-2. Next, users should configure the container to fit their needs. This includes
- installing neuroimaging software, installing packages from the chosen package
- manager, installing Python and Python packages, copying files from the local
- machine into the container, and other operations. The list of supported
- neuroimaging software packages is available in the ``neurodocker`` help
- message.
-3. The ``neurodocker`` command will generate a Dockerfile or Singularity recipe.
- The Dockerfile can be used with the ``docker build`` command to build a
- Docker image. The Singularity recipe can be used to build a Singularity
- container with the ``singularity build`` command.
-
-
-Create a Dockerfile or Singularity recipe with FSL, Python 3.6, and Nipype
---------------------------------------------------------------------------
-
-This command prints a Dockerfile (the specification for a Docker image) or a
-Singularity recipe (the specification for a Singularity container) to the
-terminal.
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate [docker|singularity] \
- --base debian:stretch --pkg-manager apt \
- --fsl version=5.0.10 \
- --miniconda create_env=neuro \
- conda_install="python=3.6 traits" \
- pip_install="nipype"
-
-
-Build the Docker image
-----------------------
-
-The Dockerfile can be saved and used to build the Docker image
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate docker \
- --base debian:stretch --pkg-manager apt \
- --fsl version=5.0.10 \
- --miniconda create_env=neuro \
- conda_install="python=3.6 traits" \
- pip_install="nipype" > Dockerfile
- $ docker build --tag my_image .
- $ # or
- $ docker build --tag my_image - < Dockerfile
-
-
-Build the Singularity container
--------------------------------
-
-The Singularity recipe can be saved and used to build the Singularity container
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate singularity \
- --base debian:stretch --pkg-manager apt \
- --fsl version=5.0.10 \
- --miniconda create_env=neuro \
- conda_install="python=3.6 traits" \
- pip_install="nipype" > Singularity
- $ singularity build my_nipype.simg Singularity
-
-
-Use NeuroDebian
----------------
-
-This example installs AFNI and ANTs from the NeuroDebian repositories. It also
-installs ``git`` and ``vim``.
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate [docker|singularity] \
- --base neurodebian:stretch --pkg-manager apt \
- --install afni ants git vim
-
-Note: the ``--install`` option will install software using the package manager.
-Because the NeuroDebian repositories are enabled in the chosen base image, AFNI
-and ANTs may be installed using the package manager. ``git`` and ``vim`` are
-available in the default repositories.
-
-
-Other examples
---------------
-
-Create a container with ``dcm2niix``, Nipype, and jupyter notebook. Install
-Miniconda as a non-root user, and activate the Miniconda environment upon
-running the container.
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate docker \
- --base centos:7 --pkg-manager yum \
- --dcm2niix version=master method=source \
- --user neuro \
- --miniconda create_env=neuro conda_install="jupyter traits nipype" \
- > Dockerfile
- $ docker build --tag my_nipype - < Dockerfile
-
-
-Copy local files into a container.
-::
-
- $ docker run --rm kaczmarj/neurodocker:0.4.0 generate [docker|singularity] \
- --base ubuntu:16.04 --pkg-manager apt \
- --copy relative/path/to/source.txt /absolute/path/to/destination.txt
-
-See the `Neurodocker examples page `_ for more.
-
-.. include:: ../links_names.txt
diff --git a/doc/users/nipypecmd.rst b/doc/users/nipypecmd.rst
deleted file mode 100644
index 3717306920..0000000000
--- a/doc/users/nipypecmd.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-.. _nipypecmd:
-
-============================================================
-Running Nipype Interfaces from the command line (nipype_cmd)
-============================================================
-
-The primary use of Nipype_ is to build automated non-interactive pipelines.
-However, sometimes there is a need to run some interfaces quickly from the command line.
-This is especially useful when running Interfaces wrapping code that does not have
-command line equivalents (nipy or SPM). Being able to run Nipype interfaces opens new
-possibilities such as inclusion of SPM processing steps in bash scripts.
-
-To run Nipype Interfaces you need to use the nipype_cmd tool that should already be installed.
-The tool allows you to list Interfaces available in a certain package:
-
-.. testcode::
-
-
- $nipype_cmd nipype.interfaces.nipy
-
- Available Interfaces:
- SpaceTimeRealigner
- Similarity
- ComputeMask
- FitGLM
- EstimateContrast
-
-After selecting a particular Interface you can learn what inputs it requires:
-
-.. testcode::
-
-
- $nipype_cmd nipype.interfaces.nipy ComputeMask --help
-
- usage:nipype_cmd nipype.interfaces.nipy ComputeMask [-h] [--M M] [--cc CC]
- [--ignore_exception IGNORE_EXCEPTION]
- [--m M]
- [--reference_volume REFERENCE_VOLUME]
- mean_volume
-
- Run ComputeMask
-
- positional arguments:
- mean_volume mean EPI image, used to compute the threshold for the
- mask
-
- optional arguments:
- -h, --help show this help message and exit
- --M M upper fraction of the histogram to be discarded
- --cc CC Keep only the largest connected component
- --ignore_exception IGNORE_EXCEPTION
- Print an error message instead of throwing an
- exception in case the interface fails to run
- --m M lower fraction of the histogram to be discarded
- --reference_volume REFERENCE_VOLUME
- reference volume used to compute the mask. If none is
- give, the mean volume is used.
-
-Finally, you can run the Interface:
-
-.. testcode::
-
- $nipype_cmd nipype.interfaces.nipy ComputeMask mean.nii.gz
-
-All that from the command line, without having to start a Python interpreter manually.
-
-.. include:: ../links_names.txt
diff --git a/doc/users/plugins.rst b/doc/users/plugins.rst
deleted file mode 100644
index 1484247b7e..0000000000
--- a/doc/users/plugins.rst
+++ /dev/null
@@ -1,361 +0,0 @@
-.. _plugins:
-
-====================
-Using Nipype Plugins
-====================
-
-The workflow engine supports a plugin architecture for workflow execution. The
-available plugins allow local and distributed execution of workflows and
-debugging. Each available plugin is described below.
-
-Current plugins are available for Linear, Multiprocessing, IPython_ distributed
-processing platforms and for direct processing on SGE_, PBS_, HTCondor_, LSF_, OAR_, and SLURM_. We
-anticipate future plugins for the Soma_ workflow.
-
-.. note::
-
- The current distributed processing plugins rely on the availability of a
- shared filesystem across computational nodes.
-
- A variety of config options can control how execution behaves in this
- distributed context. These are listed later on in this page.
-
-All plugins can be executed with::
-
- workflow.run(plugin=PLUGIN_NAME, plugin_args=ARGS_DICT)
-
-Optional arguments::
-
- status_callback : a function handle
- max_jobs : maximum number of concurrent jobs
- max_tries : number of times to try submitting a job
- retry_timeout : amount of time to wait between tries
-
-.. note::
-
- Except for the status_callback, the remaining arguments only apply to the
- distributed plugins: MultiProc/IPython(X)/SGE/PBS/HTCondor/HTCondorDAGMan/LSF
-
-For example:
-
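-A minimal sketch combining these options (``report`` is a hypothetical callback;
-the plugins call it with the node and a status string)::
-
-    def report(node, status):
-        print(node.name, status)
-
-    workflow.run(plugin='SGE',
-                 plugin_args={'status_callback': report,
-                              'max_jobs': 20,
-                              'max_tries': 3,
-                              'retry_timeout': 10})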
-
-Plugins
-=======
-
-Debug
------
-
-This plugin provides a simple mechanism to debug certain components of a
-workflow without executing any node.
-
-Mandatory arguments::
-
- callable : A function handle that receives as arguments a node and a graph
-
-The callable function will be called for every node from a topological sort of the
-execution graph.
-
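-A minimal sketch; ``inspect_node`` is a hypothetical function, and no node is
-actually executed::
-
-    def inspect_node(node, graph):
-        # called once per node, in topological order
-        print(node.name)
-
-    workflow.run(plugin='Debug', plugin_args={'callable': inspect_node})
-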
-Linear
-------
-
-This plugin runs the workflow one node at a time in a single process locally.
-The order of the nodes is determined by a topological sort of the workflow::
-
- workflow.run(plugin='Linear')
-
-MultiProc
----------
-
-Uses the Python_ multiprocessing library to distribute jobs as new processes on
-a local system.
-
-Optional arguments::
-
-    n_procs : Number of processes to launch in parallel. If not set, the
-        number of processors/threads will be automatically detected.
-
-    memory_gb : Total memory available to be shared by all simultaneous tasks
-        currently running. If not set, it will be automatically set to 90% of
-        system RAM.
-
-    raise_insufficient : Raise an exception when the estimated resources of a
-        node exceed the total amount of resources available (memory and
-        threads); when ``False`` (default), only a warning will be issued.
-
-    maxtasksperchild : Number of nodes to run on each process before
-        refreshing the worker (default: 10).
-
-
-To distribute processing on a multicore machine, simply call::
-
- workflow.run(plugin='MultiProc')
-
-This will use all available CPUs. If, on the other hand, you would like to
-restrict the number of used resources (to, say, 2 CPUs), you can call::
-
-    workflow.run(plugin='MultiProc', plugin_args={'n_procs': 2})
-
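-The other optional arguments can be passed in the same dictionary; a sketch
-with arbitrary limits::
-
-    workflow.run(plugin='MultiProc',
-                 plugin_args={'n_procs': 2,
-                              'memory_gb': 8,
-                              'raise_insufficient': True})
-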
-IPython
--------
-
-This plugin provides access to distributed computing using IPython_ parallel
-machinery.
-
-.. note::
-
-    Please read the IPython_ documentation to determine how to set up your
-    cluster for distributed processing. This typically involves calling
-    ipcluster.
-
-Once the clients have been started, any pipeline can be executed by calling::
-
- workflow.run(plugin='IPython')
-
-
-SGE/PBS
--------
-
-In order to use nipype with SGE_ or PBS_ you simply need to call::
-
- workflow.run(plugin='SGE')
- workflow.run(plugin='PBS')
-
-Optional arguments::
-
- template: custom template file to use
- qsub_args: any other command line args to be passed to qsub.
- max_jobname_len: (PBS only) maximum length of the job name. Default 15.
-
-For example, the following snippet executes the workflow on myqueue with
-a custom template::
-
-    workflow.run(plugin='SGE',
-                 plugin_args=dict(template='mytemplate.sh', qsub_args='-q myqueue'))
-
-In addition to overall workflow configuration, you can use node level
-configuration for PBS/SGE::
-
- node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3'}
-
-This would apply only to that node and is useful in situations where a
-particular node might use more resources than other nodes in a workflow.
-
-.. note::
-
- Setting the keyword `overwrite` would overwrite any global configuration with
- this local configuration::
-
- node.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}
-
-SGEGraph
-~~~~~~~~
-SGEGraph_ is an execution plugin working with Sun Grid Engine that allows for
-submitting an entire graph of dependent jobs at once. This way Nipype does not
-need to run a monitoring process; SGE takes care of this. The use of SGEGraph_
-is preferred over SGE_ since the latter adds unnecessary load on the submit
-machine.
-
-.. note::
-
- When rerunning unfinished workflows using SGEGraph you may decide not to
- submit jobs for Nodes that previously finished running. This can speed up
- execution, but new or modified inputs that would previously trigger a Node
- to rerun will be ignored. The following option turns on this functionality::
-
- workflow.run(plugin='SGEGraph', plugin_args = {'dont_resubmit_completed_jobs': True})
-
-LSF
----
-
-Submitting via LSF is almost identical to SGE above, except for the optional arguments field::
-
- workflow.run(plugin='LSF')
-
-Optional arguments::
-
- template: custom template file to use
- bsub_args: any other command line args to be passed to bsub.
-
-SLURM
------
-
-Submitting via SLURM is almost identical to SGE above, except for the optional arguments field::
-
- workflow.run(plugin='SLURM')
-
-Optional arguments::
-
-    template: custom template file to use
-    sbatch_args: any other command line args to be passed to sbatch.
-    jobid_re: regular expression for custom job submission id search
-
-
-SLURMGraph
-~~~~~~~~~~
-SLURMGraph_ is an execution plugin working with SLURM that allows for
-submitting an entire graph of dependent jobs at once. This way Nipype does not
-need to run a monitoring process; SLURM takes care of this. The use of the SLURMGraph_
-plugin is preferred over the vanilla SLURM_ plugin since the latter adds
-unnecessary load on the submit machine.
-
-
-.. note::
-
- When rerunning unfinished workflows using SLURMGraph you may decide not to
- submit jobs for Nodes that previously finished running. This can speed up
- execution, but new or modified inputs that would previously trigger a Node
- to rerun will be ignored. The following option turns on this functionality::
-
- workflow.run(plugin='SLURMGraph', plugin_args = {'dont_resubmit_completed_jobs': True})
-
-
-HTCondor
---------
-
-DAGMan
-~~~~~~
-
-With its DAGMan_ component HTCondor_ (previously Condor) allows for submitting
-entire graphs of dependent jobs at once (similar to SGEGraph_ and SLURMGraph_).
-With the ``CondorDAGMan`` plug-in Nipype can utilize this functionality to
-submit complete workflows directly and in a single step. Consequently, and
-in contrast to other plug-ins, workflow execution returns almost
-instantaneously -- Nipype is only used to generate the workflow graph,
-while job scheduling and dependency resolution are entirely managed by HTCondor_.
-
-Please note that although DAGMan_ supports specification of data dependencies
-as well as data provisioning on compute nodes this functionality is currently
-not supported by this plug-in. As with all other batch systems supported by
-Nipype, only HTCondor pools with a shared file system can be used to process
-Nipype workflows.
-
-Workflow execution with HTCondor DAGMan is done by calling::
-
- workflow.run(plugin='CondorDAGMan')
-
-Job execution behavior can be tweaked with the following optional plug-in
-arguments. The value of most arguments can be a literal string or a filename,
-where in the latter case the content of the file will be used as the argument
-value::
-
-    submit_template : submit spec template for individual jobs in a DAG (see
-        CondorDAGManPlugin.default_submit_template for the default)
-    initial_specs : additional submit specs that are prepended to any job's
-        submit file
-    override_specs : additional submit specs that are appended to any job's
-        submit file
-    wrapper_cmd : path to an executable that will be started instead of a node
-        script. This is useful for wrapper scripts that execute certain
-        functionality before or after a node runs. If this option is given,
-        the wrapper command is called with the respective Python executable
-        and the path to the node script as final arguments
-    wrapper_args : optional additional arguments to a wrapper command
-    dagman_args : arguments to be prepended to the job execution script in the
-        dagman call
-    block : if True the plugin call will block until Condor has finished
-        processing the entire workflow (default: False)
-
-Please see the `HTCondor documentation`_ for details on possible configuration
-options and command line arguments.
-
-Using the ``wrapper_cmd`` argument it is possible to combine Nipype workflow
-execution with checkpoint/migration functionality offered by, for example,
-DMTCP_. This is especially useful in the case of workflows with long running
-nodes, such as Freesurfer's recon-all pipeline, where Condor's job
-prioritization algorithm could lead to jobs being evicted from compute
-nodes in order to maximize overall throughput. With checkpoint/migration enabled,
-such a job would be checkpointed prior to eviction and resume work from the
-checkpointed state after being rescheduled, instead of restarting from
-scratch.
-
-On a Debian system, executing a workflow with support for checkpoint/migration
-for all nodes could look like this::
-
- # define common parameters
- dmtcp_hdr = """
- should_transfer_files = YES
- when_to_transfer_output = ON_EXIT_OR_EVICT
- kill_sig = 2
- environment = DMTCP_TMPDIR=./;JALIB_STDERR_PATH=/dev/null;DMTCP_PREFIX_ID=$(CLUSTER)_$(PROCESS)
- """
- shim_args = "--log %(basename)s.shimlog --stdout %(basename)s.shimout --stderr %(basename)s.shimerr"
- # run workflow
- workflow.run(
- plugin='CondorDAGMan',
- plugin_args=dict(initial_specs=dmtcp_hdr,
- wrapper_cmd='/usr/lib/condor/shim_dmtcp',
- wrapper_args=shim_args)
- )
-
-OAR
----
-
-In order to use nipype with OAR_ you simply need to call::
-
- workflow.run(plugin='OAR')
-
-Optional arguments::
-
-    template: custom template file to use
-    oarsub_args: any other command line args to be passed to oarsub.
-    max_jobname_len: maximum length of the job name. Default 15.
-
-For example, the following snippet executes the workflow on myqueue with
-a custom template::
-
-    workflow.run(plugin='OAR',
-                 plugin_args=dict(template='mytemplate.sh', oarsub_args='-q myqueue'))
-
-In addition to overall workflow configuration, you can use node level
-configuration for OAR::
-
- node.plugin_args = {'overwrite': True, 'oarsub_args': '-l "nodes=1/cores=3"'}
-
-This would apply only to that node and is useful in situations where a
-particular node might use more resources than other nodes in a workflow.
-You need to set the 'overwrite' flag to bypass the general settings template you defined for the other nodes.
-
-
-``qsub`` emulation
-~~~~~~~~~~~~~~~~~~
-
-.. note::
-
- This plug-in is deprecated and users should migrate to the more robust and
- more versatile ``CondorDAGMan`` plug-in.
-
-Despite the differences between HTCondor and SGE-like batch systems the plugin
-usage (incl. supported arguments) is almost identical. The HTCondor plugin relies
-on a ``qsub`` emulation script for HTCondor, called ``condor_qsub`` that can be
-obtained from a `Git repository on git.debian.org`_. This script is currently
-not shipped with a standard HTCondor distribution, but is included in the HTCondor
-package from http://neuro.debian.net. It is sufficient to download this script
-and install it in any location on a system that is included in the ``PATH``
-configuration.
-
-.. _Git repository on git.debian.org: http://anonscm.debian.org/gitweb/?p=pkg-exppsy/condor.git;a=blob_plain;f=debian/condor_qsub;hb=HEAD
-
-Running a workflow in an HTCondor pool is done by calling::
-
- workflow.run(plugin='Condor')
-
-The plugin supports a limited set of qsub arguments (``qsub_args``) that cover
-the most common use cases. The ``condor_qsub`` emulation script translates qsub
-arguments into the corresponding HTCondor terminology and handles the actual job
-submission. For details on supported options see the manpage of ``condor_qsub``.
-
-Optional arguments::
-
- qsub_args: any other command line args to be passed to condor_qsub.
-
-.. include:: ../links_names.txt
-
-.. _SGE: http://www.oracle.com/us/products/tools/oracle-grid-engine-075549.html
-.. _OGE: http://www.oracle.com/us/products/tools/oracle-grid-engine-075549.html
-.. _Soma: http://brainvisa.info/soma/soma-workflow/
-.. _PBS: http://www.clusterresources.com/products/torque-resource-manager.php
-.. _LSF: http://www.platform.com/Products/platform-lsf
-.. _HTCondor: http://www.cs.wisc.edu/htcondor/
-.. _DAGMan: http://research.cs.wisc.edu/htcondor/dagman/dagman.html
-.. _HTCondor documentation: http://research.cs.wisc.edu/htcondor/manual
-.. _DMTCP: http://dmtcp.sourceforge.net
-.. _SLURM: http://slurm.schedmd.com/
diff --git a/doc/users/resource_sched_profiler.rst b/doc/users/resource_sched_profiler.rst
deleted file mode 100644
index 7fa0819c19..0000000000
--- a/doc/users/resource_sched_profiler.rst
+++ /dev/null
@@ -1,160 +0,0 @@
-.. _resource_sched_profiler:
-
-=============================================
-Resource Scheduling and Profiling with Nipype
-=============================================
-The latest version of Nipype supports system resource scheduling and profiling.
-These features allow users to ensure high throughput of their data processing
-while also controlling the amount of computing resources a given workflow will
-use.
-
-
-Specifying Resources in the Node Interface
-==========================================
-Each ``Node`` instance interface has two parameters that specify its expected
-thread and memory usage: ``num_threads`` and ``estimated_memory_gb``. If a
-particular node is expected to use 8 threads and 2 GB of memory:
-
-::
-
-    import nipype.pipeline.engine as pe
-    import nipype.interfaces.fsl as fsl
-
-    # any interface can be used here; BET serves only as a placeholder example
-    node = pe.Node(interface=fsl.BET(), name='bet_node')
-    node.interface.num_threads = 8
-    node.interface.estimated_memory_gb = 2
-
-If the resource parameters are never set, they default to being 1 thread and 1
-GB of RAM.
-
-
-Resource Scheduler
-==================
-The ``MultiProc`` workflow plugin schedules node execution based on the
-resources used by the current running nodes and the total resources available to
-the workflow. The plugin utilizes the plugin arguments ``n_procs`` and
-``memory_gb`` to set the maximum resources a workflow can utilize. To limit a
-workflow to using 8 cores and 10 GB of RAM:
-
-::
-
- args_dict = {'n_procs' : 8, 'memory_gb' : 10}
- workflow.run(plugin='MultiProc', plugin_args=args_dict)
-
-If these values are not specifically set then the plugin will assume it can
-use all of the processors and memory on the system. For example, if the machine
-has 16 cores and 12 GB of RAM, the workflow will internally assume those values
-for ``n_procs`` and ``memory_gb``, respectively.
-
-The plugin will then queue eligible nodes for execution based on their expected
-usage via the ``num_threads`` and ``estimated_memory_gb`` interface parameters.
-If the plugin sees that only 3 of its 8 processors and 4 GB of its 10 GB of RAM
-are being used by running nodes, it will attempt to execute the next available
-node as long as its ``num_threads <= 5`` and ``estimated_memory_gb <= 6``. If
-this is not the case, it will continue to check every available node in the
-queue until it sees a node that meets these conditions, or it waits for an
-executing node to finish to earn back the necessary resources. The priority of
-the queue is highest for nodes with the most ``estimated_memory_gb`` followed
-by nodes with the most expected ``num_threads``.
-
-
-Runtime Profiler and using the Callback Log
-===========================================
-It is not always easy to estimate the amount of resources a particular function
-or command uses. To help with this, Nipype provides some feedback about the
-system resources used by every node during workflow execution via the built-in
-runtime profiler. The runtime profiler is automatically enabled if the
-psutil_ Python package is installed and found on the system.
-
-.. _psutil: https://pythonhosted.org/psutil/
-
-If the package is not found, the workflow will run normally without the runtime
-profiler.
-
-The runtime profiler records the number of threads and the amount of memory (GB)
-used as ``runtime_threads`` and ``runtime_memory_gb`` in the Node's
-``result.runtime`` attribute. Since the node object is pickled and written to
-disk in its working directory, these values are available for analysis after
-node or workflow execution by manually parsing the pickle file contents.
-
-Nipype also provides a logging mechanism for saving node runtime statistics to
-a JSON-style log file via the ``log_nodes_cb`` logger function. This is enabled
-by setting the ``status_callback`` parameter to point to this function in the
-``plugin_args`` when using the ``MultiProc`` plugin.
-
-::
-
- from nipype.utils.profiler import log_nodes_cb
- args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}
-
-To set the filepath for the callback log, the ``'callback'`` logger must be
-configured.
-
-::
-
- # Set path to log file
- import logging
- callback_log_path = '/home/user/run_stats.log'
- logger = logging.getLogger('callback')
- logger.setLevel(logging.DEBUG)
- handler = logging.FileHandler(callback_log_path)
- logger.addHandler(handler)
-
-Finally, the workflow can be run.
-
-::
-
- workflow.run(plugin='MultiProc', plugin_args=args_dict)
-
-After the workflow finishes executing, the log file at
-"/home/user/run_stats.log" can be parsed for the runtime statistics. Here is an
-example of what the contents would look like:
-
-::
-
- {"name":"resample_node","id":"resample_node",
- "start":"2016-03-11 21:43:41.682258",
- "estimated_memory_gb":2,"num_threads":1}
- {"name":"resample_node","id":"resample_node",
- "finish":"2016-03-11 21:44:28.357519",
- "estimated_memory_gb":"2","num_threads":"1",
- "runtime_threads":"3","runtime_memory_gb":"1.118469238281"}
-
-Here it can be seen that the number of threads was underestimated while the
-amount of memory needed was overestimated. The next time this workflow is run
-the user can change the node interface ``num_threads`` and
-``estimated_memory_gb`` parameters to reflect this for a higher pipeline
-throughput. Note, sometimes the "runtime_threads" value is higher than expected,
-particularly for multi-threaded applications. Tools can implement
-multi-threading in different ways under-the-hood; the profiler merely traverses
-the process tree to return all running threads associated with that process,
-some of which may include active thread-monitoring daemons or transient
-processes.
-
-
-Visualizing Pipeline Resources
-==============================
-Nipype provides the ability to visualize the workflow execution based on the
-runtimes and system resources each node takes. It does this using the log file
-generated from the callback logger after workflow execution, as shown above.
-The pandas_ Python package is required to use this feature.
-
-.. _pandas: http://pandas.pydata.org/
-
-::
-
- from nipype.utils.profiler import log_nodes_cb
- args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}
- workflow.run(plugin='MultiProc', plugin_args=args_dict)
-
- # ...workflow finishes and writes callback log to '/home/user/run_stats.log'
-
- from nipype.utils.draw_gantt_chart import generate_gantt_chart
- generate_gantt_chart('/home/user/run_stats.log', cores=8)
- # ...creates gantt chart in '/home/user/run_stats.log.html'
-
-The ``generate_gantt_chart`` function will create an html file that can be viewed
-in a browser. Below is an example of the gantt chart displayed in a web browser.
-Note that when the cursor is hovered over any particular node bubble or resource
-bubble, some additional information is shown in a pop-up.
-
-.. image:: images/gantt_chart.png
-   :width: 100 %
diff --git a/doc/users/saving_workflows.rst b/doc/users/saving_workflows.rst
deleted file mode 100644
index 8942103519..0000000000
--- a/doc/users/saving_workflows.rst
+++ /dev/null
@@ -1,105 +0,0 @@
-.. _saving_workflows:
-
-===================================================
-Saving Workflows and Nodes to a file (experimental)
-===================================================
-
-On top of the standard way of saving (i.e. serializing) objects in Python
-(see `pickle `_) Nipype
-provides methods to turn Workflows and nodes into human readable code.
-This is useful if you want to save a Workflow that you have generated
-on the fly for future use.
-
-To generate Python code for a Workflow use the export method:
-
-.. testcode::
-
- from nipype.interfaces.fsl import BET, ImageMaths
- from nipype.pipeline.engine import Workflow, Node, MapNode, format_node
- from nipype.interfaces.utility import Function, IdentityInterface
-
- bet = Node(BET(), name='bet')
- bet.iterables = ('frac', [0.3, 0.4])
-
- bet2 = MapNode(BET(), name='bet2', iterfield=['infile'])
- bet2.iterables = ('frac', [0.4, 0.5])
-
- maths = Node(ImageMaths(), name='maths')
-
- def testfunc(in1):
- """dummy func
- """
- out = in1 + 'foo' + "out1"
- return out
-
- funcnode = Node(Function(input_names=['a'], output_names=['output'], function=testfunc),
- name='testfunc')
- funcnode.inputs.in1 = '-sub'
- func = lambda x: x
-
- inode = Node(IdentityInterface(fields=['a']), name='inode')
-
- wf = Workflow('testsave')
- wf.add_nodes([bet2])
- wf.connect(bet, 'mask_file', maths, 'in_file')
- wf.connect(bet2, ('mask_file', func), maths, 'in_file2')
- wf.connect(inode, 'a', funcnode, 'in1')
- wf.connect(funcnode, 'output', maths, 'op_string')
-
- wf.export()
-
-This will create a file "outputtestsave.py" with the following content:
-
-.. testcode::
-
- from nipype.pipeline.engine import Workflow, Node, MapNode
- from nipype.interfaces.utility import IdentityInterface
- from nipype.interfaces.utility import Function
- from nipype.utils.functions import getsource
- from nipype.interfaces.fsl.preprocess import BET
- from nipype.interfaces.fsl.utils import ImageMaths
- # Functions
- func = lambda x: x
- # Workflow
- testsave = Workflow("testsave")
- # Node: testsave.inode
- inode = Node(IdentityInterface(fields=['a'], mandatory_inputs=True), name="inode")
- # Node: testsave.testfunc
- testfunc = Node(Function(input_names=['a'], output_names=['output']), name="testfunc")
- testfunc.interface.ignore_exception = False
- def testfunc_1(in1):
- """dummy func
- """
- out = in1 + 'foo' + "out1"
- return out
-
- testfunc.inputs.function_str = getsource(testfunc_1)
- testfunc.inputs.in1 = '-sub'
- testsave.connect(inode, "a", testfunc, "in1")
- # Node: testsave.bet2
- bet2 = MapNode(BET(), iterfield=['infile'], name="bet2")
- bet2.interface.ignore_exception = False
- bet2.iterables = ('frac', [0.4, 0.5])
- bet2.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
- bet2.inputs.output_type = 'NIFTI_GZ'
- bet2.terminal_output = 'stream'
- # Node: testsave.bet
- bet = Node(BET(), name="bet")
- bet.interface.ignore_exception = False
- bet.iterables = ('frac', [0.3, 0.4])
- bet.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
- bet.inputs.output_type = 'NIFTI_GZ'
- bet.terminal_output = 'stream'
- # Node: testsave.maths
- maths = Node(ImageMaths(), name="maths")
- maths.interface.ignore_exception = False
- maths.inputs.environ = {'FSLOUTPUTTYPE': 'NIFTI_GZ'}
- maths.inputs.output_type = 'NIFTI_GZ'
- maths.terminal_output = 'stream'
- testsave.connect(bet2, ('mask_file', func), maths, "in_file2")
- testsave.connect(bet, "mask_file", maths, "in_file")
- testsave.connect(testfunc, "output", maths, "op_string")
-
-The file is ready to use and includes all the necessary imports.
-
-.. include:: ../links_names.txt
diff --git a/doc/users/select_files.rst b/doc/users/select_files.rst
deleted file mode 100644
index 3512985161..0000000000
--- a/doc/users/select_files.rst
+++ /dev/null
@@ -1,75 +0,0 @@
-.. _select_files:
-
-==========================
-The SelectFiles Interfaces
-==========================
-
-Nipype 0.9 introduces a new interface for intelligently finding files on the
-disk and feeding them into your workflows: :ref:`SelectFiles
-`. SelectFiles is intended as a simpler
-alternative to the :ref:`DataGrabber `
-interface that was discussed previously in :doc:`grabbing_and_sinking`.
-
-SelectFiles is built on Python `format strings
-`_, which
-are similar to the Python string interpolation feature you are likely already
-familiar with, but advantageous in several respects. Format strings allow you
-to replace named sections of template strings set off by curly braces (`{}`),
-possibly filtered through a set of functions that control how the values are
-rendered into the string. As a very basic example, we could write
-
-::
-
- msg = "This workflow uses {package}"
-
-and then format it with keyword arguments::
-
-    print(msg.format(package="FSL"))
-
-SelectFiles only requires that you provide templates that can be used to find
-your data; the actual formatting happens behind the scenes.
-
-Consider a basic example in which you want to select a T1 image and multiple
-functional images for a number of subjects. Invoking SelectFiles in this case
-is quite straightforward::
-
- from nipype import SelectFiles
- templates = dict(T1="data/{subject_id}/struct/T1.nii",
- epi="data/{subject_id}/func/epi_run*.nii")
- sf = SelectFiles(templates)
-
-SelectFiles will take the `templates` dictionary and parse it to determine its
-own inputs and outputs. Specifically, each name used in the format spec (here
-just `subject_id`) will become an interface input, and each key in the
-dictionary (here `T1` and `epi`) will become interface outputs. The `templates`
-dictionary thus succinctly links the node inputs to the appropriate outputs.
-You'll also note that, as was the case with DataGrabber, you can use basic
-`glob `_ syntax to match multiple
-files for a given output field. Additionally, any of the conversions outlined in the Python documentation for format strings can be used in the templates.
-
-There are a few other options that help make SelectFiles flexible enough to
-deal with any situation where you need to collect data. Like DataGrabber,
-SelectFiles has a `base_directory` parameter that allows you to specify a path
-that is common to all of the values in the `templates` dictionary.
-Additionally, as `glob` does not return a sorted list, there is also a
-`sort_filelist` option, taking a boolean, to control whether sorting should be
-applied (it is True by default).
-
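-For example (the base directory shown is hypothetical)::
-
-    sf = SelectFiles(templates,
-                     base_directory="/data/my_project",
-                     sort_filelist=True)
-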
-The final input is `force_lists`, which controls how SelectFiles behaves in
-cases where only a single file matches the template. The default behavior is
-that when a template matches multiple files they are returned as a list, while
-a single file is returned as a string. There may be situations where you want
-to force the outputs to always be returned as a list (for example, you are
-writing a workflow that expects to operate on several runs of data, but some of
-your subjects only have a single run). In this case, `force_lists` can be used
-to tune the outputs of the interface. You can either use a boolean value, which
-will be applied to every output the interface has, or you can provide a list of
-the output fields that should be coerced to a list. Returning to our basic
-example, you may want to ensure that the `epi` files are returned as a list,
-but you only ever will have a single `T1` file. In this case, you would do
-
-::
-
- sf = SelectFiles(templates, force_lists=["epi"])
-
-.. include:: ../links_names.txt
diff --git a/doc/users/sphinx_ext.rst b/doc/users/sphinx_ext.rst
deleted file mode 100644
index 9e6732a2ef..0000000000
--- a/doc/users/sphinx_ext.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-.. _sphinx_ext:
-
-Sphinx extensions
------------------
-
-
-To help users document their *Nipype*-based code, the software is shipped
-with a set of extensions (currently only one) to customize the appearance
-and simplify the generation process.
-
-.. automodule:: nipype.sphinxext.plot_workflow
- :undoc-members:
- :noindex:
diff --git a/doc/users/spmmcr.rst b/doc/users/spmmcr.rst
deleted file mode 100644
index 376741a2c9..0000000000
--- a/doc/users/spmmcr.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-.. _spmmcr:
-
-====================================
-Using SPM with MATLAB Common Runtime
-====================================
-
-In order to use the standalone MCR version of spm, you need to ensure that
-the following commands are executed at the beginning of your script:
-
-.. testcode::
-
- from nipype.interfaces import spm
- matlab_cmd = '/path/to/run_spm8.sh /path/to/Compiler_Runtime/v713/ script'
- spm.SPMCommand.set_mlab_paths(matlab_cmd=matlab_cmd, use_mcr=True)
-
-You can test it by calling:
-
-.. testcode::
-
- spm.SPMCommand().version
-
-If you want to enforce the standalone MCR version of spm for nipype globally,
-you can do so by setting the following environment variables:
-
-*SPMMCRCMD*
- Specifies the command to use to run the spm standalone MCR version. You
- may still override the command as described above.
-
-*FORCE_SPMMCR*
- Set this to any value in order to enforce the use of spm standalone MCR
- version in nipype globally. Technically, this sets the `use_mcr` flag of
- the spm interface to True.
-
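-For instance, both variables could be set from Python before any SPM interface
-is used; the paths below are hypothetical:
-
-.. testcode::
-
-    import os
-    # hypothetical paths; adjust them to your installation
-    os.environ['SPMMCRCMD'] = '/path/to/run_spm8.sh /path/to/Compiler_Runtime/v713/ script'
-    os.environ['FORCE_SPMMCR'] = 'True'
-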
-Information about the MCR version of SPM8 can be found at:
-
-http://en.wikibooks.org/wiki/SPM/Standalone