[ENH] New ResourceMonitor (replaces resource profiler) #2200

Merged
merged 88 commits
Oct 4, 2017
Commits
da6681f
[WIP,ENH] Revision to the resource profiler
oesteban Sep 21, 2017
32c2f39
fix tests
oesteban Sep 22, 2017
0e2c581
Python 2 compatibility
oesteban Sep 22, 2017
5a8e7fe
add nipype_mprof
oesteban Sep 22, 2017
7d953cc
implement monitor in a parallel process
oesteban Sep 22, 2017
306c4ec
set profiling outputs to runtime object, read it from node execution
oesteban Sep 22, 2017
8a903f0
revise profiler callback
oesteban Sep 22, 2017
02fdbda
Merge remote-tracking branch 'upstream/master' into enh/ReviseResourc…
oesteban Sep 24, 2017
e3982d7
robuster constructor
oesteban Sep 25, 2017
48f87af
remove unused import
oesteban Sep 25, 2017
46dde32
various fixes
oesteban Sep 25, 2017
9d70a2f
cleaning up code
oesteban Sep 25, 2017
1fabd25
remove comment
oesteban Sep 25, 2017
ecedfcf
interface.base cleanup
oesteban Sep 25, 2017
2d35959
update new config settings
oesteban Sep 25, 2017
3f34711
make naming consistent across tests
oesteban Sep 25, 2017
99ded42
implement raise_insufficient
oesteban Sep 26, 2017
b0d25bd
fix test
oesteban Sep 26, 2017
2a37693
fix test (amend previous commit)
oesteban Sep 26, 2017
10d0f39
address review comments
oesteban Sep 26, 2017
62a6593
fix typo
oesteban Sep 26, 2017
d6401f3
fixes to the tear-up section of interfaces
oesteban Sep 26, 2017
ce3f08a
fix NoSuchProcess exception
oesteban Sep 26, 2017
ffb7509
making monitor robuster
oesteban Sep 26, 2017
7b7846b
Merge remote-tracking branch 'upstream/master' into enh/ReviseResourc…
oesteban Sep 26, 2017
c9b474b
first functional prototype
oesteban Sep 26, 2017
117924c
Merge remote-tracking branch 'upstream/master' into enh/ReviseResourc…
oesteban Sep 27, 2017
cf1f15b
add warning to old filemanip logger
oesteban Sep 27, 2017
4b7ab93
do not search for filemanip_level in config
oesteban Sep 27, 2017
c7a1992
fix CommandLine interface doctest
oesteban Sep 27, 2017
80eb342
fix tests
oesteban Sep 27, 2017
c166c1d
fix location of use_resources
oesteban Sep 27, 2017
16c195a
fix attribute error when input spec is not standard
oesteban Sep 27, 2017
cda3a5e
re-include filemanip logger into config documentation
oesteban Sep 27, 2017
166205a
minor additions to resource_monitor option
oesteban Sep 27, 2017
6045c93
fix resource_monitor tests
oesteban Sep 27, 2017
66a89c4
run build 2 (the shortest) with the resource monitor on
oesteban Sep 27, 2017
04adabd
fix unbound variable
oesteban Sep 27, 2017
c83c407
collect resource_monitor info after run
oesteban Sep 27, 2017
f8a9fc7
reduce resource_monitor_frequency on tests (and we test it works)
oesteban Sep 27, 2017
8af3775
store a new trace before exit
oesteban Sep 27, 2017
0b00a20
run resource_monitor only for level2 of fmri_spm_nested, switch pytho…
oesteban Sep 27, 2017
6402981
cleaning up MultiProc
oesteban Sep 27, 2017
b9537b5
do not access __array__() of matrices
oesteban Sep 27, 2017
9a609b6
do not access __array__() of matrices (now base plugin)
oesteban Sep 27, 2017
c7fbb61
do not access __array__() of matrices (revise)
oesteban Sep 27, 2017
a40eb3b
restore those __array__()
oesteban Sep 27, 2017
8e710fa
address @satra's comments
oesteban Sep 27, 2017
1bccef7
refactoring multiproc to fix deadlock
oesteban Sep 28, 2017
5d13229
do not import iflogger from base plugin
oesteban Sep 28, 2017
a503fc8
improve logging traces
oesteban Sep 28, 2017
cb6ef02
fix tests
oesteban Sep 28, 2017
9c2a8da
add test
oesteban Sep 28, 2017
2fcaa45
improve debugging traces
oesteban Sep 28, 2017
983ac37
make open python 2 compatible
oesteban Sep 28, 2017
ac19d23
address @effigies' comments
oesteban Sep 28, 2017
7fbd869
fix error accessing config.has_option
oesteban Sep 28, 2017
b338e4f
make write_workflow_resources python 2 compatible (take 2)
oesteban Sep 28, 2017
17d205d
circle tests - write txt crashfiles
oesteban Sep 28, 2017
6edd5b5
fix outdated call to get_mem_gb and get_n_procs
oesteban Sep 28, 2017
430a3b4
fix tests
oesteban Sep 28, 2017
8a5e7a3
add MultiProc scheduler option
oesteban Sep 28, 2017
43f32d5
fix initialization of NipypeConfig
oesteban Sep 28, 2017
b7b860b
add more documentation to MultiProc
oesteban Sep 28, 2017
0137243
remove some code duplication checking hash locally
oesteban Sep 29, 2017
6e00306
fix linting
oesteban Sep 29, 2017
43ff268
remove leftover line that @satra spotted
oesteban Sep 29, 2017
d09ca59
improve logging to understand https://github.com/nipy/nipype/pull/220…
oesteban Sep 29, 2017
013be14
prevent writing to same monitor
oesteban Sep 29, 2017
427e668
improve documentation of DistributedBasePlugin
oesteban Sep 29, 2017
34f9824
fix mistaken property name
oesteban Sep 29, 2017
4883211
mv final resource_monitor.json to logs/ folder
oesteban Sep 30, 2017
831071b
fix: add new config options and defaults to default_cfg
satra Sep 30, 2017
966b7f1
improving measurement of resources, use oneshot from psutils>=5.0
oesteban Oct 1, 2017
403961f
fix unconsistency of runtime attributes
oesteban Oct 1, 2017
4e85b75
Merge pull request #6 from satra/ref/resourcemon
oesteban Oct 2, 2017
e7bc888
enable all tests in test_resource_monitor
oesteban Oct 2, 2017
ef097a6
do not delete num_threads, check inputs also
oesteban Oct 2, 2017
4fdce5c
several fixups
oesteban Oct 2, 2017
31a4952
retrieve num_threads from Interface object
oesteban Oct 2, 2017
5fb992e
fix unnecessary, preemptive float castings
oesteban Oct 2, 2017
2e1b2ce
a more consistent resource_monitor checking
oesteban Oct 2, 2017
345e978
do not hook _nthreads_update to inputs.num_threads changes for afni i…
oesteban Oct 2, 2017
1d7afbc
quickly return to polling function when no resources are available
oesteban Oct 2, 2017
c42473a
remove logging from run_node which blocked mriqc, improve logging of …
oesteban Oct 3, 2017
8a887d3
Merge remote-tracking branch 'upstream/master' into enh/ReviseResourc…
oesteban Oct 3, 2017
d2599e2
disable resource_monitor tests when running tests in Circle and Travis
oesteban Oct 3, 2017
678bb1a
let the inner interface set _n_procs and _mem_gb
oesteban Oct 3, 2017
8 changes: 4 additions & 4 deletions .circle/tests.sh
@@ -17,8 +17,8 @@ fi
# They may need to be rebalanced in the future.
case ${CIRCLE_NODE_INDEX} in
0)
- docker run --rm=false -it -e FSL_COURSE_DATA="/data/examples/nipype-fsl_course_data" -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_pytests.sh && \
- docker run --rm=false -it -e FSL_COURSE_DATA="/data/examples/nipype-fsl_course_data" -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py27 /usr/bin/run_pytests.sh && \
+ docker run --rm=false -it -e CI_SKIP_TEST=1 -e NIPYPE_RESOURCE_MONITOR=1 -e FSL_COURSE_DATA="/data/examples/nipype-fsl_course_data" -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_pytests.sh && \
+ docker run --rm=false -it -e CI_SKIP_TEST=1 -e NIPYPE_RESOURCE_MONITOR=1 -e FSL_COURSE_DATA="/data/examples/nipype-fsl_course_data" -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py27 /usr/bin/run_pytests.sh && \
docker run --rm=false -it -v $WORKDIR:/work -w /src/nipype/doc --entrypoint=/usr/bin/run_builddocs.sh nipype/nipype:py36 /usr/bin/run_builddocs.sh && \
docker run --rm=false -it -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_examples.sh test_spm Linear /data/examples/ workflow3d && \
docker run --rm=false -it -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_examples.sh test_spm Linear /data/examples/ workflow4d
@@ -30,8 +30,8 @@ case ${CIRCLE_NODE_INDEX} in
exitcode=$?
;;
2)
- docker run --rm=false -it -e NIPYPE_NUMBER_OF_CPUS=4 -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py27 /usr/bin/run_examples.sh fmri_spm_nested MultiProc /data/examples/ level1 && \
- docker run --rm=false -it -e NIPYPE_NUMBER_OF_CPUS=4 -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_examples.sh fmri_spm_nested MultiProc /data/examples/ l2pipeline
+ docker run --rm=false -it -e NIPYPE_NUMBER_OF_CPUS=4 -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py36 /usr/bin/run_examples.sh fmri_spm_nested MultiProc /data/examples/ level1 && \
+ docker run --rm=false -it -e NIPYPE_NUMBER_OF_CPUS=4 -e NIPYPE_RESOURCE_MONITOR=1 -v $HOME/examples:/data/examples:ro -v $WORKDIR:/work -w /work nipype/nipype:py27 /usr/bin/run_examples.sh fmri_spm_nested MultiProc /data/examples/ l2pipeline
exitcode=$?
;;
3)
8 changes: 4 additions & 4 deletions .travis.yml
@@ -8,10 +8,10 @@ python:
- 3.5
- 3.6
env:
- - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler"
- - INSTALL_DEB_DEPENDECIES=false NIPYPE_EXTRAS="doc,tests,fmri,profiler"
- - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler,duecredit"
- - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler" PIP_FLAGS="--pre"
+ - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler" CI_SKIP_TEST=1
+ - INSTALL_DEB_DEPENDECIES=false NIPYPE_EXTRAS="doc,tests,fmri,profiler" CI_SKIP_TEST=1
+ - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler,duecredit" CI_SKIP_TEST=1
+ - INSTALL_DEB_DEPENDECIES=true NIPYPE_EXTRAS="doc,tests,fmri,profiler" PIP_FLAGS="--pre" CI_SKIP_TEST=1
before_install:
- function apt_inst {
if $INSTALL_DEB_DEPENDECIES; then sudo rm -rf /dev/shm; fi &&
141 changes: 77 additions & 64 deletions doc/users/config_file.rst
@@ -14,93 +14,97 @@ Logging
~~~~~~~

*workflow_level*
How detailed the logs regarding workflow should be (possible values:
``INFO`` and ``DEBUG``; default value: ``INFO``)
*utils_level*
How detailed the logs regarding nipype utils, like file operations
(for example overwriting warning) or the resource profiler, should be
(possible values: ``INFO`` and ``DEBUG``; default value:
``INFO``)
*interface_level*
How detailed the logs regarding interface execution should be (possible
values: ``INFO`` and ``DEBUG``; default value: ``INFO``)
*filemanip_level* (deprecated as of 1.0)
How detailed the logs regarding file operations (for example overwriting
warning) should be (possible values: ``INFO`` and ``DEBUG``)
*log_to_file*
Indicates whether logging should also send the output to a file (possible
values: ``true`` and ``false``; default value: ``false``)
*log_directory*
Where to store logs. (string, default value: home directory)
*log_size*
Size of a single log file. (integer, default value: 254000)
*log_rotate*
How many rotations should the log file make. (integer, default value: 4)

Execution
~~~~~~~~~

*plugin*
This defines which execution plugin to use. (possible values: ``Linear``,
``MultiProc``, ``SGE``, ``IPython``; default value: ``Linear``)

*stop_on_first_crash*
Should the workflow stop upon first node crashing or try to execute as many
nodes as possible? (possible values: ``true`` and ``false``; default value:
``false``)

*stop_on_first_rerun*
Should the workflow stop upon first node trying to recompute (by that we
mean rerunning a node that has been run before - this can happen due to
changed inputs and/or hash_method since the last run). (possible values:
``true`` and ``false``; default value: ``false``)

*hash_method*
Should the input files be checked for changes using their content (slow, but
100% accurate) or just their size and modification date (fast, but
potentially prone to errors)? (possible values: ``content`` and
``timestamp``; default value: ``timestamp``)

*keep_inputs*
Ensures that all inputs that are created in the nodes working directory are
kept after node execution (possible values: ``true`` and ``false``; default
value: ``false``)

*single_thread_matlab*
Should all of the Matlab interfaces (including SPM) use only one thread?
This is useful if you are parallelizing your workflow using MultiProc or
IPython on a single multicore machine. (possible values: ``true`` and
``false``; default value: ``true``)

*display_variable*
What ``DISPLAY`` variable should all command line interfaces be
run with. This is useful if you are using `xnest
<http://www.x.org/archive/X11R7.5/doc/man/man1/Xnest.1.html>`_
or `Xvfb <http://www.x.org/archive/X11R6.8.1/doc/Xvfb.1.html>`_
and you would like to redirect all spawned windows to
it. (possible values: any X server address; default value: not
set)

*remove_unnecessary_outputs*
This will remove any interface outputs not needed by the workflow. If the
required outputs from a node change, rerunning the workflow will rerun the
node. Outputs of leaf nodes (nodes whose outputs are not connected to any
other nodes) will never be deleted, independent of this parameter. (possible
values: ``true`` and ``false``; default value: ``true``)

*try_hard_link_datasink*
When the DataSink is used to produce an organized output file outside
of nipype's internal cache structure, a file system hard link will be
attempted first. A hard link allows multiple file paths to point to the
same physical storage location on disk if the conditions allow. By
referring to the same physical file on disk (instead of copying files
byte-by-byte) we can avoid unnecessary data duplication. If hard links
are not supported for the source or destination paths specified, then
a standard byte-by-byte copy is used. (possible values: ``true`` and
``false``; default value: ``true``)

*use_relative_paths*
Should the paths stored in results (and used to look for inputs)
be relative or absolute. Relative paths allow moving the whole
working directory around but may cause problems with
symlinks. (possible values: ``true`` and ``false``; default
value: ``false``)

*local_hash_check*
Perform the hash check on the job submission machine. This option minimizes
@@ -115,10 +119,10 @@ Execution
done after a job finish is detected. (float in seconds; default value: 5)

*remove_node_directories (EXPERIMENTAL)*
Removes directories whose outputs have already been used
up. Doesn't work with IdentityInterface or any node that patches
data through (without copying) (possible values: ``true`` and
``false``; default value: ``false``)

*stop_on_unknown_version*
If this is set to True, an underlying interface will raise an error, when no
@@ -146,18 +150,27 @@ Execution
crashfiles allow portability across machines and shorter load time.
(possible values: ``pklz`` and ``txt``; default value: ``pklz``)

*resource_monitor*
Enables monitoring of resource usage (possible values: ``true`` and
``false``; default value: ``false``)

*resource_monitor_frequency*
Sampling period (in seconds) between measurements of resources (memory, cpus)
being used by an interface. Requires ``resource_monitor`` to be ``true``.
(default value: ``1``)

Review comment (Member): is there a default when resource_monitor=='true'?
Reply (Contributor Author): added default value to the documentation
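For intuition, the monitor these two options control samples resource usage from a parallel process at the configured period and stores one final trace before exiting (per the commit messages above). Below is a minimal pure-Python sketch of that sampling loop; the `sample()` callable is a stand-in for the real psutil-based measurements, and the dict shape is an assumption for illustration only:

```python
import time

def monitor(sample, period=1.0, n_samples=3):
    """Toy sampling loop: one measurement per period, plus a final
    trace before returning (mirroring the 'store a new trace before
    exit' behaviour; the real monitor runs in a separate process)."""
    traces = []
    for _ in range(n_samples):
        traces.append(sample())  # periodic measurement
        time.sleep(period)
    traces.append(sample())      # final trace before exit
    return traces

# Stand-in sampler returning fake RSS measurements (GB)
readings = iter([0.5, 0.9, 0.7, 0.7])
traces = monitor(lambda: {'rss_gb': next(readings)}, period=0.0, n_samples=3)
print(len(traces), max(t['rss_gb'] for t in traces))  # 4 0.9
```

The extra sample after the loop is why even very short-lived interfaces leave at least one trace in the output.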

Example
~~~~~~~

::

[logging]
workflow_level = DEBUG

[execution]
stop_on_first_crash = true
hash_method = timestamp
display_variable = :1

The Workflow.config property takes the form of a nested dictionary reflecting the
structure of the .cfg file.
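That correspondence can be demonstrated with the standard library's ``configparser`` (a sketch only; nipype's own config machinery is not used here):

```python
import configparser

# The example configuration from above, as written to ~/.nipype/nipype.cfg
cfg_text = """
[logging]
workflow_level = DEBUG

[execution]
stop_on_first_crash = true
hash_method = timestamp
display_variable = :1
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)

# Nested-dict view, mirroring the shape of Workflow.config
config = {section: dict(parser.items(section)) for section in parser.sections()}
print(config['logging']['workflow_level'])  # DEBUG
print(config['execution']['hash_method'])   # timestamp
```

Each .cfg section becomes an outer key, and each option becomes an inner key with a string value.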
8 changes: 8 additions & 0 deletions doc/users/plugins.rst
@@ -74,6 +74,14 @@ Optional arguments::
n_procs : Number of processes to launch in parallel, if not set number of
processors/threads will be automatically detected

memory_gb : Total memory available to be shared by all simultaneous tasks
currently running, if not set it will be automatically set to 90\% of
system RAM.

raise_insufficient : Raise an exception when the estimated resources of a node
exceed the total amount of resources available (memory and threads). When
``False`` (default), only a warning will be issued.

To distribute processing on a multicore machine, simply call::

workflow.run(plugin='MultiProc')
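The ``raise_insufficient`` semantics described above can be sketched as follows (an illustration only — the function name and signature below are not the actual MultiProc internals):

```python
import warnings

def check_resources(node_mem_gb, node_n_procs, memory_gb, n_procs,
                    raise_insufficient=False):
    """Compare a node's estimated needs against the plugin totals:
    raise when raise_insufficient is set, otherwise only warn."""
    if node_mem_gb > memory_gb or node_n_procs > n_procs:
        msg = ('node needs %.1f GB / %d procs; only %.1f GB / %d procs '
               'available' % (node_mem_gb, node_n_procs, memory_gb, n_procs))
        if raise_insufficient:
            raise RuntimeError(msg)
        warnings.warn(msg)
        return False
    return True

print(check_resources(2.0, 2, memory_gb=8.0, n_procs=4))   # True
print(check_resources(16.0, 2, memory_gb=8.0, n_procs=4))  # False (warns)
```

With ``raise_insufficient=True`` the second call would raise instead of warning, stopping the workflow before an oversubscribed node ever runs.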
4 changes: 2 additions & 2 deletions doc/users/resource_sched_profiler.rst
@@ -82,7 +82,7 @@ by setting the ``status_callback`` parameter to point to this function in the

::

- from nipype.pipeline.plugins.callback_log import log_nodes_cb
+ from nipype.utils.profiler import log_nodes_cb
args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}

To set the filepath for the callback log the ``'callback'`` logger must be
@@ -141,7 +141,7 @@ The pandas_ Python package is required to use this feature.

::

- from nipype.pipeline.plugins.callback_log import log_nodes_cb
+ from nipype.utils.profiler import log_nodes_cb
args_dict = {'n_procs' : 8, 'memory_gb' : 10, 'status_callback' : log_nodes_cb}
workflow.run(plugin='MultiProc', plugin_args=args_dict)
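The callback log that ``log_nodes_cb`` produces is a series of JSON records, one per node event, which pandas can load for analysis. A sketch of summarizing such records with only the standard library — note the field names below are assumptions for illustration, not the guaranteed log schema:

```python
import json

# Toy records in the general shape of a callback log (field names assumed)
log_lines = [
    '{"name": "realign", "runtime_memory_gb": 1.2, "runtime_threads": 2}',
    '{"name": "smooth", "runtime_memory_gb": 0.8, "runtime_threads": 1}',
    '{"name": "modelfit", "runtime_memory_gb": 2.5, "runtime_threads": 4}',
]

records = [json.loads(line) for line in log_lines]
# Node with the highest observed memory footprint
peak = max(records, key=lambda r: r['runtime_memory_gb'])
total_threads = sum(r['runtime_threads'] for r in records)
print(peak['name'], total_threads)  # modelfit 7
```

Summaries like this are what feed the resource-scheduling estimates discussed in this document.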

16 changes: 13 additions & 3 deletions docker/files/run_examples.sh
@@ -12,10 +12,18 @@ mkdir -p ${HOME}/.nipype ${WORKDIR}/logs/example_${example_id} ${WORKDIR}/tests
echo "[logging]" > ${HOME}/.nipype/nipype.cfg
echo "workflow_level = DEBUG" >> ${HOME}/.nipype/nipype.cfg
echo "interface_level = DEBUG" >> ${HOME}/.nipype/nipype.cfg
- echo "filemanip_level = DEBUG" >> ${HOME}/.nipype/nipype.cfg
+ echo "utils_level = DEBUG" >> ${HOME}/.nipype/nipype.cfg
echo "log_to_file = true" >> ${HOME}/.nipype/nipype.cfg
echo "log_directory = ${WORKDIR}/logs/example_${example_id}" >> ${HOME}/.nipype/nipype.cfg

echo '[execution]' >> ${HOME}/.nipype/nipype.cfg
echo 'crashfile_format = txt' >> ${HOME}/.nipype/nipype.cfg

if [[ "${NIPYPE_RESOURCE_MONITOR:-0}" == "1" ]]; then
echo 'resource_monitor = true' >> ${HOME}/.nipype/nipype.cfg
echo 'resource_monitor_frequency = 3' >> ${HOME}/.nipype/nipype.cfg
fi

# Set up coverage
export COVERAGE_FILE=${WORKDIR}/tests/.coverage.${example_id}
if [ "$2" == "MultiProc" ]; then
@@ -25,8 +33,10 @@ fi
coverage run /src/nipype/tools/run_examples.py $@
exit_code=$?

if [[ "${NIPYPE_RESOURCE_MONITOR:-0}" == "1" ]]; then
cp resource_monitor.json 2>/dev/null ${WORKDIR}/logs/example_${example_id}/ || :
fi
# Collect crashfiles and generate xml report
coverage xml -o ${WORKDIR}/tests/smoketest_${example_id}.xml
- find /work -name "crash-*" -maxdepth 1 -exec mv {} ${WORKDIR}/crashfiles/ \;
+ find /work -maxdepth 1 -name "crash-*" -exec mv {} ${WORKDIR}/crashfiles/ \;
exit $exit_code

20 changes: 6 additions & 14 deletions docker/files/run_pytests.sh
@@ -17,28 +17,20 @@ echo '[logging]' > ${HOME}/.nipype/nipype.cfg
echo 'log_to_file = true' >> ${HOME}/.nipype/nipype.cfg
echo "log_directory = ${WORKDIR}/logs/py${PYTHON_VERSION}" >> ${HOME}/.nipype/nipype.cfg

- # Enable profile_runtime tests only for python 2.7
- if [[ "${PYTHON_VERSION}" -lt "30" ]]; then
-     echo '[execution]' >> ${HOME}/.nipype/nipype.cfg
-     echo 'profile_runtime = true' >> ${HOME}/.nipype/nipype.cfg
+ echo '[execution]' >> ${HOME}/.nipype/nipype.cfg
+ echo 'crashfile_format = txt' >> ${HOME}/.nipype/nipype.cfg
+
+ if [[ "${NIPYPE_RESOURCE_MONITOR:-0}" == "1" ]]; then
+     echo 'resource_monitor = true' >> ${HOME}/.nipype/nipype.cfg
fi

# Run tests using pytest
export COVERAGE_FILE=${WORKDIR}/tests/.coverage.py${PYTHON_VERSION}
py.test -v --junitxml=${WORKDIR}/tests/pytests_py${PYTHON_VERSION}.xml --cov nipype --cov-config /src/nipype/.coveragerc --cov-report xml:${WORKDIR}/tests/coverage_py${PYTHON_VERSION}.xml ${TESTPATH}
exit_code=$?

- # Workaround: run here the profiler tests in python 3
- if [[ "${PYTHON_VERSION}" -ge "30" ]]; then
-     echo '[execution]' >> ${HOME}/.nipype/nipype.cfg
-     echo 'profile_runtime = true' >> ${HOME}/.nipype/nipype.cfg
-     export COVERAGE_FILE=${WORKDIR}/tests/.coverage.py${PYTHON_VERSION}_extra
-     py.test -v --junitxml=${WORKDIR}/tests/pytests_py${PYTHON_VERSION}_extra.xml --cov nipype --cov-report xml:${WORKDIR}/tests/coverage_py${PYTHON_VERSION}_extra.xml /src/nipype/nipype/interfaces/tests/test_runtime_profiler.py /src/nipype/nipype/pipeline/plugins/tests/test_multiproc*.py
-     exit_code=$(( $exit_code + $? ))
- fi

# Collect crashfiles
- find ${WORKDIR} -name "crash-*" -maxdepth 1 -exec mv {} ${WORKDIR}/crashfiles/ \;
+ find ${WORKDIR} -maxdepth 1 -name "crash-*" -exec mv {} ${WORKDIR}/crashfiles/ \;

echo "Unit tests finished with exit code ${exit_code}"
exit ${exit_code}
2 changes: 1 addition & 1 deletion nipype/info.py
@@ -159,7 +159,7 @@ def get_nipype_gitversion():
'doc': ['Sphinx>=1.4', 'matplotlib', 'pydotplus', 'pydot>=1.2.3'],
'tests': TESTS_REQUIRES,
'nipy': ['nitime', 'nilearn', 'dipy', 'nipy', 'matplotlib'],
- 'profiler': ['psutil'],
+ 'profiler': ['psutil>=5.0'],
'duecredit': ['duecredit'],
'xvfbwrapper': ['xvfbwrapper'],
'pybids' : ['pybids']