Run a test suite programmatically
Consider the examples folder:
In [1]:
EXAMPLES_FOLDER = "../examples"
… where you can find the following files:
In [2]:
import os
[f for f in os.listdir(EXAMPLES_FOLDER) if not f.startswith('.')]
Out[2]:
['change_case',
'fluxomics_stationary',
'multivariate',
'sacurine',
'workflow-test-suite-full.yml',
'workflow-test-suite-min.yml',
'workflow-test-suite.yml',
'workflows.json']
To run a test suite you need a test suite definition file (see here for more details), such as the workflow-test-suite-min.yml file that you can find in EXAMPLES_FOLDER:
In [3]:
suite_conf_filename = os.path.join(EXAMPLES_FOLDER, "workflow-test-suite-min.yml")
The content of the definition file is:
In [4]:
import yaml, json
with open(suite_conf_filename, "r") as fp:
    data = yaml.safe_load(fp)  # safe_load avoids executing arbitrary YAML tags
print(json.dumps(data, indent=4))
{
"enable_logger": false,
"workflows": {
"change_case": {
"expected": {
"OutputText": "change_case/expected_output"
},
"inputs": {
"InputText": "change_case/input"
},
"file": "change_case/workflow.ga"
},
"multivariate": {
"expected": {
"variableMetadata_out": "multivariate/variableMetadata_out",
"sampleMetadata_out": "multivariate/sampleMetadata_out"
},
"inputs": {
"DataMatrix": "multivariate/dataMatrix.tsv",
"SampleMetadata": "multivariate/sampleMetadata.tsv",
"VariableMetadata": "multivariate/variableMetadata.tsv"
},
"params": {
"3": {
"predI": "1",
"respC": "gender",
"orthoI": "NA",
"testL": "FALSE"
}
},
"file": "multivariate/workflow.ga"
}
}
}
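As the JSON dump above shows, the parsed definition is a plain Python dict, so you can inspect it programmatically before handing it to wft4galaxy. The sketch below is independent of wft4galaxy and uses a literal that mirrors the "change_case" entry printed above:

```python
# The parsed suite definition is an ordinary dict; this literal mirrors
# the "change_case" entry from the dump above.
data = {
    "enable_logger": False,
    "workflows": {
        "change_case": {
            "file": "change_case/workflow.ga",
            "inputs": {"InputText": "change_case/input"},
            "expected": {"OutputText": "change_case/expected_output"},
        },
    },
}

# Enumerate each workflow test with its declared inputs and expected outputs
for name, conf in data["workflows"].items():
    print("{0}: inputs={1}, expected={2}".format(
        name, sorted(conf["inputs"]), sorted(conf["expected"])))
```

This kind of pre-flight inspection can catch, e.g., a workflow entry with no expected outputs before any Galaxy interaction takes place.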
To run a test suite programmatically, you need an instance of wft4galaxy.core.WorkflowTestSuite, which maintains the configuration of the whole test suite. You can instantiate it directly from the test definition file above (cell [4]) by means of the class method load (steps [5-6]):
In [5]:
from wft4galaxy.core import WorkflowTestSuite
In [6]:
suite = WorkflowTestSuite.load(suite_conf_filename)
The property workflow_tests of the suite configuration object contains a dictionary which maps the name of each workflow test to its configuration; notice that the configuration of a workflow test is wrapped by a wft4galaxy.core.WorkflowTestCase instance (step [7]).
In [7]:
for wft_name, wft_config in suite.workflow_tests.items():
    print("{0} ==> {1}\n".format(wft_name, wft_config))
change_case ==> WorkflowTestConfig: name=change_case, file=change_case/workflow.ga, inputs=[InputText], expected_outputs=[OutputText]
multivariate ==> WorkflowTestConfig: name=multivariate, file=multivariate/workflow.ga, inputs=[DataMatrix,SampleMetadata,VariableMetadata], expected_outputs=[variableMetadata_out,sampleMetadata_out]
Now, with the suite definition loaded, we can run the test suite by calling the run method of the suite instance (step [8]) and collect its results:
In [8]:
test_results = suite.run(enable_logger=True)
Workflow Test: 'change_case' ... 2017-03-30 11:48:34,390 [wft4galaxy] [ INFO] Create a history '_WorkflowTestHistory_0a12a39e-152e-11e7-875d-a45e60c4fc6b' (id: u'0aab0e4c25198ad8')
2017-03-30 11:48:35,583 [wft4galaxy] [ INFO] Workflow '_WorkflowTest_Change Case (imported from API)' (id: 0aab0e4c25198ad8) running ...
2017-03-30 11:48:37,457 [wft4galaxy] [ INFO] waiting for datasets
2017-03-30 11:48:40,115 [wft4galaxy] [ INFO] 5148364840389881: new
2017-03-30 11:48:41,313 [wft4galaxy] [ INFO] 5148364840389881: queued
2017-03-30 11:48:42,371 [wft4galaxy] [ INFO] 5148364840389881: queued
2017-03-30 11:48:43,015 [wft4galaxy] [ INFO] 5148364840389881: running
2017-03-30 11:48:43,734 [wft4galaxy] [ INFO] 5148364840389881: running
2017-03-30 11:48:44,614 [wft4galaxy] [ INFO] 5148364840389881: running
2017-03-30 11:48:45,510 [wft4galaxy] [ INFO] 5148364840389881: ok
2017-03-30 11:48:46,011 [wft4galaxy] [ INFO] Workflow '_WorkflowTest_Change Case (imported from API)' (id: 0aab0e4c25198ad8) executed
2017-03-30 11:48:46,013 [wft4galaxy] [ INFO] Checking test output: ...
2017-03-30 11:48:46,146 [wft4galaxy] [ INFO] Checking test output: DONE
ok
Workflow Test: 'multivariate' ... 2017-03-30 11:48:47,557 [wft4galaxy] [ INFO] Create a history '_WorkflowTestHistory_11e13235-152e-11e7-9e12-a45e60c4fc6b' (id: u'32ff5cf1b96c1df7')
2017-03-30 11:48:58,185 [wft4galaxy] [ INFO] Workflow '_WorkflowTest_Multivariate (imported from API)' (id: 32ff5cf1b96c1df7) running ...
2017-03-30 11:49:03,892 [wft4galaxy] [ INFO] waiting for datasets
2017-03-30 11:49:04,361 [wft4galaxy] [ INFO] 2d0caaef345630d8: queued
2017-03-30 11:49:04,738 [wft4galaxy] [ INFO] 72e7b234f232ef23: queued
2017-03-30 11:49:04,879 [wft4galaxy] [ INFO] dbc510811ab4034e: queued
2017-03-30 11:49:05,351 [wft4galaxy] [ INFO] b7e089dc153354a5: queued
2017-03-30 11:49:05,980 [wft4galaxy] [ INFO] 2d0caaef345630d8: queued
2017-03-30 11:49:06,270 [wft4galaxy] [ INFO] 72e7b234f232ef23: queued
2017-03-30 11:49:06,598 [wft4galaxy] [ INFO] dbc510811ab4034e: queued
2017-03-30 11:49:06,741 [wft4galaxy] [ INFO] b7e089dc153354a5: queued
2017-03-30 11:49:07,383 [wft4galaxy] [ INFO] 2d0caaef345630d8: queued
2017-03-30 11:49:07,610 [wft4galaxy] [ INFO] 72e7b234f232ef23: queued
2017-03-30 11:49:07,756 [wft4galaxy] [ INFO] dbc510811ab4034e: queued
2017-03-30 11:49:07,905 [wft4galaxy] [ INFO] b7e089dc153354a5: queued
2017-03-30 11:49:08,558 [wft4galaxy] [ INFO] 2d0caaef345630d8: queued
2017-03-30 11:49:08,709 [wft4galaxy] [ INFO] 72e7b234f232ef23: queued
2017-03-30 11:49:08,871 [wft4galaxy] [ INFO] dbc510811ab4034e: queued
2017-03-30 11:49:08,990 [wft4galaxy] [ INFO] b7e089dc153354a5: queued
2017-03-30 11:49:09,729 [wft4galaxy] [ INFO] 2d0caaef345630d8: queued
2017-03-30 11:49:10,058 [wft4galaxy] [ INFO] 72e7b234f232ef23: queued
2017-03-30 11:49:10,316 [wft4galaxy] [ INFO] dbc510811ab4034e: queued
2017-03-30 11:49:10,561 [wft4galaxy] [ INFO] b7e089dc153354a5: queued
2017-03-30 11:49:11,336 [wft4galaxy] [ INFO] 2d0caaef345630d8: ok
2017-03-30 11:49:11,467 [wft4galaxy] [ INFO] 72e7b234f232ef23: ok
2017-03-30 11:49:11,608 [wft4galaxy] [ INFO] dbc510811ab4034e: ok
2017-03-30 11:49:11,762 [wft4galaxy] [ INFO] b7e089dc153354a5: ok
2017-03-30 11:49:12,268 [wft4galaxy] [ INFO] Workflow '_WorkflowTest_Multivariate (imported from API)' (id: 32ff5cf1b96c1df7) executed
2017-03-30 11:49:12,271 [wft4galaxy] [ INFO] Checking test output: ...
2017-03-30 11:49:12,470 [wft4galaxy] [ INFO] Checking test output: DONE
ok
----------------------------------------------------------------------
Ran 2 tests in 39.369s
OK
test_results is a list of WorkflowTestResult instances, a class which holds various pieces of information about an executed workflow test, such as its ID (dynamically generated when the test starts), the workflow definition, and the result of the comparator function for each expected output (step [9]):
In [9]:
for r in test_results:
    print("Test %s:\n\t - workflow: [%s] \n\t - results: %r" % (r.test_id, r.workflow.name, r.results))
Test 0a12a39e-152e-11e7-875d-a45e60c4fc6b:
- workflow: [_WorkflowTest_Change Case (imported from API)]
- results: {u'OutputText': True}
Test 11e13235-152e-11e7-9e12-a45e60c4fc6b:
- workflow: [_WorkflowTest_Multivariate (imported from API)]
- results: {u'variableMetadata_out': True, u'sampleMetadata_out': True}
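A common follow-up is to scan the whole result list and report which tests did not pass. The helper below is a hypothetical sketch: it assumes only that each result exposes a `test_id` attribute and a `passed()` method, as WorkflowTestResult does; the `StubResult` class stands in for real results so the example is self-contained.

```python
class StubResult(object):
    """Minimal stand-in for WorkflowTestResult (illustration only)."""

    def __init__(self, test_id, ok):
        self.test_id = test_id
        self._ok = ok

    def passed(self):
        return self._ok


def failed_tests(results):
    """Return the ids of the tests that did not pass."""
    return [r.test_id for r in results if not r.passed()]


results = [StubResult("change_case", True), StubResult("multivariate", False)]
print(failed_tests(results))  # -> ['multivariate']
```

The same `failed_tests` helper works unchanged on the real `test_results` list returned by `suite.run(...)`.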
Given a WorkflowTestResult
instance:
In [10]:
a_result = test_results[0]
the methods available for inspecting the results of the workflow test are listed below:
In [11]:
help(a_result)
Help on WorkflowTestResult in module wft4galaxy.core object:
class WorkflowTestResult(__builtin__.object)
| Class for representing the result of a workflow test.
|
| Methods defined here:
|
| __init__(self, test_id, workflow, inputs, outputs, output_history, expected_outputs, missing_tools, results, output_file_map, output_folder='results', errors=None)
|
| __repr__(self)
|
| __str__(self)
|
| check_output(self, output)
| Assert whether the actual `output` is equal to the expected accordingly
| to its associated `comparator` function.
|
| :type output: str or dict
| :param output: output name
|
| :rtype: bool
| :return: ``True`` if the test is passed; ``False`` otherwise.
|
| check_outputs(self)
| Return a map of pairs <OUTPUT_NAME>:<RESULT>, where <RESULT> is ``True``
| if the actual `OUTPUT_NAME` is equal to the expected accordingly
| to its associated `comparator` function.
|
| :rtype: dict
| :return: map of output results
|
| failed(self)
| Assert whether the test is failed.
|
| :rtype: bool
| :return: ``True`` if the test is failed; ``False`` otherwise.
|
| passed(self)
| Assert whether the test is passed.
|
| :rtype: bool
| :return: ``True`` if the test is passed; ``False`` otherwise.
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
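The contract described in the help text above can be summarized with a small stand-in class: `check_outputs` maps each output name to the boolean produced by its comparator, `check_output` looks up a single name, and `passed`/`failed` aggregate those booleans. The `FakeResult` name and its simplified constructor are illustrative only, not the real WorkflowTestResult implementation.

```python
class FakeResult(object):
    """Illustrative stand-in showing the WorkflowTestResult contract."""

    def __init__(self, results):
        # ``results`` maps <OUTPUT_NAME> to its comparator's boolean
        self.results = dict(results)

    def check_output(self, output):
        # result of the comparison for a single output name
        return self.results[output]

    def check_outputs(self):
        # map of pairs <OUTPUT_NAME>:<RESULT>
        return dict(self.results)

    def passed(self):
        # the test passes only if every output comparison succeeded
        return all(self.results.values())

    def failed(self):
        return not self.passed()


r = FakeResult({"OutputText": True})
print(r.check_outputs(), r.passed(), r.failed())
```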
For example, you can extract the list of tested outputs:
In [12]:
print("Outputs:", list(a_result.results.keys()))
Outputs: ['OutputText']
… or explicitly check whether the test globally passed or failed (i.e., whether all actual outputs are equal to the expected ones):
In [13]:
a_result.passed(), a_result.failed()
Out[13]:
(True, False)
… or check whether a specific actual output matches the expected one:
In [14]:
a_result.check_output("OutputText")
Out[14]:
True
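When a test fails, you typically want the names of just the offending outputs. The snippet below sketches that filtering; `result_map` is a hypothetical stand-in for the `results` dictionary of a WorkflowTestResult, with one comparison deliberately set to `False` for illustration.

```python
# Stand-in for ``a_result.results``: one output comparison failed.
result_map = {"variableMetadata_out": True, "sampleMetadata_out": False}

# Keep only the outputs whose comparator returned False.
failing = sorted(name for name, ok in result_map.items() if not ok)
print(failing)  # -> ['sampleMetadata_out']
```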