Run a single test

First, set the path of the examples folder:

In [1]:
EXAMPLES_FOLDER = "../examples"

… where you can find the following files:

In [2]:
import os, pprint
[f for f in os.listdir(EXAMPLES_FOLDER) if not f.startswith('.')]
Out[2]:
['change_case',
 'fluxomics_stationary',
 'multivariate',
 'sacurine',
 'workflow-test-suite-full.yml',
 'workflow-test-suite-min.yml',
 'workflow-test-suite.yml',
 'workflows.json']

Consider the definition file workflow-test-suite-min.yml (steps [3-4]), which contains two workflow tests, named change_case and multivariate respectively:

In [3]:
suite_conf_filename = os.path.join(EXAMPLES_FOLDER, "workflow-test-suite-min.yml")
In [4]:
import yaml, json
with open(suite_conf_filename, "r") as fp:
    data = yaml.safe_load(fp)  # safe_load avoids constructing arbitrary Python objects from YAML tags
    print(json.dumps(data, indent=4))
{
    "enable_logger": false,
    "workflows": {
        "change_case": {
            "expected": {
                "OutputText": "change_case/expected_output"
            },
            "inputs": {
                "InputText": "change_case/input"
            },
            "file": "change_case/workflow.ga"
        },
        "multivariate": {
            "expected": {
                "variableMetadata_out": "multivariate/variableMetadata_out",
                "sampleMetadata_out": "multivariate/sampleMetadata_out"
            },
            "inputs": {
                "DataMatrix": "multivariate/dataMatrix.tsv",
                "SampleMetadata": "multivariate/sampleMetadata.tsv",
                "VariableMetadata": "multivariate/variableMetadata.tsv"
            },
            "params": {
                "3": {
                    "predI": "1",
                    "respC": "gender",
                    "orthoI": "NA",
                    "testL": "FALSE"
                }
            },
            "file": "multivariate/workflow.ga"
        }
    }
}
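
Since the loaded configuration is just a nested dictionary, you can quickly inspect which workflow tests the suite defines; a small sketch using the data object loaded above:

# list the names of the workflow tests defined in the suite
print(list(data["workflows"].keys()))   # expected: ['change_case', 'multivariate']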

As with a whole test suite (see example 2), we need to load the test configuration into a Python object. The class specialized for representing a test configuration in wft4galaxy is wft4galaxy.core.WorkflowTestCase (step [5]):

In [5]:
from wft4galaxy.core import WorkflowTestCase

We can create the class instance using its static load method:

In [6]:
test_case = WorkflowTestCase.load(suite_conf_filename, workflow_test_name="change_case")
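
The same method can load any other test defined in the suite; for instance, a sketch (not executed here) loading the multivariate test from the same file:

# load the 'multivariate' test defined in the same suite file (sketch, not run in this notebook)
multivariate_test_case = WorkflowTestCase.load(suite_conf_filename, workflow_test_name="multivariate")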

Having the test_case object, we can simply run the test that it represents by calling its run method:

In [7]:
test_result = test_case.run(enable_logger=True)
2017-03-30 13:50:18,052 [wft4galaxy] [ INFO]  Create a history '_WorkflowTestHistory_0b688d4c-153f-11e7-87e4-a45e60c4fc6b' (id: u'd3a4d6a5256f2d9a')
2017-03-30 13:50:19,480 [wft4galaxy] [ INFO]  Workflow '_WorkflowTest_Change Case (imported from API)' (id: d3a4d6a5256f2d9a) running ...
2017-03-30 13:50:24,497 [wft4galaxy] [ INFO]  waiting for datasets
2017-03-30 13:50:24,777 [wft4galaxy] [ INFO]  f711c56f400864d1: new
2017-03-30 13:50:25,697 [wft4galaxy] [ INFO]  f711c56f400864d1: new
2017-03-30 13:50:28,410 [wft4galaxy] [ INFO]  f711c56f400864d1: queued
2017-03-30 13:50:29,249 [wft4galaxy] [ INFO]  f711c56f400864d1: ok
2017-03-30 13:50:29,754 [wft4galaxy] [ INFO]  Workflow '_WorkflowTest_Change Case (imported from API)' (id: d3a4d6a5256f2d9a) executed
2017-03-30 13:50:29,758 [wft4galaxy] [ INFO]  Checking test output: ...
2017-03-30 13:50:29,920 [wft4galaxy] [ INFO]  Checking test output: DONE

test_result is an instance of WorkflowTestResult, a class which holds information about an executed workflow test, such as its ID (dynamically generated when the test starts), the workflow definition and the results of the comparator function for each step:

In [8]:
print("Test %s:\n\t - workflow: [%s] \n\t - results: %r" % \
      (test_result.test_id, test_result.workflow.name, test_result.results))
Test 0b688d4c-153f-11e7-87e4-a45e60c4fc6b:
         - workflow: [_WorkflowTest_Change Case (imported from API)]
         - results: {u'OutputText': True}
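
Since results maps every expected output name to a boolean, you can, for example, report the outcome of each compared output; a short sketch based only on the attributes shown above:

# print a pass/fail line for every compared output
for output_name, matched in test_result.results.items():
    print("%s: %s" % (output_name, "OK" if matched else "FAILED"))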

For example, you can extract the list of tested outputs:

In [9]:
print "Outputs: ", test_result.results.keys()
Outputs:  [u'OutputText']

… or explicitly check whether the test passed or failed as a whole (i.e., all actual outputs match the expected ones):

In [10]:
test_result.passed(), test_result.failed()
Out[10]:
(True, False)

… or check whether a specific actual output matches the expected one:

In [11]:
test_result.check_output("OutputText")
Out[11]:
True
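
These methods make it straightforward to turn the test into an automated check; for example, a minimal sketch that fails loudly when any expected output does not match:

# abort (e.g., in a CI script) if the workflow test did not pass
assert test_result.passed(), "Workflow test %s failed" % test_result.test_id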