This quest is nearly over, so let's recap what has been done so far. Given the task "test the Omni Converter Challenge service," I first read its documentation and performed exploratory testing to build an understanding of the system (it stores and retrieves files containing Directed Acyclic Graphs via a RESTful API). Based on that understanding, I created a regression test plan, then built a SoapUI test suite that automates it. Now it's time to execute the test suite and review the results.
The first few sets of tests are simple and do not turn up any errors.
From left to right, this image shows my test suite, the test case I executed, and one of the individual tests. When I execute the test case, SoapUI runs all three tests in sequence, and each test succeeds only if all of its assertions hold. For the selected test, the assertions were that the HTTP response should have a 401 status code (indicating the request was not authorized) and that the response body should contain the text "errorCode: 1" (the service's own code for the same failure).
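To make the shape of those checks concrete outside of SoapUI, here's a minimal sketch in Python. The base URL and the /files endpoint are assumptions on my part for illustration; the real values are configured in the SoapUI project.

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed; the real URL lives in the SoapUI project

# Deliberately send the request without any credentials.
response = requests.get(f"{BASE_URL}/files")

# Assertion 1: an unauthorized request should be rejected with HTTP 401.
assert response.status_code == 401, f"expected 401, got {response.status_code}"

# Assertion 2: the body should carry the service's matching error code.
assert "errorCode: 1" in response.text, "expected errorCode: 1 in response body"
```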
As I move on to more complicated test cases, errors start appearing.
What we see here is that SoapUI executed the tests in sequence until an assertion failed, then stopped. So what went wrong? Looking at the log in the bottom-right corner, we can see that both assertions for the fourth test failed: the HTTP response had a 500 status code (it should have been 400) and the response body didn't contain the error code we were looking for. Opening up the test case and looking at the raw response data, it turns out the response contained error code 100 (unknown system error), which is what OCC returns when it has no idea what went wrong. I note down these details for a later bug report and disable the test so that SoapUI can move past it.
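To show what that triage step amounts to, here's a hedged sketch of replaying the failing request by hand. The payload is a placeholder (the real request is defined in the SoapUI test; I'm only guessing that an invalid graph is the trigger), but the comments mirror the expected-versus-actual gap described above.

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed, as before

# Hypothetical payload: a self-loop makes the graph cyclic, so this input
# should be rejected. The actual failing request lives in the SoapUI test.
payload = {"nodes": ["a"], "edges": [["a", "a"]]}

response = requests.post(f"{BASE_URL}/files", json=payload)

# Expected by the test: HTTP 400 with a specific error code in the body.
# Actually observed:    HTTP 500 with error code 100 (unknown system error).
print(response.status_code)
print(response.text)
```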
Repeating this process through every test case, I ended up finding a few new bugs and many duplicates. Combined with the findings from my earlier exploratory testing, that makes a total of six unique bugs.
Time to write some bug reports. I set up a JIRA trial instance to store them, and here's an example of how it turned out:
This is a pretty bare-bones report: what's wrong, how to make it happen, and what should have happened, plus a screenshot with any details that I didn't want to type out (this one shows the raw HTTP request and the raw response).
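As a rough sketch, the skeleton of such a report looks something like this; the field names and example values are illustrative, not prescriptive:

```
Summary:    Storing an invalid DAG returns 500 instead of 400
Steps:      1. <how to make it happen>
            2. ...
Expected:   HTTP 400 with the documented error code in the body
Actual:     HTTP 500 with errorCode 100 (unknown system error)
Attachment: screenshot of the raw HTTP request and response
```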
Now, normally I would add more detail: what my environment was, what exactly to do in each of the steps to reproduce, and so on. But without a target audience, I'm struggling to decide how much detail is appropriate. This would be a good time to move on to the retrospective and figure out some answers.