The idea behind regression testing is to find bugs in the new version of a system that were not present in older versions of that same system. Since the requirements of a new version are usually very similar to those of the old version, regression testing can often be done with a single set of tests reused across many versions, so it's worth investing some extra time and effort up-front to make those tests the best they can be. An extra hour spent automating a test today might save 30 minutes in every subsequent run, quickly paying back that initial cost.
So, if my goal is to regression test the Omni Converter service, then I should be thinking about automation. I know from exploratory testing that I need to send HTTP requests, and that those requests have predictable text responses. A simple way to automate those tests would be to write a shell script that sends a series of HTTP requests with cURL, prints their responses to a file, and runs some sort of analysis on that file to verify correct results. Let's try it.
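As a rough sketch of what that harness might look like, here's a minimal pass/fail recorder around a cURL call. The base URL, route, and expected response body are all assumptions for illustration; the real values would come from the OC documentation and exploratory testing.

```shell
#!/usr/bin/env bash
# Minimal cURL regression-test sketch. OC_BASE_URL, the /v1/documents/1
# route, and the expected body below are assumptions, not OC's real API.
BASE_URL="${OC_BASE_URL:-http://localhost:8080}"
RESULTS_FILE="results.txt"

# Compare an actual response body to the expected one and record the outcome.
check_response() {
  local name="$1" expected="$2" actual="$3"
  if [ "$actual" = "$expected" ]; then
    echo "PASS $name" >> "$RESULTS_FILE"
  else
    echo "FAIL $name (expected: $expected, got: $actual)" >> "$RESULTS_FILE"
  fi
}

# Example test (commented out so the sketch runs without a live service):
# actual="$(curl -s "$BASE_URL/v1/documents/1")"
# check_response "get-document-1" '{"id":1}' "$actual"
```

After a run, a quick `grep FAIL results.txt` tells me whether anything regressed, which is exactly the kind of cheap, repeatable check this approach is meant to buy.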
First, I need to decide what tests to write. The matrix of endpoints and methods in the Omni Converter documentation covers most of what OC does, so I'll use that as my basis:
My first thought for organizing the testplan was to simply replace each cell in the matrix with an appropriate test... but that isn't going to work. The behaviours described in each cell of this matrix are too general. For example, there are six different file formats with which to create and store a graph using the POST /v1/documents route, so I'll need at least six tests to cover just that one cell.
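To make that expansion concrete, here's how one cell might fan out into several tests: the same graph, POSTed to /v1/documents once per supported format. The format names and the Content-Type mapping are placeholders I've invented; the real list of six formats comes from the OC documentation.

```shell
#!/usr/bin/env bash
# One matrix cell becomes several tests: POST the same graph fixture in
# each supported format. FORMATS and the Content-Type scheme are
# hypothetical placeholders, not OC's actual format list.
BASE_URL="${OC_BASE_URL:-http://localhost:8080}"
FORMATS=(json xml dot gml graphml csv)   # hypothetical format names

for fmt in "${FORMATS[@]}"; do
  fixture="fixtures/sample.$fmt"
  echo "POST /v1/documents as $fmt (fixture: $fixture)"
  # curl -s -X POST "$BASE_URL/v1/documents" \
  #      -H "Content-Type: application/$fmt" \
  #      --data-binary "@$fixture"
done
```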
To keep things manageable, I will instead create a separate matrix for each endpoint, like so:
For posterity, you can find the full testplan here.
While this testplan certainly isn't exhaustive, I think it covers most of OC's expected behaviours (and single-cause errors). I'm feeling diminishing returns on additional tests, so it's time to stop planning and get to the fun stuff: automation.