Filter Regression Tests


When you build interfaces in the Iguana Translator, you’re effectively testing as you go. The Translator runs your sample data through your scripts in real time, so you know instantly whether your code works or has an error. Once you’ve seen your interface perform correctly against all your sample data, you can put it into production with confidence.

But what about when you make a change to an interface, or a change to your environment? You want to be sure your change hasn’t broken your interface, and you especially want to be sure your interface is producing the same results. This is what the regression testing app is for.

The app tests the message filter in an Iguana channel. The first time you run the app, it runs your full set of sample messages through your filter and saves the results on disk. Every time you run it after that, it does the same thing and compares the current results with the saved ones. If there’s any difference between an expected result and an actual one, the app reports a test failure.
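
If you like to think in code, the app’s core loop amounts to something like the Lua sketch below. The function names and file layout here are hypothetical, made up for illustration; they are not the app’s actual internals.

-- A minimal sketch of the run-and-compare loop (hypothetical helpers,
-- not the app's real implementation).
local function runRegression(Messages, Filter, ExpectedDir)
   local Failures = {}
   for i, Msg in ipairs(Messages) do
      local Actual = Filter(Msg)
      local Path = ExpectedDir..'/expected_'..i..'.txt'
      local F = io.open(Path, 'r')
      if F then
         -- Later runs: compare against the saved baseline.
         local Expected = F:read('*a')
         F:close()
         if Expected ~= Actual then
            Failures[#Failures + 1] = {test = i, expected = Expected, actual = Actual}
         end
      else
         -- First run: save the actual result as the expected baseline.
         local Out = assert(io.open(Path, 'w'))
         Out:write(Actual)
         Out:close()
      end
   end
   return Failures
end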

How to get started
If you are already using the Channel Manager and the iNTERFACEWARE repo on GitHub, you can install the regression testing app with a click. Otherwise, you can use this zipfile: Regressions_From_HTTPS

If you’re using the zip, follow these steps:

  1. Create a new channel, with a “From HTTPS” source and a “To Channel” destination. For the URL, use regression_tests.
  2. Click Edit script, then mouse over the arrow beside main, choose Import Project, and upload the zip.

Inside the new project, edit regressions.config (see the screenshot below) to set the value of config.Worktank. This is the folder where the app will save the expected test results, and it must exist before you run the app. On a Windows computer, the line might look something like this:

config.Worktank = "C:\\Good_Folder\\Iguana_Expected"

(Note the doubled backslashes: in a Lua string, a single backslash starts an escape sequence, so a path written as "C:\Good_Folder" won’t survive intact. Forward slashes also work on Windows.)

[Screenshot: regression_config]
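
Because the app won’t create this folder for you, it’s worth a quick sanity check before you start the channel. Here’s a small sketch that uses only standard Lua (the probe file name is just an example, and this check isn’t part of the app itself):

-- Confirm that config.Worktank exists and is writable by creating
-- and removing a throwaway file.
local Probe = config.Worktank..'/write_test.tmp'
local F = io.open(Probe, 'w')
if F then
   F:close()
   os.remove(Probe)
else
   error('Cannot write to '..config.Worktank..' - create the folder first.')
end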

Save a milestone and start the channel. After this, you should be able to visit the app’s main page at the regression_tests URL you configured. You will see a list of channels that have both message filters and sample data.


Click one of the channels, and the app will tell you that you need to generate a set of expected results before it can run tests. Click that link. After a pause, you should see your first set of results. Every test will show a pass, because the expected results and the actual ones were generated at the same time.

[Screenshot: regressions_results]

If there are any test failures, they’ll show at the top of the list.

[Screenshot: regressions_failure]

If you click the “Inspect” link for an individual test, you’ll see the expected and actual results on the same screen. If the test passed, they’ll be identical. If it failed, you’ll see the differences between the actual and expected results highlighted in red and green.
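
Conceptually, the comparison behind that display is a line-by-line diff. Here’s a toy Lua version to illustrate the idea; the app’s real diff and highlighting are more sophisticated:

-- Toy line-by-line comparison of expected vs. actual output.
local function diffLines(Expected, Actual)
   -- Split a string into a table of lines.
   local function split(S)
      local Lines = {}
      for Line in (S..'\n'):gmatch('(.-)\n') do
         Lines[#Lines + 1] = Line
      end
      return Lines
   end
   local E, A = split(Expected), split(Actual)
   local Diffs = {}
   for i = 1, math.max(#E, #A) do
      if E[i] ~= A[i] then
         Diffs[#Diffs + 1] = {line = i, expected = E[i], actual = A[i]}
      end
   end
   return Diffs
end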

If a test fails because your interface is now genuinely supposed to produce different output, you can edit the expected results right on this screen. Click in the text of the expected result, edit it, and as soon as you click outside the text, the changed result will be updated on disk.
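
Under the hood, saving an edit amounts to overwriting the baseline file on disk, something like this hypothetical handler:

-- Hypothetical sketch of saving an edited expected result.
local function saveExpected(ExpectedDir, TestId, NewText)
   local Path = ExpectedDir..'/expected_'..TestId..'.txt'
   local F = assert(io.open(Path, 'w'))
   F:write(NewText)
   F:close()
end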

Try editing some expected results to change passes into failures and vice versa. And once you’ve got your tests running, make sure to re-run them frequently.

